Deployment with existing redis fails #730

Closed
opened 2024-11-25 19:07:44 +00:00 by pfaelzerchen · 6 comments
pfaelzerchen commented 2024-11-25 19:07:44 +00:00 (Migrated from gitea.com)

I tried to migrate to a separate redis-sentinel deployment and stop using the subchart. Therefore, I deactivated both the `redis-cluster` and the `redis` subchart and added `session.PROVIDER_CONFIG`, `cache.HOST`, and `queue.CONN_STR` to the configuration, using a `redis+sentinel` connection string from the cheat sheet, as well as:

  session:
    PROVIDER: redis
  cache:
    ENABLED: true
    ADAPTER: redis
  queue:
    TYPE: redis

However, Gitea keeps using `redis+cluster://:@gitea-redis-cluster-headless.gitea.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&` regardless of my configuration. And since this service is gone, Gitea won't be able to start.

The connection strings are configured via an additional secret because they include a password.
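
For illustration, the inline equivalent under `gitea.config` would look roughly like this (a sketch following the chart's standard layout; the password below is only a placeholder, which is why the real values live in a secret):

```yaml
gitea:
  config:
    session:
      PROVIDER: redis
      PROVIDER_CONFIG: redis+sentinel://:password@redis-headless.gitea.svc.cluster.local:26379/0?pool_size=100&idle_timeout=180s&master_name=gitea-sentinel
    cache:
      ENABLED: true
      ADAPTER: redis
      HOST: redis+sentinel://:password@redis-headless.gitea.svc.cluster.local:26379/0?pool_size=100&idle_timeout=180s&master_name=gitea-sentinel
    queue:
      TYPE: redis
      CONN_STR: redis+sentinel://:password@redis-headless.gitea.svc.cluster.local:26379/0?pool_size=100&idle_timeout=180s&master_name=gitea-sentinel
```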

Did I miss something or is there a bug in the chart?

justusbunsi commented 2024-11-25 19:18:15 +00:00 (Migrated from gitea.com)

Sounds like https://gitea.com/gitea/helm-chart/issues/356 to me, regarding removed settings not being cleaned up from `app.ini`. Could you run `helm template` with your values and check the rendered resources for the incorrect value?

pfaelzerchen commented 2024-11-27 08:55:22 +00:00 (Migrated from gitea.com)

After running `helm template`, the rendered secret `git-inline-config` indeed shows the old values instead of the newly configured ones. It makes no difference whether the new configuration is provided via a secret or in the `values.yaml` itself; it is ignored.

justusbunsi commented 2024-11-27 21:28:20 +00:00 (Migrated from gitea.com)

Could you share your `values.yaml`? I'd like to investigate more deeply.

pfaelzerchen commented 2024-11-28 11:17:11 +00:00 (Migrated from gitea.com)

Sure. I attached my `values.yaml` and the secret with the security-related parts of the configuration.

justusbunsi commented 2024-11-28 13:56:25 +00:00 (Migrated from gitea.com)

Looking at your values, you've still enabled `redis-cluster`. If I set this to `false`, the default `app.ini` is properly replaced with the Redis Sentinel settings:

 <............>
 [session]
-PROVIDER_CONFIG = redis+cluster://:@gitea-redis-cluster-headless.gitea.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&
+PROVIDER_CONFIG = redis+sentinel://:password@redis-headless.gitea.svc.cluster.local:26379/0?pool_size=100&idle_timeout=180s&master_name=gitea-sentinel
 PROVIDER = redis

 [repository]
 ROOT = /data/git/gitea-repositories

 [cache]
-HOST = redis+cluster://:@gitea-redis-cluster-headless.gitea.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&
+HOST = redis+sentinel://:password@redis-headless.gitea.svc.cluster.local:26379/0?pool_size=100&idle_timeout=180s&master_name=gitea-sentinel
 ADAPTER = redis
+ENABLED = true

 [security]
 INTERNAL_TOKEN = <............>
@@ -42,10 +48,40 @@

 [queue]
 TYPE = redis
-CONN_STR = redis+cluster://:@gitea-redis-cluster-headless.gitea.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&
+CONN_STR = redis+sentinel://:password@redis-headless.gitea.svc.cluster.local:26379/0?pool_size=100&idle_timeout=180s&master_name=gitea-sentinel

 [metrics]
 ENABLED = false
 <............>
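
For completeness, the toggles in `values.yaml` that disable both bundled Redis deployments are (a minimal sketch; key names as used by the chart's subcharts):

```yaml
redis-cluster:
  enabled: false
redis:
  enabled: false
```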
pfaelzerchen commented 2024-11-28 19:02:48 +00:00 (Migrated from gitea.com)

Thanks for your feedback. When redeploying with redis-cluster disabled, I ran into the same problem at first. After deleting the resource group in ArgoCD (deleting the pod was not sufficient), it updated the configuration and switched to the new Redis. So it works now after all.
