Deployment with existing redis fails #730
I tried to migrate to a separate redis-sentinel deployment and stop using the subchart. I deactivated both the `redis-cluster` and `redis` subcharts and added `session.PROVIDER_CONFIG`, `cache.HOST`, and `queue.CONN_STR` to the configuration, using a redis+sentinel connection string from the cheat sheet.

However, Gitea keeps using

`redis+cluster://:@gitea-redis-cluster-headless.gitea.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&`

regardless of my configuration. And since this service is gone, Gitea won't start. The connection strings are configured via an additional secret, as they include a password.

Did I miss something, or is there a bug in the chart?
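For reference, a minimal sketch of the configuration I'm describing (the sentinel hostnames, master set name, and password are placeholders, not my real values):

```yaml
# Sketch only - hosts, master_name, and password are placeholders.
redis-cluster:
  enabled: false
redis:
  enabled: false

gitea:
  config:
    session:
      PROVIDER: redis
      PROVIDER_CONFIG: redis+sentinel://:pass@sentinel-0:26379,sentinel-1:26379/0?master_name=mymaster
    cache:
      ADAPTER: redis
      HOST: redis+sentinel://:pass@sentinel-0:26379,sentinel-1:26379/0?master_name=mymaster
    queue:
      TYPE: redis
      CONN_STR: redis+sentinel://:pass@sentinel-0:26379,sentinel-1:26379/0?master_name=mymaster
```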
Sounds like https://gitea.com/gitea/helm-chart/issues/356 to me, regarding removed settings not being cleaned up from app.ini. Could you run `helm template` with your values and check the rendered resources for the incorrect value?

After running `helm template`, the rendered secret `git-inline-config` indeed shows the old values instead of the newly configured ones. It makes no difference whether the new configuration is provided via a secret or in the values.yaml itself; it is ignored.

Could you share your values.yaml? I'd like to investigate more deeply.
Sure. I attached my values.yaml and the secret with the security-related parts of the configuration.
Looking at your values, you've still enabled `redis-cluster`. If I set this to false, the default app.ini is properly replaced with the sentinel redis settings.

Thanks for your feedback. When redeploying with `redis-cluster` disabled, I at first ran into the same problem. After deleting the resource group in ArgoCD (deleting the pod was not sufficient), it updated the configuration and switched over to the new redis. So in the end it works now.