Migration to 9.0.0 not updating cache.ADAPTER from memcache to redis-cluster #468
Hello, after upgrading from `8.x.x` to `9.0.0` I have problems: I removed `memcache` and switched to `redis-cluster`.
This is the log from `init-app-ini`:
But the values are not updated in the file `init-file-generated`.
These are my chart values (`values.yaml`):
Seeing the same issue, need to manually update the `app.ini` 😕
Sounds like either #453 or the generic one #356.
This is due to #356 and friends.
I think it should work fine for new installations but existing ones might need to set the following options explicitly until the above is fixed:
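The maintainer's original snippet is not preserved above; as an illustrative sketch only (the password `gitea`, the `gitea-redis-cluster-headless` hostname, and the connection string options are assumptions that need to be adapted to your release), the explicit settings under `gitea.config` could look roughly like this:

```yaml
gitea:
  config:
    cache:
      ADAPTER: redis-cluster
      HOST: redis+cluster://:gitea@gitea-redis-cluster-headless:6379/0?pool_size=100&idle_timeout=180s
    session:
      PROVIDER: redis-cluster
      PROVIDER_CONFIG: redis+cluster://:gitea@gitea-redis-cluster-headless:6379/0?pool_size=100&idle_timeout=180s
    queue:
      TYPE: redis
      CONN_STR: redis+cluster://:gitea@gitea-redis-cluster-headless:6379/0?pool_size=100&idle_timeout=180s
```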
Does this help to resolve the issue?
Sorry to barge into this thread, but I'm also having a minor issue with the new redis setup.
I love that we can now use our own redis setup for the cache, saving the no-longer-needed memcached pod. However, in our situation our redis deployment has authentication set up and we'd like to be able to restrict redis clients/users to a certain keyspace.
https://redis.io/docs/manual/keyspace/
https://redis.io/docs/management/security/acl/#key-permissions
It would be nice to either know what keyspace Gitea uses or even be able to set the keyspace ourselves.
If this sounds viable, should I start a new suggestion issue about it? Or, at a minimum, can the keyspace/key schema Gitea uses be documented?
Edit: I know I could go searching through the code to find the keyspace myself, but I'm a bit time-restricted at the moment. Sorry.
Scott
@smolinari Redis support was already there before this release, for both cluster and non-cluster versions.
I am not familiar with Redis auth, but the chart shouldn't limit you here. Maybe it works via additional options in the connection string, as documented here?
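For illustration only (the host, user, and password below are placeholders, not values from this thread): Gitea's redis connection string accepts credentials and extra options directly in the URI, so an authenticated cache setup might look like:

```yaml
gitea:
  config:
    cache:
      ADAPTER: redis-cluster
      # user:password pair embedded in the URI; extra options go in the query string
      HOST: redis+cluster://gitea-user:s3cr3t@redis.example.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s
```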
I think your request might be better suited to upstream Gitea than here, as the helm chart does not do anything special regarding redis auth, i.e. the change in v9.0.0 just switched the default to `redis-cluster`, but all configuration options with regard to redis are the same as before.
Happy to document it, but I think the information should then also go into the official Gitea docs and not the helm docs. Let's move this discussion to a new issue, as this issue is actually about something else.
@pat-s the `provider` and `adapter` are wrong in 75893ad9c6. They should be `redis-cluster` instead of `redis`, no?

Correct, but for some reason `redis` also seems to work as long as the connection string is using `redis+cluster`. I would need to look into it in more detail. `queue` also has no `redis-cluster` key but probably should have one. Thanks for mentioning!
19841604f7
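A sketch of what a cluster-aware `queue` block could look like (the host is a placeholder; the comment about scheme handling is an assumption based on the behaviour described above, not confirmed here):

```yaml
gitea:
  config:
    queue:
      TYPE: redis
      # the redis+cluster:// scheme in the connection string appears to be what
      # selects cluster mode, which would explain why TYPE: redis still works
      CONN_STR: redis+cluster://:gitea@gitea-redis-cluster-headless:6379/0
```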
I don't know why, but when I add those values in `values.yaml` the file `app.ini` is still not updated :/

Do you have them below `gitea.config`?

Yes, that didn't help. The `gitea-inline-config` secret is updated, but `app.ini` doesn't get updated. I had to manually do that for existing changed values. E.g. I switched the issue indexer to meilisearch in `gitea.config` in `values.yml`, but that change wasn't passed to `app.ini`.
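For reference, such an override under `gitea.config` would be placed like this (a sketch; the meilisearch URL and the indexer keys are assumptions based on Gitea's indexer settings, not values from this thread):

```yaml
gitea:
  config:
    indexer:
      ISSUE_INDEXER_TYPE: meilisearch
      # connection string of the meilisearch instance (placeholder URL)
      ISSUE_INDEXER_CONN_STR: http://meilisearch:7700
```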
Yes, this looks like this. I had to manually change the `app.ini` file.

Hmm, strange. I don't have issues changing values which then populate to `app.ini`, and I am using the current release version.
If you set values explicitly, it should definitely work and this is different from #356. Need to look into it...
Thanks for the reply. I've made a suggestion issue on the GitHub Gitea repo. I hope that is the right place: 😁
Scott
@mmalyska @viceice
Caused by an upstream bug in 1.20.0: https://github.com/go-gitea/gitea/issues/25924
You have the following options:
- Use the `1.20-nightly` image
- Remove `INSTALL_LOCK = true` manually from `app.ini` and set `GITEA__security__INSTALL_LOCK` as an env var in the pod via `additionalConfigFromEnvs`
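A minimal sketch of that second workaround in `values.yaml`, assuming the chart's `gitea.additionalConfigFromEnvs` list takes standard env var entries:

```yaml
gitea:
  additionalConfigFromEnvs:
    # replaces the INSTALL_LOCK = true line that was removed from app.ini,
    # so Gitea still knows the installation is locked
    - name: GITEA__security__INSTALL_LOCK
      value: "true"
```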
@mmalyska @viceice This should be fixed now in v9.0.4.
Closing since https://gitea.com/gitea/helm-chart/issues/468#issuecomment-744807 should solve this (which is also linked in the upgrading notes).