Upgrade 9.0.0 to 9.0.1 fails on redis step #470
Upgrading 9.0.0 to 9.0.1 fails with this message. I had the same issue upgrading from 8.x to 9.0.0, but ended up reinstalling Gitea, and that worked. Now when I try to upgrade from 9.0.0 to 9.0.1 I get this error:
I tried to follow the instructions, with no luck.
Got it to run with this command:
Thanks for reporting.
The `redis-cluster` release didn't change between these versions, so I am a bit surprised by this complaint. But I guess it's because `redis-cluster` is being removed and then re-deployed for new Gitea chart versions and the password is somehow not kept persistent. Did the update also create new PVs for `redis-cluster`? We might need to force users to supply a fixed PW by default for `redis-cluster`. I can try to replicate this later when I find some time.

Can you try to recover the PW and add it persistently via the `redis-cluster` values?
values?@pat-s It did not work to set PW persistently via the redis-cluster values. Pointing to the secret worked, look at my update. ^^^
@pat-s Still not able to upgrade, but now with a different error:
I guess this is because of this:
https://gitea.com/gitea/helm-chart/src/branch/main/templates/gitea/pvc.yaml#:~:text=%7B%7B%2D%20end%20%7D%7D-,volumeName%3A%20%22%22,-resources%3A
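That fragment points at the hard-coded empty volume name in the chart's PVC template. Roughly, the relevant excerpt looks like this (reconstructed from the link's text fragment; the surrounding lines are assumptions, not the chart's literal source):

```yaml
# templates/gitea/pvc.yaml (excerpt, reconstructed)
spec:
  # ...
  volumeName: ""
  resources:
    # ...
```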
Did you set `persistence.create=false`? If you had a PVC already, you need to point it to the existing one instead of creating a new one. A new one is created every time you remove the helm install completely and install it again instead of upgrading (which I assume you did?). The `volumeName: ""` default shouldn't be the issue.

Not even during a "real" fresh install? I.e. if you remove the PVCs and PV and set it right from the start? If you set it while you still have an old PV around with a possibly old value of the PW, then it might not work.
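Pointing the chart at the existing claim would look roughly like this. A sketch only: the exact `persistence` keys may differ between chart versions, and `gitea-shared-storage` (the claim name mentioned later in this thread) is the assumed default:

```yaml
# values.yaml (sketch) — reuse the PVC left over from the previous
# install instead of creating a new one
persistence:
  enabled: true
  create: false
  claimName: gitea-shared-storage
```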
I got it to run with:
The logic needs to change. On install you may want `persistence.create=true`, but then you have to change to `persistence.create=false` after install and before upgrading. Values should be idempotent across minor versions and patches.
It shouldn't be a problem during an upgrade. The logic I've implemented here is used in other charts in the same way, and for those a `helm upgrade` successfully reused an existing PV which was created with `persistence.create=true` beforehand. The PV is only changed if the PVC is changed and creates a new PV in turn - but the PVC should stay untouched during upgrades.

Of course it's always possible I made a mistake somewhere...
Did you do a completely new installation with v9.0.0? I.e. didn't you have any PV/PVC before?
@pat-s Yes, I did a fresh new install of 9.0.0 with no PV/PVC. When I ran the Gitea install, it created a volume claim as expected, and the cluster created a PV and bound it to the PVC:
When helm tries to upgrade, it discovers that the PVC's volumeName has changed from "" to "pvc-8ac51f2d-86c1-4ad7-a973-6e08bdb8e81f".
Then helm tries to update the PVC's volumeName back to "".
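The PV name the dynamic provisioner bound to the claim can be read directly from the PVC, e.g. (assuming the chart's default claim name):

```bash
# Show which PV the claim is currently bound to
kubectl get pvc gitea-shared-storage -o jsonpath='{.spec.volumeName}'
```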
Normally helm shouldn't touch the PVC after creation, even during a `helm upgrade` 🤔

I can't reproduce your error message, but let's try whether #470 helps here. It may not look like a real change WRT the defaults, but I have some hope that exposing `persistence.volumeName` in the values might change the upgrade behavior. Let's see (at least I can't trigger the error you're seeing with a fresh install).

I think we're missing

which should keep the PVC from being removed on upgrade, and hence the change request of moving back to `volumeName: ""`.

Can you try with v9.0.2 and see if this fixes your problems?
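With `persistence.volumeName` exposed, pinning the claim to the already-provisioned PV might look like this (a sketch using the PV name from the earlier comment; whether this helps with dynamic provisioning is exactly what's debated below):

```yaml
# values.yaml (sketch) — pin the claim to an existing PV by name
persistence:
  create: true
  volumeName: pvc-8ac51f2d-86c1-4ad7-a973-6e08bdb8e81f
```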
I get the same error with the PVC. Fresh install of 9.0.0, then upgrade to 9.0.2. Your change will only work with statically provisioned PVs where you define the name in the values, or if you have an existing volume claim and set `persistence.create=false`. I think most clusters run with dynamically provisioned PVs and will get an autogenerated name for the PV.
The goal is to also make this work for dynamically provisioned PVs. I can't replicate the issue when already on 9.0.2 and upgrading. Maybe it only occurs when upgrading from 9.0.0 to 9.0.2, as the new annotation is not yet in place?
What happens if you add the new annotation manually to the existing `gitea-shared-storage` PVC? I assume it is not yet in place?

Technically it still fails with 9.0.3 for me. I see that helm actually continues; redis and the pods get restarted. But I get ERROR output from helm, and `helm list` shows status failed.
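Adding the annotation manually, assuming it is Helm's standard resource-policy keep annotation (an assumption; check the chart's `pvc.yaml` for the exact key the release added), would be:

```bash
# Sketch: mark the PVC so helm keeps it across upgrades/uninstalls
kubectl annotate pvc gitea-shared-storage helm.sh/resource-policy=keep
```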
I tried to add the annotation; no difference.
I changed v9.0.0

and installed it new and fresh in a kind cluster. Then I made the same change in v9.0.3 and did the upgrade. No error, and `helm list` shows status deployed.
My request from the previous comment was to add

to your existing PVC resource. In the diff you're editing the `pvc.yaml` of the gitea helm chart (?). There is no need to change the template of the helm chart.
What do you mean by "technically it still fails but helm actually continues"?
Hi,
Not sure what to think about this, but it's very user-unfriendly.

And now, for some reason, the session handler got assigned to redis, which is not configured, so Gitea won't even start any more :(
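If the session provider has ended up pointing at redis unintentionally, one way out is to pin it explicitly via the chart's `gitea.config` passthrough (which maps to `app.ini` sections). A sketch; whether `db` is the right provider for a given setup is an assumption:

```yaml
# values.yaml (sketch) — force the session provider away from redis
gitea:
  config:
    session:
      PROVIDER: db  # assumption: store sessions in the database instead
```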
You can always fork and do your own thing. Other than that, please refrain from such comments; they do not help in threads they have no relation to.
The main issue with redis is fixed.
I will run with:
Thank you.