Upgrade 9.0.0 to 9.0.1 fails on redis step #470

Closed
opened 2023-07-19 09:12:53 +00:00 by bengtfredh · 18 comments
bengtfredh commented 2023-07-19 09:12:53 +00:00 (Migrated from gitea.com)

Upgrading from 9.0.0 to 9.0.1 fails with the message below. I had the same issue upgrading from 8.x to 9.0.0, but ended up reinstalling Gitea, which worked. Now when I try to upgrade from 9.0.0 to 9.0.1 I get this error:

```sh
Error: UPGRADE FAILED: execution error at (gitea/charts/redis-cluster/templates/NOTES.txt:115:6):
PASSWORDS ERROR: You must provide your current passwords when upgrading the release.
                 Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims.
                 Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases

    'password' must not be empty, please add '--set password=$REDIS_PASSWORD' to the command. To get the current value:

        export REDIS_PASSWORD=$(kubectl get secret --namespace "gitea" gitea-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d)
```
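For reference, following those instructions boils down to recovering the generated password and passing it back in on the upgrade, roughly like this (the `redis-cluster.password` key path is my assumption, since the password lives in the Bitnami sub-chart rather than at the top level):

```sh
# Recover the auto-generated password and hand it back to the redis-cluster sub-chart.
export REDIS_PASSWORD=$(kubectl get secret --namespace "gitea" gitea-redis-cluster \
  -o jsonpath="{.data.redis-password}" | base64 -d)
helm -n gitea upgrade gitea gitea/gitea --reuse-values \
  --set redis-cluster.password=$REDIS_PASSWORD
```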
bengtfredh commented 2023-07-19 09:21:06 +00:00 (Migrated from gitea.com)

I tried to follow the instructions with no luck.
Got it to run with this command:

```sh
helm -n gitea upgrade gitea gitea/gitea --reuse-values --set redis-cluster.existingSecret=gitea-redis-cluster --set redis-cluster.existingSecretPasswordKey=redis-password
```

@pat-s

pat-s commented 2023-07-19 09:21:57 +00:00 (Migrated from gitea.com)

Thanks for reporting.

The `redis-cluster` release didn't change between these versions, so I am a bit surprised by this complaint. But I guess it's because `redis-cluster` is being removed and then re-deployed for new Gitea chart versions, and the password is somehow not kept persistent.

Did the update also create new PVs for `redis-cluster`? We might need to force users to supply a fixed PW by default for `redis-cluster`. I can try to replicate this later when I find some time.

Can you try to recover the PW and add it persistently via the `redis-cluster` values?
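A minimal sketch of what "adding it persistently via the values" could look like, assuming the Bitnami sub-chart's plain `password` value is used (please verify the key against your chart version):

```yaml
# values.yaml (sketch): pin the redis-cluster password so upgrades stop
# complaining about missing credentials. Recover the current value first:
#   kubectl get secret -n gitea gitea-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d
redis-cluster:
  password: "<current-redis-password>"
```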

bengtfredh commented 2023-07-19 09:30:17 +00:00 (Migrated from gitea.com)

@pat-s It did not work to set the PW persistently via the `redis-cluster` values. Pointing to the secret worked, see my updated command above. ^^^

bengtfredh commented 2023-07-19 09:45:10 +00:00 (Migrated from gitea.com)

@pat-s Still not able to upgrade, but now with a different error:

```sh
Error: UPGRADE FAILED: cannot patch "gitea-shared-storage" with kind PersistentVolumeClaim: PersistentVolumeClaim "gitea-shared-storage" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes:      {"ReadWriteOnce"},
  	Selector:         nil,
  	Resources:        {Requests: {s"storage": {i: {...}, s: "10Gi", Format: "BinarySI"}}},
- 	VolumeName:       "pvc-8ac51f2d-86c1-4ad7-a973-6e08bdb8e81f",
+ 	VolumeName:       "",
```

I guess this is because of this:
https://gitea.com/gitea/helm-chart/src/branch/main/templates/gitea/pvc.yaml#:~:text=%7B%7B%2D%20end%20%7D%7D-,volumeName%3A%20%22%22,-resources%3A

pat-s commented 2023-07-19 09:57:22 +00:00 (Migrated from gitea.com)

Did you set `persistence.create=false`? If you had a PVC already, you need to point the chart at the existing one instead of creating a new one. A new one is created every time you remove the helm install completely and install it again instead of upgrading (which I assume you did?).

The `volumeName: ""` default shouldn't be the issue.

> It did not work to set PW persistently via the redis-cluster values. Pointing to the secret worked, look at my update. ^^^

Not even during a "real" fresh install? I.e. if you remove the PVCs and PV and set it right from the start? If you set it while you still have an old PV around with a possibly old value of the PW, then it might not work.

bengtfredh commented 2023-07-19 09:59:39 +00:00 (Migrated from gitea.com)

I got it to run with:

```sh
helm -n gitea upgrade gitea gitea/gitea --reuse-values --set redis-cluster.existingSecret=gitea-redis-cluster --set redis-cluster.existingSecretPasswordKey=redis-password --set persistence.create=false
```

The logic needs to change: on install you may want `persistence.create=true`, but then you have to switch to `persistence.create=false` after install and before upgrading. Values should be idempotent for minor versions and patches.

pat-s commented 2023-07-19 10:05:44 +00:00 (Migrated from gitea.com)

> The logic need to change. Because on install you may want persistence.create=true, but then you need to change to persistence.create=false after install and before upgrade. Values should be idempotent for minor versions and patches.

It shouldn't be a problem during an upgrade. The logic I've implemented here is used in other charts in the same way, and there a `helm upgrade` successfully reused an existing PV that was created with `persistence.create=true` beforehand. The PV only changes if the PVC changes and creates a new PV in turn - but the PVC should stay untouched during upgrades.

Of course it's always possible I made a mistake somewhere...

Did you do a completely new installation with v9.0.0? I.e. you didn't have any PV/PVC before?
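To check whether the upgrade actually replaced the claim, something along these lines should tell us (plain kubectl, nothing chart-specific):

```sh
# Show which PV the claim is bound to and when it was created; a fresh
# creation timestamp or a changed volume name would mean it was recreated.
kubectl -n gitea get pvc gitea-shared-storage \
  -o custom-columns=NAME:.metadata.name,VOLUME:.spec.volumeName,CREATED:.metadata.creationTimestamp
```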

bengtfredh commented 2023-07-19 10:45:42 +00:00 (Migrated from gitea.com)

@pat-s Yes, I did a fresh install of 9.0.0 with no PV/PVC. When I ran the gitea install, it created a volume claim as expected and the cluster created a PV and bound it to the PVC.

When helm tries to upgrade, it discovers that the PVC's VolumeName has changed from "" to "pvc-8ac51f2d-86c1-4ad7-a973-6e08bdb8e81f":

```sh
- 	VolumeName:       "pvc-8ac51f2d-86c1-4ad7-a973-6e08bdb8e81f",
+ 	VolumeName:       "",
```

Then helm tries to patch the PVC's VolumeName back to "".
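If you have the helm-diff plugin installed, the pending patch can be previewed before the upgrade fails (a sketch, assuming the plugin is available):

```sh
# Preview what the upgrade would change on the live objects, including the
# immutable volumeName field on the PVC (requires the helm-diff plugin).
helm plugin install https://github.com/databus23/helm-diff   # one-time
helm -n gitea diff upgrade gitea gitea/gitea --reuse-values
```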

pat-s commented 2023-07-19 12:35:04 +00:00 (Migrated from gitea.com)

Normally helm shouldn't touch the PVC after creation, even during a `helm upgrade` 🤔

I can't reproduce your error message, but let's see if #470 helps here. It may not look like a real change WRT the defaults, but I have some hope that exposing `persistence.volumeName` in the values might change the upgrade behavior. Let's see (at least I can't trigger the error you're seeing with a fresh install).

pat-s commented 2023-07-19 15:11:57 +00:00 (Migrated from gitea.com)

I think we're missing

```yaml
  annotations:
    helm.sh/resource-policy: keep
```

which should keep the PVC from being removed on upgrade and hence avoid the attempted change back to `volumeName: ""`.
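In context, that would sit in the PVC template's metadata roughly like this (a sketch, not the exact template):

```yaml
# templates/gitea/pvc.yaml (sketch): tell helm to leave the claim alone
# when the release is upgraded or uninstalled.
metadata:
  name: gitea-shared-storage
  annotations:
    helm.sh/resource-policy: keep
```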

pat-s commented 2023-07-19 19:59:50 +00:00 (Migrated from gitea.com)

Can you try with v9.0.2 and see if this fixes your problems?
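I.e. something along these lines, pinning the chart version explicitly (standard helm `--version` flag):

```sh
# Pull the latest chart index and upgrade to the 9.0.2 chart release.
helm repo update
helm -n gitea upgrade gitea gitea/gitea --version 9.0.2 --reuse-values
```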

bengtfredh commented 2023-07-19 22:15:46 +00:00 (Migrated from gitea.com)
```yaml
redis-cluster:
  usePassword: false
```

fixes the issue with redis.

I get the same error with the PVC: fresh install of 9.0.0, then upgrade to 9.0.2. Your change will only work with statically provisioned PVs where you define the name in the values. I think most clusters run with dynamically provisioned PVs and get an autogenerated name for the PV, or they have an existing volume claim and set persistence.create=false.

```sh
Error: UPGRADE FAILED: cannot patch "gitea-shared-storage" with kind PersistentVolumeClaim: PersistentVolumeClaim "gitea-shared-storage" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes:      {"ReadWriteOnce"},
  	Selector:         nil,
  	Resources:        {Requests: {s"storage": {i: {...}, s: "10Gi", Format: "BinarySI"}}},
- 	VolumeName:       "pvc-5350fe9d-39e1-46e8-bc89-27ef84dcd587",
+ 	VolumeName:       "",
  	StorageClassName: &"standard",
  	VolumeMode:       &"Filesystem"
```
pat-s commented 2023-07-20 06:14:15 +00:00 (Migrated from gitea.com)

> Your change will only work with statically provisioned PV and you define the name in values. I think most clusters runs with dynamically provisioned PV, and will get an autogenerated name for the PV. Or if you have existing volume claim, and set persistence.create=false

The goal is to also make this work for dynamically provisioned PVs. I can't replicate the issue when I'm on 9.0.2 and upgrading. Maybe it only occurs when upgrading from 9.0.0 to 9.0.2, as the new annotation is not yet in place?

What happens if you add the new annotation manually to the existing `gitea-shared-storage` PVC? I assume it is not yet in place?
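Adding it to the live PVC by hand would be something like this (plain kubectl, independent of the chart):

```sh
# Annotate the existing claim so helm treats it as "keep" during upgrades.
kubectl -n gitea annotate pvc gitea-shared-storage helm.sh/resource-policy=keep
```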

bengtfredh commented 2023-07-20 08:38:54 +00:00 (Migrated from gitea.com)

Technically it still fails with 9.0.3 for me. I see that helm actually continues: redis and the pods get restarted, but I get ERROR output from helm, and helm list shows the status as failed.

I tried to add the annotation, no difference.

I changed v9.0.0:

```diff
diff --git a/templates/gitea/pvc.yaml b/templates/gitea/pvc.yaml
index d84ecc3..f24895c 100644
--- a/templates/gitea/pvc.yaml
+++ b/templates/gitea/pvc.yaml
@@ -17,8 +17,7 @@ spec:
   {{- if .Values.persistence.storageClass }}
   storageClassName: {{ .Values.persistence.storageClass }}
   {{- end }}
-  volumeName: ""
   resources:
     requests:
       storage: {{ .Values.persistence.size }}
-{{- end }}
\ No newline at end of file
+{{- end }}
```

and installed it fresh in a kind cluster. Then I made the same change in v9.0.3 and did the upgrade. No error, and helm list shows the status as deployed.

pat-s commented 2023-07-20 10:20:42 +00:00 (Migrated from gitea.com)

> I changed v9.0.0

My request from the previous comment was to add

```yaml
  annotations:
    helm.sh/resource-policy: keep
```

to your existing PVC resource. In the diff you're editing the `pvc.yaml` of the gitea helm chart (?)
There is no need to change the template of the helm chart.

> Technically it still fails with 9.0.3 for me. I see that helm actually continue, redis and pods get restarted. But get ERROR output from helm. And helm list show status failed.

What do you mean by "technically it still fails but helm actually continues"?

githubcdr commented 2023-07-20 14:51:00 +00:00 (Migrated from gitea.com)

Hi,

- redis breaking
- pvcname breaking
- nightly containers replacing production releases

Not sure what to think about this, but it's very user unfriendly.

And now for some reason the session handler got assigned to redis, which is not configured, so Gitea won't even start anymore :(
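If it helps anyone hitting the same thing, a possible workaround sketch is pinning the session provider explicitly via the chart's app.ini passthrough so it does not point at redis (the `gitea.config` path is assumed from the chart's config convention, so verify it against your values):

```yaml
# values.yaml (sketch): force Gitea's session store away from redis.
# "db" keeps sessions in the main database; adjust to your setup.
gitea:
  config:
    session:
      PROVIDER: db
```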

pat-s commented 2023-07-20 15:19:13 +00:00 (Migrated from gitea.com)

> Not sure what to think about this, but very user unfreindly.

You can always fork and do your own thing. Other than that, please refrain from such comments in threads they have no relation to; they don't help.

bengtfredh commented 2023-07-20 17:32:38 +00:00 (Migrated from gitea.com)

The main issue with redis is fixed.

I will run with:

```yaml
persistence:
  create: false
```

Thank you.
