Bad Request: invalid CSRF token on HA Deployment #661
I am not sure if this is a problem with the Helm chart, but no one else seems to be reporting it, so I assume it is a config issue.
This only seems to occur when doing something under the /admin path. I can create repos and work normally in that regard, but if I do anything under /admin, like creating an organization, it throws the above error.

values.yaml:
chart.yaml:
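Roughly, the values look like this. This is a trimmed, paraphrased sketch rather than the literal file; the key names follow the chart's documented HA options and may differ by chart version:

```yaml
# Paraphrased sketch of an HA-style values.yaml for the Gitea Helm chart
# (illustrative; key names may differ by chart version).
replicaCount: 3

image:
  repository: gitea/gitea

persistence:
  enabled: true
  accessModes:
    - ReadWriteMany      # shared storage is needed for multiple replicas

redis-cluster:
  enabled: true          # shared cache/queue backend for the replicas

postgresql:
  enabled: false
postgresql-ha:
  enabled: true          # HA database instead of the single-node chart

gitea:
  config:
    # APP_NAME: "Gitea"  # left commented out while testing
    session:
      PROVIDER: db       # sessions stored in the database
```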
Looks like you are using the Gitea Helm Chart as a subchart in a custom wrapper chart, right?
If that's the case, can you check whether this still occurs when using the Gitea Helm Chart natively?
And, just to be sure: you are using a Gitea image, not a Forgejo image, right? The commented gitea.config.APP_NAME got me suspicious. We cannot guarantee a working Helm chart with Forgejo images.

The subchart shouldn't impact anything; the values just get an extra level of indentation, but the chart shouldn't be any wiser. Switching away from a subchart is a little difficult since this is running as an Argo CD app. I can try running it outside that, but I'm not sure what exactly would change that would cause a CSRF token error, since everything else seems to work.
The Forgejo comments were me testing to see whether the issue was limited to one of them or affected both. I have both Gitea and Forgejo running, but independently, just to test them.
But yes, all Gitea images, no Forgejo images; otherwise it'd be in the values.yaml above.
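To illustrate what I mean by the values just getting an indent: the wrapper is basically a Chart.yaml dependency entry plus the same values nested under the subchart name. A minimal sketch (chart version and repo URL are illustrative):

```yaml
# Wrapper Chart.yaml: pulls in the Gitea Helm chart as a dependency
# (version here is illustrative).
apiVersion: v2
name: gitea-wrapper
version: 0.1.0
dependencies:
  - name: gitea
    version: "10.x.x"
    repository: https://dl.gitea.com/charts/
```

```yaml
# Wrapper values.yaml: the same values as before, nested one level
# deeper under the dependency name so Helm passes them to the subchart.
gitea:
  replicaCount: 3
  gitea:
    config:
      # APP_NAME: "Gitea"
```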
I'll try to reproduce the error as well. Thanks for your reply.
So, I was playing around more: I can see the session token getting written to the DB, but it seems like only one pod will actually honor it?
I added sticky sessions to get around this (see the sketch below), and that works, but I feel like there is something wrong with the CSRF token validation if the request has to hit the same pod that generated the token. Maybe I am missing something, though. Should this issue move over to the Gitea repo itself? I am just curious how anyone else has run this in HA without hitting this problem.
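Roughly, the sticky-session setup I mean is a set of cookie-affinity annotations on the chart's ingress, assuming the ingress-nginx controller; the values below are just illustrative:

```yaml
ingress:
  enabled: true
  annotations:
    # ingress-nginx cookie affinity: pin each client to the pod that
    # served its first request (names per ingress-nginx; values illustrative).
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "gitea-affinity"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"
```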
The HA implementation here is "best effort", i.e. Gitea itself doesn't handle HA internally; there is an upstream issue about it.
What you describe is one of the "issues" of a multi-pod setup if the pods don't communicate with each other.
Sticky sessions help here, yes. I have been running the HA setup for some time and have used sticky sessions from the get-go, without ever thinking about this in particular.
We don't know how many users really run in HA; I'd guess most don't. Any contribution or discussion is welcome!