Bad Request: invalid CSRF token on HA Deployment #661

Open
opened 2024-05-28 21:16:45 +00:00 by EStork09 · 5 comments
EStork09 commented 2024-05-28 21:16:45 +00:00 (Migrated from gitea.com)

I am not sure if this is a problem with the Helm chart, but no one else seems to be reporting it, so I assume it is a config issue.

This only seems to occur when doing something under the /admin path. I can create repos and work in that regard, but if I do anything under /admin, like creating an organization, it throws the above error.

values.yaml:

```
gitea:
  replicaCount: 3

  ingress:
    enabled: true
    annotations: 
      cert-manager.io/cluster-issuer: le-cf-issuer
    ## See https://kubernetes.io/docs/concepts/services-networking/ingress/#ingressclass-scope
    ingressClassName: nginx
    hosts:
      - host: gitea.example.com
        paths:
          - path: /
            pathType: Prefix
    tls: 
      - secretName: gitea-tls
        hosts:
          - gitea.example.com

  deployment:
    env:
      []
      # - name: VARIABLE
      #   value: my-value

  persistence:
    enabled: false
    create: false
    mount: false
    accessModes: 
      - ReadWriteMany
  
  gitea:
    ## @param gitea.admin.username Username for the gitea admin user
    ## @param gitea.admin.existingSecret Use an existing secret to store admin user credentials
    ## @param gitea.admin.password Password for the gitea admin user
    ## @param gitea.admin.email Email for the gitea admin user
    admin:
      existingSecret: gitea-admin-secret
      username: ''
      password: ''
      email: example@live.com

    ## @param gitea.metrics.enabled Enable gitea metrics
    ## @param gitea.metrics.serviceMonitor.enabled Enable gitea metrics service monitor
    metrics:
      enabled: true
      serviceMonitor:
        enabled: true

    oauth:
      - name: 'Keycloak'
        provider: 'openidConnect'
        existingSecret: gitea-oauth-secret
        autoDiscoverUrl: 'https://keycloak.example.com/realms/production/.well-known/openid-configuration'

    config:
      ## @param gitea.config.APP_NAME Application name, used in the page title
      # APP_NAME: 'gitea: Beyond coding. We forge.'

      ## @param gitea.config.RUN_MODE Application run mode, affects performance and debugging: `dev` or `prod`
      RUN_MODE: prod

      ## @param gitea.config.server [object] General server settings
      server:
        PROTOCOL: http
        DOMAIN: gitea.example.com
        ROOT_URL: https://gitea.example.com/

      # service:
      #   ENABLE_REVERSE_PROXY_AUTHENTICATION: true
      #   ENABLE_REVERSE_PROXY_AUTHENTICATION_API: true
      #   ENABLE_REVERSE_PROXY_AUTO_REGISTRATION: true
      #   ENABLE_REVERSE_PROXY_EMAIL: true
      #   ENABLE_REVERSE_PROXY_FULL_NAME: true
    
      ## @param gitea.config.database Database configuration (only necessary with an [externally managed DB](https://codeberg.org/gitea-contrib/gitea-helm#external-database)).
      database: 
        DB_TYPE: postgres
      
      queue:
        TYPE: redis
        CONN_STR: redis+cluster://gitea-redis-cluster-leader:6379/0

      oauth2_client:
        OPENID_CONNECT_SCOPES: profile,email,groups
        ENABLE_AUTO_REGISTRATION: true
        # USERNAME: email

      ## @param gitea.config.cache Cache configuration
      cache:
        ADAPTER: redis-cluster
        HOST: redis+cluster://gitea-redis-cluster-leader:6379/0?pool_size=100&idle_timeout=180s
      
      cron:
        GIT_GC_REPOS:
          ENABLED: false

      session:
        # PROVIDER: redis-cluster
        # PROVIDER_CONFIG: redis+cluster://gitea-redis-cluster-leader:6379/0
        PROVIDER: db
        COOKIE_SECURE: true
        # DOMAIN: gitea.example.com
        # SAME_SITE: none

      security:
        CSRF_COOKIE_HTTP_ONLY: false
    
      ## @param gitea.config.storage General storage settings
      storage: 
        STORAGE_TYPE: minio
        MINIO_ENDPOINT: minio.minio.svc.cluster.local
        MINIO_ACCESS_KEY_ID: gitea
        MINIO_BUCKET: gitea
        MINIO_LOCATION: minio
        MINIO_USE_SSL: true
        MINIO_INSECURE_SKIP_VERIFY: true
    
    additionalConfigFromEnvs:
      - name: GITEA__DATABASE__NAME
        valueFrom:
          secretKeyRef:
            name: gitea-pg-app
            key: dbname
      - name: GITEA__DATABASE__HOST
        valueFrom:
          secretKeyRef:
            name: gitea-pg-app
            key: host
      - name: GITEA__DATABASE__USER
        valueFrom:
          secretKeyRef:
            name: gitea-pg-app
            key: user
      - name: GITEA__DATABASE__PASSWD
        valueFrom:
          secretKeyRef:
            name: gitea-pg-app
            key: password
      - name: GITEA__STORAGE__MINIO_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: gitea-minio-creds
            key: MINIO_SECRET_ACCESS_KEY
  redis-cluster:
    enabled: false

  postgresql-ha:
    enabled: false

redis-cluster:
  redisCluster:
    name: gitea-redis-cluster
  serviceMonitor:
    enabled: true
    interval: 30s
    scrapeTimeout: 10s
    namespace: gitea
  redisExporter:
    enabled: true
  storageSpec:
    volumeClaimTemplate:
      spec:
        storageClassName: ceph-block
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
```

Chart.yaml:

```
apiVersion: v2
name: gitea
version: 1.0.0
dependencies:
  - name: gitea
    version: 10.1.4
    repository: https://dl.gitea.com/charts/
  - name: redis-cluster
    version: 0.16.0
    repository: https://ot-container-kit.github.io/helm-charts
```
justusbunsi commented 2024-05-28 21:59:00 +00:00 (Migrated from gitea.com)

Looks like you are using the Gitea Helm Chart as a subchart in a custom wrapper chart, right?
If that's the case, can you check whether this still occurs when using the Gitea Helm Chart natively?
And, just to be sure: you are using a Gitea image, not a Forgejo image, right? The commented `gitea.config.APP_NAME` got me suspicious. We cannot guarantee a working Helm chart with Forgejo images.
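
To make the native test concrete: the wrapper indirection just goes away, so everything under your top-level `gitea:` key moves to the root of the values file, and the chart gets installed directly, e.g. `helm repo add gitea-charts https://dl.gitea.com/charts/` followed by `helm install gitea gitea-charts/gitea --version 10.1.4 -f values-native.yaml`. A minimal sketch of the un-nested layout, trimmed to a few of your keys (`values-native.yaml` is just a placeholder name):

```
# values-native.yaml: the same settings without the wrapper chart nesting
replicaCount: 3
ingress:
  enabled: true
  ingressClassName: nginx
  hosts:
    - host: gitea.example.com
      paths:
        - path: /
          pathType: Prefix
gitea:
  admin:
    existingSecret: gitea-admin-secret
  config:
    session:
      PROVIDER: db
      COOKIE_SECURE: true
redis-cluster:
  enabled: false
postgresql-ha:
  enabled: false
```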

EStork09 commented 2024-05-29 00:14:13 +00:00 (Migrated from gitea.com)

The subchart shouldn't impact anything; the values just get an extra level of indentation, and the chart shouldn't be any wiser. Switching away from a subchart is a little difficult since this is running from an Argo CD app. I can try running it outside of that, but I'm not sure what exactly would change that could cause a CSRF token error, since everything else seems to work.

The Forgejo comments were from me testing whether the issue affected one or both. I have both Gitea and Forgejo running, but independently, just to test them.

But yes, all Gitea images, no Forgejo images. Otherwise it'd be in the values.yaml above.

justusbunsi commented 2024-05-29 04:23:41 +00:00 (Migrated from gitea.com)

I'll try to reproduce the error as well. Thanks for your reply.

EStork09 commented 2024-06-01 13:52:35 +00:00 (Migrated from gitea.com)

So, I was playing around some more. I can see the session token getting put into the DB, but it seems like only one pod actually cares about it? I added:

```
  ingress:
    enabled: true
    annotations: 
      cert-manager.io/cluster-issuer: le-cf-issuer
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
      nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
      nginx.ingress.kubernetes.io/affinity: "cookie"
      nginx.ingress.kubernetes.io/session-cookie-name: "sticky-cookie"
      nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
      nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
      nginx.ingress.kubernetes.io/affinity-mode: persistent
      nginx.ingress.kubernetes.io/session-cookie-hash: sha1
```

to enable sticky sessions. That works as a way around this, but I feel like something is wrong with the CSRF token validation if the token has to be validated by the same pod that generated it. Maybe I am missing something, though. Should this issue move over to the Gitea repo itself? I am just curious how anyone else has run this as HA without hitting this problem.
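
For completeness, the shared-session variant I commented out in the values above would point every replica at the same Redis cluster instead of relying on stickiness (same service name as my cache config; I haven't verified that this alone avoids the CSRF error):

```
      session:
        PROVIDER: redis-cluster
        PROVIDER_CONFIG: redis+cluster://gitea-redis-cluster-leader:6379/0
        COOKIE_SECURE: true
```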

pat-s commented 2024-06-03 08:22:50 +00:00 (Migrated from gitea.com)

The HA implementation here is "best effort", i.e. Gitea itself doesn't handle HA internally; see [this issue](https://github.com/go-gitea/gitea/issues/13791).
What you describe is one of the "issues" of a multi-pod setup where the pods don't communicate with each other.

Sticky sessions help here, yes. I have been running the HA setup for some time and have used sticky sessions from the beginning, without thinking about this in particular.

We don't know how many users really run in HA; I'd guess most don't. Any contribution/discussion is welcome!
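
One thing worth ruling out (an assumption on my side, not a confirmed diagnosis for your setup: as far as I can tell, Gitea derives its CSRF tokens from the instance-wide `SECRET_KEY`, so all replicas must agree on that value): pin the key explicitly through the same `additionalConfigFromEnvs` mechanism you already use. The secret name below is hypothetical:

```
gitea:
  additionalConfigFromEnvs:
    # gitea-security-secret is a hypothetical, manually created secret
    # holding one shared key for all replicas
    - name: GITEA__SECURITY__SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: gitea-security-secret
          key: secretKey
```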
