# Default values for gitea.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
## @section Global
#
## @param global.imageRegistry global image registry override
## @param global.imagePullSecrets global image pull secrets override; can be extended by `imagePullSecrets`
## @param global.storageClass global storage class override
## @param global.hostAliases global hostAliases which will be added to the pod's hosts files
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: ""
  hostAliases: []
  # - ip: 192.168.137.2
  #   hostnames:
  #     - example.com

## @param replicaCount number of replicas for the deployment
replicaCount: 1

## @section strategy
## @param strategy.type strategy type
## @param strategy.rollingUpdate.maxSurge maxSurge
## @param strategy.rollingUpdate.maxUnavailable maxUnavailable
strategy:
  type: "RollingUpdate"
  rollingUpdate:
    maxSurge: "100%"
    maxUnavailable: 0
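
# A minimal sketch (not a chart default): with ReadWriteOnce storage and a
# single replica, a "Recreate" strategy avoids two pods contending for the
# same PVC during a rollout.
# strategy:
#   type: "Recreate"
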
## @param clusterDomain cluster domain
clusterDomain: cluster.local

## @section Image
## @param image.registry image registry, e.g. gcr.io, docker.io
## @param image.repository Image to start for this pod
## @param image.tag Visit: [Image tag](https://hub.docker.com/r/gitea/gitea/tags?page=1&ordering=last_updated). Defaults to `appVersion` within Chart.yaml.
## @param image.digest Image digest. Allows pinning the given image tag. Useful for having control over mutable tags like `latest`
## @param image.pullPolicy Image pull policy
## @param image.rootless Whether to pull the rootless version of Gitea; only works with Gitea 1.14.x or higher
image:
  registry: ""
  repository: gitea/gitea
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
  digest: ""
  pullPolicy: Always
  rootless: true
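
# A minimal sketch (the digest below is a placeholder, not a real Gitea image
# digest): pinning a mutable tag to an immutable digest.
# image:
#   tag: "1.20"
#   digest: "sha256:<digest-of-the-tag>"
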
## @param imagePullSecrets Secret to use for pulling the image
imagePullSecrets: []

## @section Security
# Security context is only usable with rootless image due to image design
## @param podSecurityContext.fsGroup Set the shared file system group for all containers in the pod.
podSecurityContext:
  fsGroup: 1000

## @param containerSecurityContext Security context
containerSecurityContext: {}
# allowPrivilegeEscalation: false
# capabilities:
# drop:
# - ALL
# # Add the SYS_CHROOT capability for root and rootless images if you intend to
# # run pods on nodes that use the container runtime cri-o. Otherwise, you will
# # get an error message from the SSH server that it is not possible to read from
# # the repository.
# # https://gitea.com/gitea/helm-chart/issues/161
# add:
# - SYS_CHROOT
# privileged: false
# readOnlyRootFilesystem: true
# runAsGroup: 1000
# runAsNonRoot: true
# runAsUser: 1000
## @deprecated The securityContext variable has been split into two:
## - containerSecurityContext
## - podSecurityContext
## @param securityContext Run init and Gitea containers as a specific securityContext
securityContext: {}

## @param podDisruptionBudget Pod disruption budget
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 1
## @section Service
service:
  ## @param service.http.type Kubernetes service type for web traffic
  ## @param service.http.port Port number for web traffic
  ## @param service.http.clusterIP ClusterIP setting for http; the autosetup for the deployment defaults to None
  ## @param service.http.loadBalancerIP LoadBalancer IP setting
  ## @param service.http.nodePort NodePort for http service
  ## @param service.http.externalTrafficPolicy If `service.http.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation
  ## @param service.http.externalIPs External IPs for service
  ## @param service.http.ipFamilyPolicy HTTP service dual-stack policy
  ## @param service.http.ipFamilies HTTP service dual-stack family selection. For dual-stack parameters see the official kubernetes [dual-stack concept documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/).
  ## @param service.http.loadBalancerSourceRanges Source range filter for http loadbalancer
  ## @param service.http.annotations HTTP service annotations
  http:
    type: ClusterIP
    port: 3000
    clusterIP: None
    loadBalancerIP:
    nodePort:
    externalTrafficPolicy:
    externalIPs:
    ipFamilyPolicy:
    ipFamilies:
    loadBalancerSourceRanges: []
    annotations: {}
  ## @param service.ssh.type Kubernetes service type for ssh traffic
  ## @param service.ssh.port Port number for ssh traffic
  ## @param service.ssh.clusterIP ClusterIP setting for ssh; the autosetup for the deployment defaults to None
  ## @param service.ssh.loadBalancerIP LoadBalancer IP setting
  ## @param service.ssh.nodePort NodePort for ssh service
  ## @param service.ssh.externalTrafficPolicy If `service.ssh.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation
  ## @param service.ssh.externalIPs External IPs for service
  ## @param service.ssh.ipFamilyPolicy SSH service dual-stack policy
  ## @param service.ssh.ipFamilies SSH service dual-stack family selection. For dual-stack parameters see the official kubernetes [dual-stack concept documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/).
  ## @param service.ssh.hostPort HostPort for ssh service
  ## @param service.ssh.loadBalancerSourceRanges Source range filter for ssh loadbalancer
  ## @param service.ssh.annotations SSH service annotations
  ssh:
    type: ClusterIP
    port: 22
    clusterIP: None
    loadBalancerIP:
    nodePort:
    externalTrafficPolicy:
    externalIPs:
    ipFamilyPolicy:
    ipFamilies:
    hostPort:
    loadBalancerSourceRanges: []
    annotations: {}
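
# A minimal sketch (the port number is arbitrary): exposing SSH on a fixed
# NodePort. Note the default `clusterIP: None` (headless) must be unset for
# non-headless service types such as NodePort.
# service:
#   ssh:
#     type: NodePort
#     nodePort: 30022
#     clusterIP:
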
## @section Ingress
## @param ingress.enabled Enable ingress
## @param ingress.className Ingress class name
## @param ingress.annotations Ingress annotations
## @param ingress.hosts[0].host Default Ingress host
## @param ingress.hosts[0].paths[0].path Default Ingress path
## @param ingress.hosts[0].paths[0].pathType Ingress path type
## @param ingress.tls Ingress tls settings
## @extra ingress.apiVersion Specify APIVersion of ingress object. Mostly only needed for argocd.
ingress:
  enabled: false
  # className: nginx
  className:
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  hosts:
    - host: git.example.com
      paths:
        - path: /
          pathType: Prefix
  tls: []
  # - secretName: chart-example-tls
  #   hosts:
  #     - git.example.com
  # Mostly for argocd or any other CI that uses `helm template | kubectl apply` or similar.
  # If helm doesn't correctly detect your ingress API version you can set it here.
  # apiVersion: networking.k8s.io/v1
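
# A minimal sketch composing the commented snippets above into a TLS-enabled
# ingress:
# ingress:
#   enabled: true
#   className: nginx
#   hosts:
#     - host: git.example.com
#       paths:
#         - path: /
#           pathType: Prefix
#   tls:
#     - secretName: chart-example-tls
#       hosts:
#         - git.example.com
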
## @section deployment
#
## @param resources Kubernetes resources
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
## @param schedulerName Use an alternate scheduler, e.g. "stork"
schedulerName: ""

## @param nodeSelector NodeSelector for the deployment
nodeSelector: {}
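
# A minimal sketch using well-known node labels: schedule pods only on
# linux/amd64 nodes.
# nodeSelector:
#   kubernetes.io/os: linux
#   kubernetes.io/arch: amd64
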
## @param tolerations Tolerations for the deployment
tolerations: []
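
# A minimal sketch (taint key and value are assumptions about your cluster):
# tolerate a node pool tainted for Gitea.
# tolerations:
#   - key: "dedicated"
#     operator: "Equal"
#     value: "gitea"
#     effect: "NoSchedule"
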
## @param affinity Affinity for the deployment
affinity: {}
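
# A minimal sketch (assumes the chart's standard app.kubernetes.io/name
# label): with replicaCount > 1, prefer spreading Gitea pods across nodes.
# affinity:
#   podAntiAffinity:
#     preferredDuringSchedulingIgnoredDuringExecution:
#       - weight: 100
#         podAffinityTerm:
#           topologyKey: kubernetes.io/hostname
#           labelSelector:
#             matchLabels:
#               app.kubernetes.io/name: gitea
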
## @param topologySpreadConstraints TopologySpreadConstraints for the deployment
topologySpreadConstraints: []
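## E.g. to spread replicas across zones, a sketch using standard Kubernetes
## fields (the label selector assumes the chart's default
## `app.kubernetes.io/name` label; adjust it to your release):
# - maxSkew: 1
#   topologyKey: topology.kubernetes.io/zone
#   whenUnsatisfiable: ScheduleAnyway
#   labelSelector:
#     matchLabels:
#       app.kubernetes.io/name: gitea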
## @param dnsConfig dnsConfig for the deployment
dnsConfig: {}
## @param priorityClassName priorityClassName for the deployment
priorityClassName: ""
## @param deployment.env Additional environment variables to pass to containers
## @param deployment.terminationGracePeriodSeconds How long to wait before the pod is forcefully killed
## @param deployment.labels Labels for the deployment
## @param deployment.annotations Annotations for the Gitea deployment to be created
deployment:
  env: []
  # - name: VARIABLE
  #   value: my-value
  terminationGracePeriodSeconds: 60
  labels: {}
  annotations: {}
## @section ServiceAccount
## @param serviceAccount.create Enable the creation of a ServiceAccount
## @param serviceAccount.name Name of the created ServiceAccount, defaults to release name. Can also link to an externally provided ServiceAccount that should be used.
## @param serviceAccount.automountServiceAccountToken Enable/disable auto mounting of the service account token
## @param serviceAccount.imagePullSecrets Image pull secrets, available to the ServiceAccount
## @param serviceAccount.annotations Custom annotations for the ServiceAccount
## @param serviceAccount.labels Custom labels for the ServiceAccount
serviceAccount:
  create: false
  name: ""
  automountServiceAccountToken: false
  imagePullSecrets: []
  # - name: private-registry-access
  annotations: {}
  labels: {}
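  ## E.g. to attach a cloud IAM identity via annotations (IRSA on EKS);
  ## the role ARN below is a placeholder, not a chart default:
  # annotations:
  #   eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/gitea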
## @section Persistence
#
## @param persistence.enabled Enable persistent storage
## @param persistence.create Whether to create the persistentVolumeClaim for shared storage
## @param persistence.mount Whether the persistentVolumeClaim should be mounted (even if not created)
## @param persistence.claimName Use an existing claim to store repository information
## @param persistence.size Size for persistence to store repo information
## @param persistence.accessModes AccessMode for persistence
## @param persistence.labels Labels for the persistence volume claim to be created
## @param persistence.annotations.helm.sh/resource-policy Resource policy for the persistence volume claim
## @param persistence.storageClass Name of the storage class to use
## @param persistence.subPath Subdirectory of the volume to mount at
## @param persistence.volumeName Name of persistent volume in PVC
persistence:
  enabled: true
  create: true
  mount: true
  claimName: gitea-shared-storage
  size: 10Gi
  accessModes:
    - ReadWriteOnce
  labels: {}
  storageClass:
  subPath:
  volumeName: ""
  annotations:
    helm.sh/resource-policy: keep
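  ## E.g. to reuse a PVC provisioned outside the chart, skip creation and
  ## only mount it (the claim name below is illustrative):
  # create: false
  # mount: true
  # claimName: my-existing-rwx-claim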
## @param extraVolumes Additional volumes to mount to the Gitea deployment
extraVolumes: []
# - name: postgres-ssl-vol
#   secret:
#     secretName: gitea-postgres-ssl
## @param extraContainerVolumeMounts Mounts that are only mapped into the Gitea runtime/main container, e.g. to override custom templates.
extraContainerVolumeMounts: []
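## E.g. to map the `postgres-ssl-vol` volume from `extraVolumes` above into
## the main container only (mirroring the deprecated example further below):
# - name: postgres-ssl-vol
#   readOnly: true
#   mountPath: "/pg-ssl"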
## @param extraInitVolumeMounts Mounts that are only mapped into the init-containers. Can be used for additional preconfiguration.
extraInitVolumeMounts: []
## @deprecated The extraVolumeMounts variable has been split in two:
## - extraContainerVolumeMounts
## - extraInitVolumeMounts
## As an example, can be used to mount a client cert when connecting to an external Postgres server.
## @param extraVolumeMounts **DEPRECATED** Additional volume mounts for init containers and the Gitea main container
extraVolumeMounts: []
# - name: postgres-ssl-vol
#   readOnly: true
#   mountPath: "/pg-ssl"
## @section Init
## @param initPreScript Bash shell script copied verbatim to the start of the init-container.
initPreScript: ""
#
# initPreScript: |
#   mkdir -p /data/git/.postgresql
#   cp /pg-ssl/* /data/git/.postgresql/
#   chown -R git:git /data/git/.postgresql/
#   chmod 400 /data/git/.postgresql/postgresql.key
## @param initContainers.resources.limits Kubernetes resource limits for init containers
## @param initContainers.resources.requests.cpu Kubernetes cpu resource requests for init containers
## @param initContainers.resources.requests.memory Kubernetes memory resource requests for init containers
initContainers:
  resources:
    limits: {}
    requests:
      cpu: 100m
      memory: 128Mi
# Configure commit/action signing prerequisites
## @section Signing
#
## @param signing.enabled Enable commit/action signing
## @param signing.gpgHome GPG home directory
## @param signing.privateKey Inline private gpg key for signed Gitea actions
## @param signing.existingSecret Use an existing secret to store the value of `signing.privateKey`
signing:
  enabled: false
  gpgHome: /data/git/.gnupg
  privateKey: ""
  # privateKey: |-
  #   -----BEGIN PGP PRIVATE KEY BLOCK-----
  #   ...
  #   -----END PGP PRIVATE KEY BLOCK-----
  existingSecret: ""
## @section Gitea
#
gitea:
## @param gitea.admin.username Username for the Gitea admin user
## @param gitea.admin.existingSecret Use an existing secret to store admin user credentials
## @param gitea.admin.password Password for the Gitea admin user
## @param gitea.admin.email Email for the Gitea admin user
2020-08-23 17:56:55 +00:00
  admin:
    # existingSecret: gitea-admin-secret
    existingSecret:
    username: gitea_admin
    password: r8sA8CPHD9!bt6d
    email: "gitea@local.domain"
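    ## E.g. a pre-created secret for `existingSecret`; it is assumed to
    ## carry `username` and `password` keys (values below are illustrative):
    # apiVersion: v1
    # kind: Secret
    # metadata:
    #   name: gitea-admin-secret
    # type: Opaque
    # stringData:
    #   username: gitea_admin
    #   password: changeme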
## @param gitea.metrics.enabled Enable Gitea metrics
## @param gitea.metrics.serviceMonitor.enabled Enable Gitea metrics service monitor
  metrics:
    enabled: false
    serviceMonitor:
      enabled: false
      # additionalLabels:
      #   prometheus-release: prom1
## @param gitea.ldap LDAP configuration
  ldap: []
    # - name: "LDAP 1"
    #   existingSecret:
    #   securityProtocol:
    #   host:
    #   port:
    #   userSearchBase:
    #   userFilter:
    #   adminFilter:
    #   emailAttribute:
    #   bindDn:
    #   bindPassword:
    #   usernameAttribute:
    #   publicSSHKeyAttribute:
# Either specify inline `key` and `secret` or refer to them via `existingSecret`
## @param gitea.oauth OAuth configuration
  oauth: []
    # - name: 'OAuth 1'
    #   provider:
    #   key:
    #   secret:
    #   existingSecret:
    #   autoDiscoverUrl:
    #   useCustomUrls:
    #   customAuthUrl:
    #   customTokenUrl:
    #   customProfileUrl:
    #   customEmailUrl:
## @param gitea.config.server.SSH_PORT SSH port for rootful Gitea image
## @param gitea.config.server.SSH_LISTEN_PORT SSH port for rootless Gitea image
  config:
    # APP_NAME: "Gitea: Git with a cup of tea"
    # RUN_MODE: dev
    server:
      SSH_PORT: 22 # rootful image
      SSH_LISTEN_PORT: 2222 # rootless image
    #
    # security:
    #   PASSWORD_COMPLEXITY: spec
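    ## With `replicaCount` > 1 the chart points `[session]`, `[cache]` and
    ## `[queue]` at the bundled redis-cluster automatically; a sketch of an
    ## equivalent manual override (the connection string is a placeholder
    ## for your own instance, not a chart default):
    # session:
    #   PROVIDER: redis-cluster
    #   PROVIDER_CONFIG: redis+cluster://:gitea@my-redis-cluster-headless:6379/0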
## @param gitea.additionalConfigSources Additional configuration from secret or configmap
  additionalConfigSources: []
  # - secret:
  #     secretName: gitea-app-ini-oauth
  # - configMap:
  #     name: gitea-app-ini-plaintext
## @param gitea.additionalConfigFromEnvs Additional configuration sources from environment variables
  additionalConfigFromEnvs: []
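  ## E.g. inject a database password from a Kubernetes secret using the
  ## `GITEA__SECTION__KEY` naming convention (secret name and key below are
  ## illustrative, not chart defaults):
  # - name: GITEA__DATABASE__PASSWD
  #   valueFrom:
  #     secretKeyRef:
  #       name: postgres-secret
  #       key: password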
## @param gitea.podAnnotations Annotations for the Gitea pod
  podAnnotations: {}
## @param gitea.ssh.logLevel Configure OpenSSH's log level. Only available for root-based Gitea image.
  ssh:
    logLevel: "INFO"
## @section LivenessProbe
#
## @param gitea.livenessProbe.enabled Enable liveness probe
## @param gitea.livenessProbe.tcpSocket.port Port to probe for liveness
## @param gitea.livenessProbe.initialDelaySeconds Initial delay before liveness probe is initiated
## @param gitea.livenessProbe.timeoutSeconds Timeout for liveness probe
## @param gitea.livenessProbe.periodSeconds Period for liveness probe
## @param gitea.livenessProbe.successThreshold Success threshold for liveness probe
## @param gitea.livenessProbe.failureThreshold Failure threshold for liveness probe
  # Modify the liveness probe for your needs or disable it by setting `enabled` to `false`.
  livenessProbe:
    enabled: true
    tcpSocket:
      port: http
    initialDelaySeconds: 200
    timeoutSeconds: 1
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 10
## @section ReadinessProbe
#
## @param gitea.readinessProbe.enabled Enable readiness probe
## @param gitea.readinessProbe.tcpSocket.port Port to probe for readiness
## @param gitea.readinessProbe.initialDelaySeconds Initial delay before readiness probe is initiated
## @param gitea.readinessProbe.timeoutSeconds Timeout for readiness probe
## @param gitea.readinessProbe.periodSeconds Period for readiness probe
## @param gitea.readinessProbe.successThreshold Success threshold for readiness probe
## @param gitea.readinessProbe.failureThreshold Failure threshold for readiness probe
  # Modify the readiness probe for your needs or disable it by setting `enabled` to `false`.
  readinessProbe:
    enabled: true
    tcpSocket:
      port: http
    initialDelaySeconds: 5
    timeoutSeconds: 1
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 3
  # Enable the startup probe by setting `enabled` to `true` and modify it for your needs.
## @section StartupProbe
#
## @param gitea.startupProbe.enabled Enable startup probe
## @param gitea.startupProbe.tcpSocket.port Port to probe for startup
## @param gitea.startupProbe.initialDelaySeconds Initial delay before startup probe is initiated
## @param gitea.startupProbe.timeoutSeconds Timeout for startup probe
## @param gitea.startupProbe.periodSeconds Period for startup probe
## @param gitea.startupProbe.successThreshold Success threshold for startup probe
## @param gitea.startupProbe.failureThreshold Failure threshold for startup probe
  startupProbe:
    enabled: false
    tcpSocket:
      port: http
    initialDelaySeconds: 60
    timeoutSeconds: 1
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 10
## @section redis-cluster
## @param redis-cluster.enabled Enable redis cluster
## @param redis-cluster.usePassword Whether to use password authentication
redis-cluster:
  enabled: true
  usePassword: false
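## E.g. a hedged sketch for pointing Gitea at an external Redis instead of the
## bundled redis-cluster (host, password, and db number are placeholders):
## redis-cluster:
##   enabled: false
## gitea:
##   config:
##     session:
##       PROVIDER: redis
##       PROVIDER_CONFIG: redis://:password@redis.example.com:6379/0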
## @section postgresql-ha
#
## @param postgresql-ha.enabled Enable postgresql-ha
## @param postgresql-ha.postgresql.password Password for the `gitea` user (overrides `auth.password`)
## @param postgresql-ha.global.postgresql.database Name for a custom database to create (overrides `auth.database`)
## @param postgresql-ha.global.postgresql.username Name for a custom user to create (overrides `auth.username`)
## @param postgresql-ha.global.postgresql.password Password for the custom user to create (overrides `auth.password`)
## @param postgresql-ha.postgresql.repmgrPassword Password for the repmgr user
## @param postgresql-ha.postgresql.postgresPassword Password for the `postgres` admin user
## @param postgresql-ha.pgpool.adminPassword Admin password for Pgpool
## @param postgresql-ha.service.ports.postgresql postgresql service port (overrides `service.ports.postgresql`)
## @param postgresql-ha.primary.persistence.size PVC Storage Request for postgresql-ha volume
postgresql-ha:
  global:
    postgresql:
      database: gitea
      password: gitea
      username: gitea
  enabled: true
  postgresql:
    repmgrPassword: changeme2
    postgresPassword: changeme1
    password: changeme4
  pgpool:
    adminPassword: changeme3
  service:
    ports:
      postgresql: 5432
  primary:
    persistence:
      size: 10Gi
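## E.g. a hedged sketch (not part of the chart defaults) for using an external
## PostgreSQL instead of the bundled charts; host and credentials are placeholders:
## postgresql-ha:
##   enabled: false
## gitea:
##   config:
##     database:
##       DB_TYPE: postgres
##       HOST: db.example.com:5432
##       NAME: gitea
##       USER: gitea
##       PASSWD: gitea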
## @section PostgreSQL
#
## @param postgresql.enabled Enable PostgreSQL
## @param postgresql.global.postgresql.auth.password Password for the `gitea` user (overrides `auth.password`)
## @param postgresql.global.postgresql.auth.database Name for a custom database to create (overrides `auth.database`)
## @param postgresql.global.postgresql.auth.username Name for a custom user to create (overrides `auth.username`)
## @param postgresql.global.postgresql.service.ports.postgresql PostgreSQL service port (overrides `service.ports.postgresql`)
## @param postgresql.primary.persistence.size PVC Storage Request for PostgreSQL volume
postgresql:
  enabled: false
  global:
    postgresql:
      auth:
        password: gitea
        database: gitea
        username: gitea
      service:
        ports:
          postgresql: 5432
  primary:
    persistence:
      size: 10Gi
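## E.g. a hedged sketch for switching from `postgresql-ha` to this
## single-instance chart (the `gitea-charts` repo alias is an assumption):
## helm upgrade gitea gitea-charts/gitea \
##   --set postgresql.enabled=true --set postgresql-ha.enabled=false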
# By default, settings that have been removed or moved but still remain in a user-defined values.yaml will cause Helm to fail the install/upgrade.
# Set checkDeprecation to false to skip this basic validation check.
## @section Advanced
## @param checkDeprecation Set it to false to skip this basic validation check.
## @param test.enabled Set it to false to disable test-connection Pod.
## @param test.image.name Image name for the wget container used in the test-connection Pod.
## @param test.image.tag Image tag for the wget container used in the test-connection Pod.
checkDeprecation: true
test:
  enabled: true
  image:
    name: busybox
    tag: latest
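## E.g. pinning the test image instead of `latest` (the tag below is an
## illustrative example, not a chart default):
## test:
##   image:
##     tag: "1.36"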
## @param extraDeploy Array of extra objects to deploy with the release
##
extraDeploy: []
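## E.g. a hedged sketch deploying an additional ConfigMap with the release
## (name and data are placeholders):
## extraDeploy:
##   - apiVersion: v1
##     kind: ConfigMap
##     metadata:
##       name: gitea-extra-config
##     data:
##       key: value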