{{/* vim: set filetype=mustache: */}}
[Breaking] Add HA-support; switch to `Deployment` (#437)
# Changes
A big shoutout to @luhahn for all his work in #205 which served as the base for this PR.
## Documentation
- [x] After thinking about it for some time, I still prefer the distinct option (as started in #350), i.e. having a standalone "HA" doc under `docs/ha-setup.md` rather than further lengthening the already quite long README.
Most of the information below should go into it, with more details and explanations of the individual components.
## Chart deps
~~- Adds `meilisearch` as a chart dependency for a HA-ready issue indexer. Only works with >= Gitea 1.20~~
~~- Adds `redis` as a chart dependency for a HA-ready session and queue store.~~
- Adds `redis-cluster` as a chart dependency for an HA-ready session and queue store (alternative to `redis`). Only works with Gitea >= 1.19.2.
- Removes `memcached` in favor of `redis-cluster`
- Adds `postgresql-ha` as the default DB dependency, replacing `postgres`
## Adds smart HA chart logic
The goal is to set smart config values that result in an HA-ready Gitea deployment if `replicaCount` > 1.
- If `replicaCount` > 1,
  - `gitea.config.session.PROVIDER` is automatically set to `redis-cluster`
  - `gitea.config.indexer.REPO_INDEXER_ENABLED` is automatically set to `false` unless the value is `elasticsearch` or `meilisearch`
  - `redis-cluster` is used for `[queue]`, `[cache]` and `[session]`
Configuration of external instances of `meilisearch` and `minio` is documented in a new markdown doc (a values sketch of the intended effect follows below).
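
A minimal, illustrative sketch of the intended effect — not the literal chart logic:
```yml
replicaCount: 2                  # > 1 switches on the HA defaults described above

redis-cluster:
  enabled: true                  # backs [session], [cache] and [queue]

gitea:
  config:
    session:
      PROVIDER: redis-cluster    # set automatically when replicaCount > 1
    indexer:
      REPO_INDEXER_ENABLED: false  # forced off unless an HA-capable indexer is configured
```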
## Deployment vs. StatefulSet
Given all the discussions about this lately (#428), I think we could use both.
In the end, we do not require the sequential pod scale-up/scale-down that a StatefulSet provides.
On the other hand, we do not have truly stateless pods either, as we attach an RWX volume to the Deployment.
Still, because there is no leader-election requirement, spawning the pods via a Deployment makes rolling updates easier and also signals to users that there is no "leader election" logic and that each pod can be "destroyed" at any time without causing interruption.
Hence I think we should be able to switch from a StatefulSet to a Deployment, even in the single-replica case.
This change also surfaced a templating/linting issue: the definition of `.Values.gitea.config.server.SSH_LISTEN_PORT` in `ssh-svc.yaml` only "luckily" worked so far due to naming-related lint processing order. With the change from StatefulSet to Deployment, the processing order changed and linting failed, complaining that `config.server.SSH_LISTEN_PORT` was not defined yet.
The only fix I could see was to "properly" define the value in `values.yaml` instead of conditionally defining it in `helpers.tpl`. Maybe there's a better way?
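For illustration, the value then lives directly in `values.yaml`, roughly like this (the port number below is a placeholder, not necessarily the chart default):
```yml
gitea:
  config:
    server:
      SSH_LISTEN_PORT: 22        # placeholder port; defined statically instead of in helpers.tpl
```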
## Chart PVC Creation
I've adapted the automated PVC creation from another chart to be able to provide the `storageClassName` as I couldn't get dynamic provisioning for EFS going with the current implementation.
In addition, the naming and approach within the Gitea chart for PV creation is a bit unusual, and aligning it might be beneficial.
This is a semi-unrelated change that will be breaking for existing users, but this PR already includes a lot of breaking changes, so one more might not make it much worse...
- New `persistence.mount`: whether to mount an existing PVC (via `persistence.existingClaim`)
- New `persistence.create`: whether to create a new PVC (see the sketch below)
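
A minimal sketch of combining the two new flags, assuming a pre-provisioned claim (the claim name is hypothetical):
```yml
persistence:
  create: false                        # do not create a new PVC
  mount: true                          # mount an existing claim instead
  existingClaim: gitea-shared-storage  # hypothetical PVC name
```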
## Testing
As this PR does a lot of things, we need proper testing.
The Helm chart can be installed from the Git branch via `helm-git` as follows:
```bash
helm repo add gitea-charts git+https://gitea.com/gitea/helm-chart@/?ref=deployment
helm install gitea gitea-charts/gitea --version 0.0.0
```
It is **highly recommended** to test the chart in a dedicated namespace.
I've tested this myself with both `redis` and `redis-cluster` and it seemed to work fine.
I only did some basic operations, though, and we should do more niche testing before merging.
Exemplary `values.yaml` for testing (only needs a valid RWX storage class):
<details>
<summary>values.yaml</summary>
```yml
image:
  tag: "dev"
  pullPolicy: "Always"
  rootless: true

replicaCount: 2

persistence:
  enabled: true
  accessModes:
    - ReadWriteMany
  storageClass: FIXME

redis-cluster:
  enabled: false
  global:
    redis:
      password: gitea

gitea:
  config:
    indexer:
      ISSUE_INDEXER_ENABLED: true
      REPO_INDEXER_ENABLED: false
```
</details>
## Preferred setup
The preferred HA setup with respect to performance and stability might currently be as follows:
- Repos: RWX (e.g. EFS or Azure Files NFS)
- Issue indexer: Meilisearch (HA)
- Session and cache: Redis Cluster (HA)
- Attachments/Avatars: Minio (HA)

This will result in a ~10-pod HA setup overall (sketched below).
All pods have very low resource requests.
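
As a rough, hedged sketch of the component toggles only (the external Meilisearch and Minio settings belong in the new HA doc and are not shown here):
```yml
replicaCount: 3                  # example replica count, not a recommendation

persistence:
  accessModes:
    - ReadWriteMany              # RWX storage, e.g. EFS or Azure Files NFS

postgresql-ha:
  enabled: true                  # HA database

redis-cluster:
  enabled: true                  # HA session, cache and queue store
```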
fix #98
Co-authored-by: pat-s <pat-s@noreply.gitea.io>
Reviewed-on: https://gitea.com/gitea/helm-chart/pulls/437
Co-authored-by: pat-s <patrick.schratz@gmail.com>
Co-committed-by: pat-s <patrick.schratz@gmail.com>

{{/*
Expand the name of the chart.
*/}}
{{- define "gitea.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "gitea.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create a default worker name.
*/}}
{{- define "gitea.workername" -}}
{{- printf "%s-%s" .global.Release.Name .worker | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "gitea.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create image name and tag used by the deployment.
*/}}
{{- define "gitea.image" -}}
{{- $fullOverride := .Values.image.fullOverride | default "" -}}
{{- $registry := .Values.global.imageRegistry | default .Values.image.registry -}}
{{- $repository := .Values.image.repository -}}
{{- $separator := ":" -}}
{{- $tag := .Values.image.tag | default .Chart.AppVersion | toString -}}
{{- $rootless := ternary "-rootless" "" (.Values.image.rootless) -}}
{{- $digest := "" -}}
{{- if .Values.image.digest }}
{{- $digest = (printf "@%s" (.Values.image.digest | toString)) -}}
{{- end -}}
{{- if $fullOverride }}
{{- printf "%s" $fullOverride -}}
{{- else if $registry }}
{{- printf "%s/%s%s%s%s%s" $registry $repository $separator $tag $rootless $digest -}}
{{- else -}}
{{- printf "%s%s%s%s%s" $repository $separator $tag $rootless $digest -}}
{{- end -}}
{{- end -}}
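{{/*
Example (hypothetical values, not chart defaults): with image.registry "registry.example.com",
image.repository "gitea/gitea", image.tag "1.21.0" and image.rootless true, the helper above
renders "registry.example.com/gitea/gitea:1.21.0-rootless"; without a registry the prefix is
omitted, a configured image.digest is appended as "@<digest>", and image.fullOverride, if set,
is returned verbatim.
*/}}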
{{/*
Docker Image Registry Secret Names evaluating values as templates
*/}}
{{- define "gitea.images.pullSecrets" -}}
{{- $pullSecrets := .Values.imagePullSecrets -}}
{{- range .Values.global.imagePullSecrets -}}
{{- $pullSecrets = append $pullSecrets (dict "name" .) -}}
{{- end -}}
{{- if (not (empty $pullSecrets)) }}
imagePullSecrets:
{{ toYaml $pullSecrets }}
{{- end }}
{{- end -}}
{{/*
Storage Class
*/}}
{{- define "gitea.persistence.storageClass" -}}
{{- $storageClass := (tpl (default "" .Values.persistence.storageClass) .) | default (tpl (default "" .Values.global.storageClass) .) }}
{{- if $storageClass }}
storageClassName: {{ $storageClass | quote }}
{{- end }}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "gitea.labels" -}}
helm.sh/chart: {{ include "gitea.chart" . }}
app: {{ include "gitea.name" . }}
{{ include "gitea.selectorLabels" . }}
app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}
version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{- define "gitea.labels.actRunner" -}}
helm.sh/chart: {{ include "gitea.chart" . }}
app: {{ include "gitea.name" . }}-act-runner
{{ include "gitea.selectorLabels.actRunner" . }}
app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}
version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "gitea.selectorLabels" -}}
app.kubernetes.io/name: {{ include "gitea.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

{{- define "gitea.selectorLabels.actRunner" -}}
app.kubernetes.io/name: {{ include "gitea.name" . }}-act-runner
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{- define "postgresql-ha.dns" -}}
{{- if (index .Values "postgresql-ha").enabled -}}
{{- printf "%s-postgresql-ha-pgpool.%s.svc.%s:%g" .Release.Name .Release.Namespace .Values.clusterDomain (index .Values "postgresql-ha" "service" "ports" "postgresql") -}}
{{- end -}}
{{- end -}}

{{- define "postgresql.dns" -}}
{{- if (index .Values "postgresql").enabled -}}
{{- printf "%s-postgresql.%s.svc.%s:%g" .Release.Name .Release.Namespace .Values.clusterDomain .Values.postgresql.global.postgresql.service.ports.postgresql -}}
{{- end -}}
{{- end -}}
{ { - define "redis.dns" - } }
2024-07-07 09:57:16 +00:00
{ { - if and ( ( index . Values "redis-cluster" ) . enabled ) ( ( index . Values "redis" ) . enabled ) - } }
{ { - fail "redis and redis-cluster cannot be enabled at the same time. Please only choose one." - } }
{ { - else if ( index . Values "redis-cluster" ) . enabled - } }
{ { - printf "redis+cluster://:%s@%s-redis-cluster-headless.%s.svc.%s:%g/0?pool_size=100&idle_timeout=180s&" ( index . Values "redis-cluster" ) . global . redis . password . Release . Name . Release . Namespace . Values . clusterDomain ( index . Values "redis-cluster" ) . service . ports . redis - } }
2024-07-07 09:57:16 +00:00
{ { - else if ( index . Values "redis" ) . enabled - } }
{ { - printf "redis://:%s@%s-redis-headless.%s.svc.%s:%g/0?pool_size=100&idle_timeout=180s&" ( index . Values "redis" ) . global . redis . password . Release . Name . Release . Namespace . Values . clusterDomain ( index . Values "redis" ) . master . service . ports . redis - } }
{{- end -}}
{{- end -}}
{ { - define "redis.port" - } }
{ { - if ( index . Values "redis-cluster" ) . enabled - } }
{ { ( index . Values "redis-cluster" ) . service . ports . redis } }
2024-07-07 09:57:16 +00:00
{ { - else if ( index . Values "redis" ) . enabled - } }
{ { ( index . Values "redis" ) . master . service . ports . redis } }
{{- end -}}
{{- end -}}
{ { - define "redis.servicename" - } }
{ { - if ( index . Values "redis-cluster" ) . enabled - } }
{ { - printf "%s-redis-cluster-headless.%s.svc.%s" . Release . Name . Release . Namespace . Values . clusterDomain - } }
2024-07-07 09:57:16 +00:00
{ { - else if ( index . Values "redis" ) . enabled - } }
{ { - printf "%s-redis-headless.%s.svc.%s" . Release . Name . Release . Namespace . Values . clusterDomain - } }
{{- end -}}
{{- end -}}
{{- define "gitea.default_domain" -}}
{{- printf "%s-http.%s.svc.%s" (include "gitea.fullname" .) .Release.Namespace .Values.clusterDomain -}}
{{- end -}}
{{- define "gitea.ldap_settings" -}}
{{- $idx := index . 0 }}
{{- $values := index . 1 }}
{{- if not (hasKey $values "bindDn") -}}
{{- $_ := set $values "bindDn" "" -}}
{{- end -}}
{{- if not (hasKey $values "bindPassword") -}}
{{- $_ := set $values "bindPassword" "" -}}
{{- end -}}
{{- $flags := list "notActive" "skipTlsVerify" "allowDeactivateAll" "synchronizeUsers" "attributesInBind" -}}
{{- range $key, $val := $values -}}
{{- if and (ne $key "enabled") (ne $key "existingSecret") -}}
{{- if eq $key "bindDn" -}}
{{- printf "--%s \"${GITEA_LDAP_BIND_DN_%d}\" " ($key | kebabcase) ($idx) -}}
{{- else if eq $key "bindPassword" -}}
{{- printf "--%s \"${GITEA_LDAP_PASSWORD_%d}\" " ($key | kebabcase) ($idx) -}}
{{- else if eq $key "port" -}}
{{- printf "--%s %d " $key ($val | int) -}}
{{- else if has $key $flags -}}
{{- printf "--%s " ($key | kebabcase) -}}
{{- else -}}
{{- printf "--%s %s " ($key | kebabcase) ($val | squote) -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{ { - define "gitea.oauth_settings" - } }
2021-12-20 22:43:55 +08:00
{ { - $idx : = index . 0 } }
{ { - $values : = index . 1 } }
{ { - if not ( hasKey $values "key" ) - } }
{ { - $_ : = set $values "key" ( printf "${GITEA_OAUTH_KEY_%d}" $idx ) - } }
{ { - end - } }
{ { - if not ( hasKey $values "secret" ) - } }
{ { - $_ : = set $values "secret" ( printf "${GITEA_OAUTH_SECRET_%d}" $idx ) - } }
{ { - end - } }
{ { - range $key , $val : = $values - } }
2021-12-21 18:59:18 +08:00
{ { - if ne $key "existingSecret" - } }
2021-12-20 22:43:55 +08:00
{ { - printf "--%s %s " ( $key | kebabcase ) ( $val | quote ) - } }
2021-03-01 20:24:11 +08:00
{ { - end - } }
{ { - end - } }
2021-04-29 17:12:48 +08:00
{ { - end - } }
{{- define "gitea.public_protocol" -}}
{{- if and .Values.ingress.enabled (gt (len .Values.ingress.tls) 0) -}}
https
{{- else -}}
{{ .Values.gitea.config.server.PROTOCOL }}
{{- end -}}
{{- end -}}
{{- define "gitea.act_runner.local_root_url" -}}
{{- if not .Values.gitea.config.server.LOCAL_ROOT_URL -}}
{{- printf "http://%s-http:%.0f" (include "gitea.fullname" .) .Values.service.http.port -}}
{{- else -}}
{{/* fallback for allowing to overwrite this value via inline config */}}
{{- .Values.gitea.config.server.LOCAL_ROOT_URL -}}
{{- end -}}
{{- end -}}
{{- define "gitea.inline_configuration" -}}
{{- include "gitea.inline_configuration.init" . -}}
{{- include "gitea.inline_configuration.defaults" . -}}
{{- $generals := list -}}
{{- $inlines := dict -}}
{{- range $key, $value := .Values.gitea.config }}
{{- if kindIs "map" $value }}
{{- if gt (len $value) 0 }}
{{- $section := default list (get $inlines $key) -}}
{{- range $n_key, $n_value := $value }}
{{- $section = append $section (printf "%s=%v" $n_key $n_value) -}}
{{- end }}
{{- $_ := set $inlines $key (join "\n" $section) -}}
{{- end -}}
{{- else }}
{{- if or (eq $key "APP_NAME") (eq $key "RUN_USER") (eq $key "RUN_MODE") -}}
{{- $generals = append $generals (printf "%s=%s" $key $value) -}}
{{- else -}}
{{- (printf "Key %s cannot be on top level of configuration" $key) | fail -}}
{{- end -}}
{{- end }}
{{- end }}
{{- $_ := set $inlines "_generals_" (join "\n" $generals) -}}
{{- toYaml $inlines -}}
{{- end -}}
{ { - define "gitea.inline_configuration.init" - } }
{ { - if not ( hasKey . Values . gitea . config "cache" ) - } }
{ { - $_ : = set . Values . gitea . config "cache" dict - } }
{ { - end - } }
{ { - if not ( hasKey . Values . gitea . config "server" ) - } }
{ { - $_ : = set . Values . gitea . config "server" dict - } }
{ { - end - } }
{ { - if not ( hasKey . Values . gitea . config "metrics" ) - } }
{ { - $_ : = set . Values . gitea . config "metrics" dict - } }
{ { - end - } }
{ { - if not ( hasKey . Values . gitea . config "database" ) - } }
{ { - $_ : = set . Values . gitea . config "database" dict - } }
{ { - end - } }
{ { - if not ( hasKey . Values . gitea . config "security" ) - } }
{ { - $_ : = set . Values . gitea . config "security" dict - } }
{ { - end - } }
{ { - if not . Values . gitea . config . repository - } }
{ { - $_ : = set . Values . gitea . config "repository" dict - } }
{ { - end - } }
{ { - if not ( hasKey . Values . gitea . config "oauth2" ) - } }
{ { - $_ : = set . Values . gitea . config "oauth2" dict - } }
{ { - end - } }
{{- if not (hasKey .Values.gitea.config "session") -}}
{{- $_ := set .Values.gitea.config "session" dict -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config "queue") -}}
{{- $_ := set .Values.gitea.config "queue" dict -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config "queue.issue_indexer") -}}
{{- $_ := set .Values.gitea.config "queue.issue_indexer" dict -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config "indexer") -}}
{{- $_ := set .Values.gitea.config "indexer" dict -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config "actions") -}}
{{- $_ := set .Values.gitea.config "actions" dict -}}
{{- end -}}
{{- end -}}
{ { - define "gitea.inline_configuration.defaults" - } }
{ { - include "gitea.inline_configuration.defaults.server" . - } }
{ { - include "gitea.inline_configuration.defaults.database" . - } }
{ { - if not . Values . gitea . config . repository . ROOT - } }
{ { - $_ : = set . Values . gitea . config . repository "ROOT" "/data/git/gitea-repositories" - } }
{ { - end - } }
{ { - if not . Values . gitea . config . security . INSTALL_LOCK - } }
{ { - $_ : = set . Values . gitea . config . security "INSTALL_LOCK" "true" - } }
{ { - end - } }
{ { - if not ( hasKey . Values . gitea . config . metrics "ENABLED" ) - } }
{ { - $_ : = set . Values . gitea . config . metrics "ENABLED" . Values . gitea . metrics . enabled - } }
{ { - end - } }
2024-11-30 13:59:29 +00:00
{ { - if and ( not ( hasKey . Values . gitea . config . metrics "TOKEN" ) ) ( . Values . gitea . metrics . token ) ( . Values . gitea . metrics . enabled ) - } }
{ { - $_ : = set . Values . gitea . config . metrics "TOKEN" . Values . gitea . metrics . token - } }
{ { - end - } }
{{- /* redis queue */ -}}
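{{- /*
  If either the redis-cluster or the redis sub-chart is enabled, point the [queue],
  [session] and [cache] sections at it via the "redis.dns" helper. Otherwise fall
  back to single-pod-friendly defaults ("level" queue, "memory" session and cache)
  unless the user configured these keys explicitly.
*/ -}}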
{{- if or ((index .Values "redis-cluster").enabled) ((index .Values "redis").enabled) -}}
{ { - $_ : = set . Values . gitea . config . queue "TYPE" "redis" - } }
{ { - $_ : = set . Values . gitea . config . queue "CONN_STR" ( include "redis.dns" . ) - } }
{ { - $_ : = set . Values . gitea . config . session "PROVIDER" "redis" - } }
{ { - $_ : = set . Values . gitea . config . session "PROVIDER_CONFIG" ( include "redis.dns" . ) - } }
{{- $_ := set .Values.gitea.config.cache "ADAPTER" "redis" -}}
{{- $_ := set .Values.gitea.config.cache "HOST" (include "redis.dns" .) -}}
{{- else -}}
{{- if not (get .Values.gitea.config.session "PROVIDER") -}}
{{- $_ := set .Values.gitea.config.session "PROVIDER" "memory" -}}
{{- end -}}
{{- if not (get .Values.gitea.config.session "PROVIDER_CONFIG") -}}
{{- $_ := set .Values.gitea.config.session "PROVIDER_CONFIG" "" -}}
{{- end -}}
{{- if not (get .Values.gitea.config.queue "TYPE") -}}
{{- $_ := set .Values.gitea.config.queue "TYPE" "level" -}}
{{- end -}}
{{- if not (get .Values.gitea.config.queue "CONN_STR") -}}
{{- $_ := set .Values.gitea.config.queue "CONN_STR" "" -}}
{{- end -}}
{{- if not (get .Values.gitea.config.cache "ADAPTER") -}}
{{- $_ := set .Values.gitea.config.cache "ADAPTER" "memory" -}}
{{- end -}}
{{- if not (get .Values.gitea.config.cache "HOST") -}}
{{- $_ := set .Values.gitea.config.cache "HOST" "" -}}
{{- end -}}
{{- end -}}
{{- if not .Values.gitea.config.indexer.ISSUE_INDEXER_TYPE -}}
{{- $_ := set .Values.gitea.config.indexer "ISSUE_INDEXER_TYPE" "db" -}}
{{- end -}}
{{- if not .Values.gitea.config.actions.ENABLED -}}
{{- $_ := set .Values.gitea.config.actions "ENABLED" (ternary "true" "false" .Values.actions.enabled) -}}
{{- end -}}
{{- end -}}
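{{- /*
  Defaults for the [server] section: HTTP/SSH ports, PROTOCOL, DOMAIN (taken from the
  first ingress host if any, otherwise the default-domain helper), ROOT_URL,
  APP_DATA_PATH, ENABLE_PPROF and the rootless-image specifics (SSH_LISTEN_PORT 2222,
  START_SSH_SERVER). Most keys are only set when not already provided by the user;
  LOCAL_ROOT_URL is derived from the act_runner helper whenever actions are enabled.
*/ -}}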
{ { - define "gitea.inline_configuration.defaults.server" - } }
{ { - if not ( hasKey . Values . gitea . config . server "HTTP_PORT" ) - } }
{ { - $_ : = set . Values . gitea . config . server "HTTP_PORT" . Values . service . http . port - } }
{ { - end - } }
{ { - if not . Values . gitea . config . server . PROTOCOL - } }
{ { - $_ : = set . Values . gitea . config . server "PROTOCOL" "http" - } }
{ { - end - } }
{ { - if not ( . Values . gitea . config . server . DOMAIN ) - } }
{ { - if gt ( len . Values . ingress . hosts ) 0 - } }
{ { - $_ : = set . Values . gitea . config . server "DOMAIN" ( tpl ( index . Values . ingress . hosts 0 ) . host $ ) - } }
{{- else -}}
{{- $_ := set .Values.gitea.config.server "DOMAIN" (include "gitea.default_domain" .) -}}
{{- end -}}
{{- end -}}
{{- if not .Values.gitea.config.server.ROOT_URL -}}
{ { - $_ : = set . Values . gitea . config . server "ROOT_URL" ( printf "%s://%s" ( include "gitea.public_protocol" . ) . Values . gitea . config . server . DOMAIN ) - } }
{{- end -}}
{{- if .Values.actions.enabled -}}
{{- $_ := set .Values.gitea.config.server "LOCAL_ROOT_URL" (include "gitea.act_runner.local_root_url" .) -}}
{{- end -}}
{{- if not .Values.gitea.config.server.SSH_DOMAIN -}}
{{- $_ := set .Values.gitea.config.server "SSH_DOMAIN" .Values.gitea.config.server.DOMAIN -}}
{{- end -}}
{{- if not .Values.gitea.config.server.SSH_PORT -}}
{{- $_ := set .Values.gitea.config.server "SSH_PORT" .Values.service.ssh.port -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config.server "SSH_LISTEN_PORT") -}}
{{- if not .Values.image.rootless -}}
{{- $_ := set .Values.gitea.config.server "SSH_LISTEN_PORT" .Values.gitea.config.server.SSH_PORT -}}
{{- else -}}
{{- $_ := set .Values.gitea.config.server "SSH_LISTEN_PORT" "2222" -}}
{{- end -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config.server "START_SSH_SERVER") -}}
{{- if .Values.image.rootless -}}
{{- $_ := set .Values.gitea.config.server "START_SSH_SERVER" "true" -}}
{{- end -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config.server "APP_DATA_PATH") -}}
{{- $_ := set .Values.gitea.config.server "APP_DATA_PATH" "/data" -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config.server "ENABLE_PPROF") -}}
{{- $_ := set .Values.gitea.config.server "ENABLE_PPROF" false -}}
{{- end -}}
{{- end -}}
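{{- /*
  Defaults for the [database] section: when the postgresql-ha or postgresql sub-chart
  is enabled, DB_TYPE, NAME, USER and PASSWD are wired to the corresponding sub-chart
  values; HOST is only set when the user did not provide one.
*/ -}}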
{ { - define "gitea.inline_configuration.defaults.database" - } }
{ { - if ( index . Values "postgresql-ha" "enabled" ) - } }
{ { - $_ : = set . Values . gitea . config . database "DB_TYPE" "postgres" - } }
{ { - if not ( . Values . gitea . config . database . HOST ) - } }
{ { - $_ : = set . Values . gitea . config . database "HOST" ( include "postgresql-ha.dns" . ) - } }
{{- end -}}
{ { - $_ : = set . Values . gitea . config . database "NAME" ( index . Values "postgresql-ha" "global" "postgresql" "database" ) - } }
{ { - $_ : = set . Values . gitea . config . database "USER" ( index . Values "postgresql-ha" "global" "postgresql" "username" ) - } }
{ { - $_ : = set . Values . gitea . config . database "PASSWD" ( index . Values "postgresql-ha" "global" "postgresql" "password" ) - } }
{{- end -}}
{ { - if ( index . Values "postgresql" "enabled" ) - } }
{ { - $_ : = set . Values . gitea . config . database "DB_TYPE" "postgres" - } }
{ { - if not ( . Values . gitea . config . database . HOST ) - } }
{ { - $_ : = set . Values . gitea . config . database "HOST" ( include "postgresql.dns" . ) - } }
{ { - end - } }
{ { - $_ : = set . Values . gitea . config . database "NAME" . Values . postgresql . global . postgresql . auth . database - } }
{ { - $_ : = set . Values . gitea . config . database "USER" . Values . postgresql . global . postgresql . auth . username - } }
{ { - $_ : = set . Values . gitea . config . database "PASSWD" . Values . postgresql . global . postgresql . auth . password - } }
{ { - end - } }
{{- end -}}
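{{- /*
  Extra volume mounts for the init and main containers. Both helpers fall back to the
  deprecated `extraVolumeMounts` value when the container-specific list is empty.
*/ -}}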
{ { - define "gitea.init-additional-mounts" - } }
{ { - / * Honor the deprecated extraVolumeMounts variable when defined * / - } }
{ { - if gt ( len . Values . extraInitVolumeMounts ) 0 - } }
{ { - toYaml . Values . extraInitVolumeMounts - } }
{ { - else if gt ( len . Values . extraVolumeMounts ) 0 - } }
{ { - toYaml . Values . extraVolumeMounts - } }
{ { - end - } }
{ { - end - } }
{ { - define "gitea.container-additional-mounts" - } }
{ { - / * Honor the deprecated extraVolumeMounts variable when defined * / - } }
{ { - if gt ( len . Values . extraContainerVolumeMounts ) 0 - } }
{ { - toYaml . Values . extraContainerVolumeMounts - } }
{ { - else if gt ( len . Values . extraVolumeMounts ) 0 - } }
{ { - toYaml . Values . extraVolumeMounts - } }
{ { - end - } }
{ { - end - } }
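{{- /*
  Name of the secret that holds the GPG signing key: `signing.existingSecret` if set,
  otherwise "<fullname>-gpg-key".
*/ -}}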
{ { - define "gitea.gpg-key-secret-name" - } }
{ { default ( printf "%s-gpg-key" ( include "gitea.fullname" . ) ) . Values . signing . existingSecret } }
{ { - end - } }
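{{- /* ServiceAccount name: `serviceAccount.name` if set, otherwise the chart fullname. */ -}}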
{ { - define "gitea.serviceAccountName" - } }
{ { . Values . serviceAccount . name | default ( include "gitea.fullname" . ) } }
{ { - end - } }
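{{- /*
  Validate `gitea.admin.passwordMode` and render it; template rendering fails for any
  value other than the three supported modes.
*/ -}}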
{ { - define "gitea.admin.passwordMode" - } }
{ { - if has . Values . gitea . admin . passwordMode ( tuple "keepUpdated" "initialOnlyNoReset" "initialOnlyRequireReset" ) - } }
{ { . Values . gitea . admin . passwordMode } }
{ { - else - } }
{ { printf "gitea.admin.passwordMode must be set to one of 'keepUpdated', 'initialOnlyNoReset', or 'initialOnlyRequireReset'. Received: '%s'" . Values . gitea . admin . passwordMode | fail } }
{ { - end - } }
{ { - end - } }
{{/* Create a functioning probe object for rendering. Given argument must be either a livenessProbe, readinessProbe, or startupProbe */}}
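{{- /*
  The helper drops the chart-internal `enabled` flag and, if a custom probe method
  (exec, httpGet or grpc) is supplied, also removes the chart's default tcpSocket
  block so only one probe method is rendered. A hypothetical invocation (value name
  assumed, not taken from this file) would be:
    include "gitea.deployment.probe" .Values.gitea.livenessProbe
*/ -}}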
{ { - define "gitea.deployment.probe" - } }
{ { - $probe : = unset . "enabled" - } }
{ { - $probeKeys : = keys $probe - } }
{ { - $containsCustomMethod : = false - } }
{ { - $chartDefaultMethod : = "tcpSocket" - } }
{ { - $nonChartDefaultMethods : = list "exec" "httpGet" "grpc" - } }
{ { - range $probeKeys - } }
{ { - if has . $nonChartDefaultMethods - } }
{ { - $containsCustomMethod = true - } }
{ { - end - } }
{ { - end - } }
{ { - if $containsCustomMethod - } }
{ { - $probe = unset . $chartDefaultMethod - } }
{ { - end - } }
{ { - toYaml $probe - } }
{ { - end - } }
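{{- /* Name of the secret that stores the metrics token: "<fullname>-metrics-secret". */ -}}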
{ { - define "gitea.metrics-secret-name" - } }
{ { default ( printf "%s-metrics-secret" ( include "gitea.fullname" . ) ) } }
{ { - end - } }