[Breaking] Add HA-support; switch to Deployment
(#437)
# Changes

A big shoutout to @luhahn for all his work in #205 which served as the base for this PR.

## Documentation

- [x] After thinking about it for some time, I still prefer the distinct option (as started in #350), i.e. having a standalone "HA" doc under `docs/ha-setup.md` to not end up with a very long README (which is already quite long). Most of the information below should go into it, with more details and explanations behind all of the individual components.

## Chart deps

- ~~Adds `meilisearch` as a chart dependency for a HA-ready issue indexer. Only works with >= Gitea 1.20.~~
- ~~Adds `redis` as a chart dependency for a HA-ready session and queue store.~~
- Adds `redis-cluster` as a chart dependency for a HA-ready session and queue store (alternative to `redis`). Only works with >= Gitea 1.19.2.
- Removes `memcached` in favor of `redis-cluster`.
- Adds `postgresql-ha` as the default DB dependency in favor of `postgresql`.

## Adds smart HA chart logic

The goal is to set smart config values that result in a HA-ready Gitea deployment if `replicaCount` > 1. If `replicaCount` > 1:

- `gitea.config.session.PROVIDER` is automatically set to `redis-cluster`
- `gitea.config.indexer.REPO_INDEXER_ENABLED` is automatically set to `false` unless the value is `elasticsearch` or `meilisearch`
- `redis-cluster` is used for `[queue]`, `[cache]` and `[session]`

Configuration of external instances of `meilisearch` and `minio` is documented in a new markdown doc.

## Deployment vs StatefulSet

Given all the discussions about this lately (#428), I think we could use both. In the end, we do not have the requirement for a sequential pod scale-up/scale-down as it would happen in StatefulSets. On the other side, we do not have actual stateless pods as we are attaching a RWX volume to the deployment. Yet, because we do not have a leader-election requirement, spawning the pods as a Deployment makes "Rolling Updates" easier and also signals users that there is no leader-election logic, i.e. each pod can be "destroyed" at any time without causing interruption. Hence I think we should be able to switch from a StatefulSet to a Deployment, even in the single-replica case.

This change also brought up a templating/linting issue: the definition of `.Values.gitea.config.server.SSH_LISTEN_PORT` in `ssh-svc.yaml` just "luckily" worked so far due to naming-related lint processing. Due to the change from StatefulSet to Deployment, the processing order changed and caused a failure complaining about `config.server.SSH_LISTEN_PORT` not being defined yet. The only way I could see to fix this was to "properly" define the value in `values.yaml` instead of conditionally defining it in `helpers.tpl`. Maybe there's a better way?

## Chart PVC Creation

I've adapted the automated PVC creation from another chart to be able to provide the `storageClassName`, as I couldn't get dynamic provisioning for EFS going with the current implementation. In addition, the naming and approach within the Gitea chart for PV creation is a bit unusual and aligning it might be beneficial. This is a semi-unrelated change which will result in a breaking change for existing users, but this PR includes a lot of breaking changes already, so including another one might not make it much worse...

- New `persistence.mount`: whether to mount an existing PVC (via `persistence.existingClaim`)
- New `persistence.create`: whether to create a new PVC
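A minimal sketch of how the two new toggles combine to mount a pre-existing claim instead of creating one (the claim name is a placeholder):

```yaml
persistence:
  enabled: true
  create: false   # do not let the chart create a PVC...
  mount: true     # ...but mount one that already exists
  claimName: my-existing-rwx-claim # placeholder: name of your pre-provisioned PVC
```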
## Testing

As this PR does a lot of things, we need proper testing.

The helm chart can be installed from the Git branch via `helm-git` as follows:

```bash
helm repo add gitea-charts git+https://gitea.com/gitea/helm-chart@/?ref=deployment
helm install gitea --version 0.0.0
```

It is **highly recommended** to test the chart in a dedicated namespace. I've tested this myself with both `redis` and `redis-cluster` and it seemed to work fine. I just did some basic operations though and we should do more niche testing before merging.

Exemplary `values.yml` for testing (only needs a valid RWX storage class):

<details>
<summary>values.yaml</summary>

```yml
image:
  tag: "dev"
  pullPolicy: "Always"
  rootless: true

replicaCount: 2

persistence:
  enabled: true
  accessModes:
    - ReadWriteMany
  storageClass: FIXME

redis-cluster:
  enabled: false
  global:
    redis:
      password: gitea

gitea:
  config:
    indexer:
      ISSUE_INDEXER_ENABLED: true
      REPO_INDEXER_ENABLED: false
```

</details>

## Preferred setup

The preferred HA setup with respect to performance and stability might currently be as follows:

- Repos: RWX (e.g. EFS or Azurefiles NFS)
- Issue indexer: Meilisearch (HA)
- Session and cache: Redis Cluster (HA)
- Attachments/Avatars: Minio (HA)

This will result in a ~ 10-pod HA setup overall. All pods have very low resource requests.

fix #98

Co-authored-by: pat-s <pat-s@noreply.gitea.io>
Reviewed-on: https://gitea.com/gitea/helm-chart/pulls/437
Co-authored-by: pat-s <patrick.schratz@gmail.com>
Co-committed-by: pat-s <patrick.schratz@gmail.com>
commit 8e27bb9bae (parent f66a192d45)
Chart.lock (13 lines changed)

```diff
@@ -1,9 +1,12 @@
 dependencies:
-- name: memcached
-  repository: oci://registry-1.docker.io/bitnamicharts
-  version: 6.3.14
 - name: postgresql
   repository: oci://registry-1.docker.io/bitnamicharts
   version: 12.4.1
-digest: sha256:02d4846bf416038a42658dbca8f8001d0e3ce967b00e990048f8d420065c33fd
-generated: "2023-04-28T09:32:05.295167+02:00"
+- name: postgresql-ha
+  repository: oci://registry-1.docker.io/bitnamicharts
+  version: 11.6.1
+- name: redis-cluster
+  repository: oci://registry-1.docker.io/bitnamicharts
+  version: 8.4.4
+digest: sha256:3b203051c9fb8df9e771a4d67c276190a1c63aae9bf980ef3676e2a51b2f56c7
+generated: "2023-05-13T21:47:51.823348+02:00"
```
Chart.yaml (16 lines changed)

```diff
@@ -34,13 +34,17 @@ maintainers:
 # Bitnami charts are served from GitHub CDN - See https://github.com/bitnami/charts/issues/10539 for details
 dependencies:
   # OCI registry: https://blog.bitnami.com/2023/01/bitnami-helm-charts-available-as-oci.html (2023-01)
-  # Chart release date: 2023-04
-  - name: memcached
-    repository: oci://registry-1.docker.io/bitnamicharts
-    version: 6.3.14
-    condition: memcached.enabled
   # Chart release date: 2023-04
   - name: postgresql
     repository: oci://registry-1.docker.io/bitnamicharts
     version: 12.4.1
     condition: postgresql.enabled
+  # Chart release date: 2023-05
+  - name: postgresql-ha
+    repository: oci://registry-1.docker.io/bitnamicharts
+    version: 11.6.1
+    condition: postgresql-ha.enabled
+  # Chart release date: 2023-04
+  - name: redis-cluster
+    repository: oci://registry-1.docker.io/bitnamicharts
+    version: 8.4.4
+    condition: redis-cluster.enabled
```
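Since all dependencies are now pulled from Bitnami's OCI registry, refreshing the lock file should only need the standard dependency command (no extra `helm repo add` is required for OCI references):

```bash
helm dependency update
```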
docs/ha-setup.md (new file, 175 lines)

```diff
@@ -0,0 +1,175 @@
+# High Availability
+
+**Experimental**
+
+All components (in-memory DB, volume/asset storage, code indexer) used by Gitea must be deployed in a HA-ready fashion to achieve a fully HA-ready Gitea deployment.
+The following document explains how to achieve this for all individual components.
+
+The resulting Gitea deployment will consist of ~ 10 pods (depending on the chosen components and their replicas).
+One should evaluate upfront whether a HA deployment is required, as switching between HA and non-HA comes with some effort.
+For production instances, HA is always recommended to increase uptime and allow for a frictionless update process.
+
+A general comment about chart dependencies and external services:
+Instead of relying on chart dependencies, it is often better to rely on external (managed) instances (in-memory database, asset storage provider, database, etc.).
+Many cloud providers offer such services, at least for databases or in-memory databases.
+They might cost a bit more than a self-hosted k8s variant but are usually easier to maintain and scale, if needed.
+Also, they can be centrally managed and are not tied to the Gitea helm chart or namespace.
+Please consider using external services before you start with your Gitea HA setup; it will make your life (and the life of the Gitea maintainers) easier.
+
+This helm chart tries to help as much as possible to simplify and assert the provisioning of a HA-ready Gitea instance by applying smart conditionals if `replicaCount` is set to a value > 1.
+Nevertheless, we cannot guarantee that every possible combination of Gitea settings works together perfectly in a HA setup.
+As general advice, we recommend keeping a test environment on the side on which to test possible changes/upgrades before applying them to a production installation.
+
+## Requirements for HA
+
+Storage-wise, the HA-Gitea setup requires a RWX file system which can be shared among the deployment-based replica pods.
+In addition, the following components are required for full HA-readiness:
+
+- A HA-ready issue (and optionally code) indexer: `elasticsearch` or `meilisearch`
+- A HA-ready external object/asset storage (`minio`) (optional, assets can also be stored on the RWX file system)
+- A HA-ready cache (`redis-cluster`)
+- A HA-ready DB
+
+`postgresql.enabled`, which defaults to `true`, must be set to `false` for a HA setup.
+The default `postgresql` chart dependency is not HA-ready (there's a dedicated `postgresql-ha` chart).
+
+The following sections discuss each of the components in more detail.
+Note that for each component discussed, the shown configuration only provides a (working) starting point, not necessarily the most optimal setup.
+We will try to optimize this document over time as we gain more experience with HA setups from users.
```
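To make the requirements above concrete, here is a minimal, hedged sketch of an HA-oriented `values.yaml` for this chart; the storage class name is a placeholder, and the indexer/redis specifics follow the sections below:

```yaml
replicaCount: 3

persistence:
  enabled: true
  accessModes:
    - ReadWriteMany
  storageClass: my-rwx-storage-class # placeholder: any RWX-capable class (e.g. EFS- or Azurefiles-backed)

redis-cluster:
  enabled: true

postgresql:
  enabled: false

postgresql-ha:
  enabled: true

gitea:
  config:
    indexer:
      ISSUE_INDEXER_TYPE: db      # or meilisearch/elasticsearch, see "Indexers" below
      REPO_INDEXER_ENABLED: false # bleve is not HA-ready
```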
docs/ha-setup.md (continued):

```diff
+## Indexers (Issues and code/repo)
+
+The default code indexer `bleve` does not support multiple connections and hence cannot be used in a HA setup.
+Alternatives are `elasticsearch` and `meilisearch` (as of >= 1.19.2).
+Unless you have an existing `elasticsearch` cluster, we recommend using `meilisearch` as it is faster and requires way less resources.
+
+Unfortunately, `meilisearch` only supports the `ISSUE_INDEXER` and not the `REPO_INDEXER` yet ([tracking issue](https://github.com/go-gitea/gitea/pull/24149)).
+This means that the `REPO_INDEXER` must still be disabled for a HA setup right now.
+An alternative to the two options above for the `ISSUE_INDEXER` is `"db"`; however, we recommend just going with `meilisearch` in this case and not bothering the DB with indexing.
+
+To configure `meilisearch` within Gitea, do the following:
+
+```yml
+gitea:
+  config:
+    indexer:
+      ISSUE_INDEXER_CONN_STR: http://meilisearch.<namespace>.svc.cluster.local:7700
+      ISSUE_INDEXER_ENABLED: true
+      ISSUE_INDEXER_TYPE: meilisearch
+      REPO_INDEXER_ENABLED: false
+      # REPO_INDEXER_TYPE: meilisearch # not yet working
+```
+
+Unfortunately, `meilisearch` cannot be deployed in HA as of now.
+Nevertheless, it allows for multiple Gitea requests at the same time and can therefore be used in a HA setup.
+
+Exemplary configuration for the [meilisearch-kubernetes](https://github.com/meilisearch/meilisearch-kubernetes/tree/main/charts/meilisearch) chart:
+
+```yaml
+persistence:
+  enabled: true
+  accessMode: ReadWriteOnce
+  size: 5Gi
+```
```
docs/ha-setup.md (continued):

```diff
+## Cache, session and queue
+
+A `redis` instance is required for the in-memory cache.
+Two options exist:
+
+- `redis`
+- `redis-cluster`
+
+The chart provides `redis-cluster` as a dependency as this one can be used for both HA and non-HA setups.
+You're also welcome to go with `redis` if you prefer or already have a running instance.
+
+It should be noted that `redis-cluster` support is only available starting with Gitea 1.19.2.
+You can also configure an external (managed) `redis` instance to be used.
+To do so, you need to set the following configuration values yourself (see the sketch after this list):
+
+- `gitea.config.queue.TYPE`: `redis`
+- `gitea.config.queue.CONN_STR`: `<your redis connection string>`
+
+- `gitea.config.session.PROVIDER`: `redis`
+- `gitea.config.session.PROVIDER_CONFIG`: `<your redis connection string>`
+
+- `gitea.config.cache.ENABLED`: `true`
+- `gitea.config.cache.ADAPTER`: `redis`
+- `gitea.config.cache.HOST`: `<your redis connection string>`
```
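Put together as a `values.yaml` fragment, the external-redis wiring above might look like this (the connection string is a placeholder for your instance):

```yaml
gitea:
  config:
    queue:
      TYPE: redis
      CONN_STR: redis://:password@my-redis.example.com:6379/0 # placeholder
    session:
      PROVIDER: redis
      PROVIDER_CONFIG: redis://:password@my-redis.example.com:6379/0 # placeholder
    cache:
      ENABLED: true
      ADAPTER: redis
      HOST: redis://:password@my-redis.example.com:6379/0 # placeholder
```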
docs/ha-setup.md (continued):

```diff
+## Object and asset storage
+
+Object/asset storage refers to the storage of attachments, avatars, LFS files, etc.
+While most of these can be stored on the RWX file system, it is recommended to use an external S3-compatible object storage for these, mainly for performance reasons.
+
+By default the chart provisions a single RWO volume to store everything (repos, avatars, packages, etc.).
+This volume cannot be mounted by multiple pods.
+Hence, a RWX volume is required and (optionally) an external HA-ready object storage.
+
+> **Note:** Double-check that the file permissions are set correctly on the RWX volume! That is, everything should be owned by the `git` user which usually has `uid=1000` and `gid=1000`.
+
+To use `minio` you need to deploy and configure an external `minio` instance yourself and explicitly define the `STORAGE_TYPE` values as shown below.
+
+Note that `MINIO_BUCKET` here is just a name and does not refer to an S3 bucket.
+It's the root access point for all objects belonging to the respective application, i.e., to Gitea in this case.
+
+```yaml
+gitea:
+  config:
+    attachment:
+      STORAGE_TYPE: minio
+    lfs:
+      STORAGE_TYPE: minio
+    picture:
+      AVATAR_STORAGE_TYPE: minio
+    "storage.packages":
+      STORAGE_TYPE: minio
+
+    storage:
+      MINIO_ENDPOINT: <minio-headless.<namespace>.svc.cluster.local:9000>
+      MINIO_LOCATION: <location>
+      MINIO_ACCESS_KEY_ID: <access key>
+      MINIO_SECRET_ACCESS_KEY: <secret key>
+      MINIO_BUCKET: <bucket name>
+      MINIO_USE_SSL: false
+```
+
+Exemplary configuration for the [bitnami minio](https://github.com/bitnami/charts/blob/main/bitnami/minio) chart:
+
+```yaml
+auth:
+  rootUser: minio
+mode: distributed
+replicaCount: 4
+persistence:
+  enabled: true
+  size: 20Gi
+  accessModes:
+    - ReadWriteOnce
+```
```
docs/ha-setup.md (continued):

```diff
+## Database
+
+If you do not have an HA-ready DB, using a managed database service in the cloud might be the easiest and most robust solution.
+Remember: disable the built-in `postgresql` dependency and configure the database connection manually via `gitea.config.database`:
+
+```yml
+gitea:
+  database:
+    builtIn:
+      postgresql:
+        enabled: false
+  config:
+    database:
+      DB_TYPE: postgres
+      HOST: <host>
+      NAME: <name>
+      USER: <user>
+```
+
+## Known issues
+
+- Currently, Cron jobs are run on all replicas as no leader election is implemented.
+  See [https://github.com/go-gitea/gitea/issues/13791](https://github.com/go-gitea/gitea/issues/13791) for a discussion and possible solution.
+
+- Running with multiple replicas slows down Gitea a bit, i.e. page loading time increases.
```
helpers.tpl

```diff
@@ -2,6 +2,27 @@
 {{/*
 Expand the name of the chart.
 */}}
+
+{{- /* multiple replicas assertions */ -}}
+{{- if gt .Values.replicaCount 1.0 -}}
+{{- fail "When using multiple replicas, a RWX file system is required" -}}
+{{- if eq (get (.Values.persistence.accessModes 0) "ReadWriteOnce") -}}
+{{- fail "When using multiple replicas, a RWX file system is required" -}}
+{{- end }}
+
+{{- if eq (get .Values.gitea.config.indexer "ISSUE_INDEXER_TYPE") "bleve" -}}
+{{- fail "When using multiple replicas, the repo indexer must be set to 'meilisearch' or 'elasticsearch'" -}}
+{{- end }}
+
+{{- if and (eq .Values.gitea.config.indexer.REPO_INDEXER_TYPE "bleve") (eq .Values.gitea.config.indexer.REPO_INDEXER_ENABLED "true") -}}
+{{- fail "When using multiple replicas, the repo indexer must be set to 'meilisearch' or 'elasticsearch'" -}}
+{{- end }}
+
+{{- if eq .Values.gitea.config.indexer.ISSUE_INDEXER_TYPE "bleve" -}}
+{{- (printf "DEBUG: When using multiple replicas, the repo indexer must be set to 'meilisearch' or 'elasticsearch'") | fail -}}
+{{- end }}
+{{- end }}

 {{- define "gitea.name" -}}
 {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
 {{- end -}}
@@ -95,8 +116,22 @@ app.kubernetes.io/instance: {{ .Release.Name }}
 {{- printf "%s-postgresql.%s.svc.%s:%g" .Release.Name .Release.Namespace .Values.clusterDomain .Values.postgresql.global.postgresql.service.ports.postgresql -}}
 {{- end -}}

-{{- define "memcached.dns" -}}
-{{- printf "%s-memcached.%s.svc.%s:%g" .Release.Name .Release.Namespace .Values.clusterDomain .Values.memcached.service.ports.memcached | trunc 63 | trimSuffix "-" -}}
+{{- define "redis.dns" -}}
+{{- if (index .Values "redis-cluster").enabled -}}
+{{- printf "redis+cluster://:%s@%s-redis-cluster-headless.%s.svc.%s:%g/0?pool_size=100&idle_timeout=180s&" (index .Values "redis-cluster").global.redis.password .Release.Name .Release.Namespace .Values.clusterDomain (index .Values "redis-cluster").service.ports.redis -}}
+{{- end -}}
 {{- end -}}
+
+{{- define "redis.port" -}}
+{{- if (index .Values "redis-cluster").enabled -}}
+{{ (index .Values "redis-cluster").service.ports.redis }}
+{{- end -}}
+{{- end -}}
+
+{{- define "redis.servicename" -}}
+{{- if (index .Values "redis-cluster").enabled -}}
+{{- printf "%s-redis-cluster-headless.%s.svc.%s" .Release.Name .Release.Namespace .Values.clusterDomain -}}
+{{- end -}}
+{{- end -}}

 {{- define "gitea.default_domain" -}}
@@ -182,6 +217,7 @@ https
 {{- else -}}
 {{- (printf "Key %s cannot be on top level of configuration" $key) | fail -}}
 {{- end -}}
+
 {{- end }}
 {{- end }}

@@ -211,6 +247,18 @@ https
 {{- if not (hasKey .Values.gitea.config "oauth2") -}}
 {{- $_ := set .Values.gitea.config "oauth2" dict -}}
 {{- end -}}
+{{- if not (hasKey .Values.gitea.config "session") -}}
+{{- $_ := set .Values.gitea.config "session" dict -}}
+{{- end -}}
+{{- if not (hasKey .Values.gitea.config "queue") -}}
+{{- $_ := set .Values.gitea.config "queue" dict -}}
+{{- end -}}
+{{- if not (hasKey .Values.gitea.config "queue.issue_indexer") -}}
+{{- $_ := set .Values.gitea.config "queue.issue_indexer" dict -}}
+{{- end -}}
+{{- if not (hasKey .Values.gitea.config "indexer") -}}
+{{- $_ := set .Values.gitea.config "indexer" dict -}}
+{{- end -}}
 {{- end -}}

 {{- define "gitea.inline_configuration.defaults" -}}
@@ -226,13 +274,30 @@ https
 {{- if not (hasKey .Values.gitea.config.metrics "ENABLED") -}}
 {{- $_ := set .Values.gitea.config.metrics "ENABLED" .Values.gitea.metrics.enabled -}}
 {{- end -}}
-{{- if .Values.memcached.enabled -}}
+{{- if (index .Values "redis-cluster").enabled -}}
 {{- $_ := set .Values.gitea.config.cache "ENABLED" "true" -}}
-{{- $_ := set .Values.gitea.config.cache "ADAPTER" "memcache" -}}
+{{- $_ := set .Values.gitea.config.cache "ADAPTER" "redis" -}}
 {{- if not (.Values.gitea.config.cache.HOST) -}}
-{{- $_ := set .Values.gitea.config.cache "HOST" (include "memcached.dns" .) -}}
+{{- $_ := set .Values.gitea.config.cache "HOST" (include "redis.dns" .) -}}
 {{- end -}}
 {{- end -}}
+{{- /* redis queue */ -}}
+{{- if (index .Values "redis-cluster").enabled -}}
+{{- $_ := set .Values.gitea.config.queue "TYPE" "redis" -}}
+{{- $_ := set .Values.gitea.config.queue "CONN_STR" (include "redis.dns" .) -}}
+{{- end -}}
+{{- /* multiple replicas */ -}}
+{{- if gt .Values.replicaCount 1.0 -}}
+{{- if not (get .Values.gitea.config.session "PROVIDER") -}}
+{{- $_ := set .Values.gitea.config.session "PROVIDER" "redis" -}}
+{{- end -}}
+{{- if not (get .Values.gitea.config.session "PROVIDER_CONFIG") -}}
+{{- $_ := set .Values.gitea.config.session "PROVIDER_CONFIG" (include "redis.dns" .) -}}
+{{- end -}}
+{{- if not .Values.gitea.config.indexer.ISSUE_INDEXER_TYPE -}}
+{{- $_ := set .Values.gitea.config.indexer "ISSUE_INDEXER_TYPE" "db" -}}
+{{- end -}}
+{{- end -}}
 {{- end -}}

 {{- define "gitea.inline_configuration.defaults.server" -}}
```
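For orientation, a sketch of what the new `redis.dns` helper produces with the chart's built-in `redis-cluster` enabled; the release name, namespace, and bitnami's default service port 6379 are assumptions here:

```yaml
# values.yaml fragment enabling the built-in redis-cluster dependency:
redis-cluster:
  enabled: true
  global:
    redis:
      password: gitea

# With release name "gitea" in namespace "gitea", "redis.dns" would then
# render approximately to (assumption, not taken from the diff):
#   redis+cluster://:gitea@gitea-redis-cluster-headless.gitea.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&
```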
templates/gitea/config.yaml

```diff
@@ -16,6 +16,32 @@ metadata:
     {{- include "gitea.labels" . | nindent 4 }}
 type: Opaque
 stringData:
+  assertions: |
+    {{- /* multiple replicas assertions */ -}}
+    {{- if gt .Values.replicaCount 1.0 -}}
+      {{- if .Values.gitea.config.cron.GIT_GC_REPOS -}}
+        {{- if .Values.gitea.config.cron.GIT_GC_REPOS.enabled -}}
+          {{- fail "Invoking the garbage collector via CRON is not yet supported when running with multiple replicas. Please set 'GIT_GC_REPOS.enabled = false'." -}}
+        {{- end }}
+      {{- end }}
+      {{- if eq (first .Values.persistence.accessModes) "ReadWriteOnce" -}}
+        {{- fail "When using multiple replicas, a RWX file system is required and gitea.persistence.accessModes[0] must be set to ReadWriteMany." -}}
+      {{- end }}
+
+      {{- if eq (get .Values.gitea.config.indexer "ISSUE_INDEXER_TYPE") "bleve" -}}
+        {{- fail "When using multiple replicas, the issue indexer (gitea.config.indexer.ISSUE_INDEXER_TYPE) must be set to a HA-ready provider such as 'meilisearch', 'elasticsearch' or 'db' (if the DB is HA-ready)." -}}
+      {{- end }}
+      {{- if .Values.gitea.config.indexer.REPO_INDEXER_TYPE -}}
+        {{- if eq (get .Values.gitea.config.indexer "REPO_INDEXER_TYPE") "bleve" -}}
+          {{- if .Values.gitea.config.indexer.REPO_INDEXER_ENABLED -}}
+            {{- if eq (get .Values.gitea.config.indexer "REPO_INDEXER_ENABLED") "true" -}}
+              {{- fail "When using multiple replicas, the repo indexer (gitea.config.indexer.REPO_INDEXER_TYPE) must be set to 'meilisearch' or 'elasticsearch' or disabled." -}}
+            {{- end }}
+          {{- end }}
+        {{- end }}
+      {{- end }}
+    {{- end }}
   config_environment.sh: |-
     #!/usr/bin/env bash
     set -euo pipefail
```
templates/gitea/deployment.yaml

```diff
@@ -1,20 +1,27 @@
 apiVersion: apps/v1
-kind: StatefulSet
+kind: Deployment
 metadata:
   name: {{ include "gitea.fullname" . }}
   annotations:
-    {{- if .Values.statefulset.annotations }}
-    {{- toYaml .Values.statefulset.annotations | nindent 4 }}
+    {{- if .Values.deployment.annotations }}
+    {{- toYaml .Values.deployment.annotations | nindent 4 }}
     {{- end }}
   labels:
     {{- include "gitea.labels" . | nindent 4 }}
 spec:
   replicas: {{ .Values.replicaCount }}
+  strategy:
+    type: {{ .Values.strategy.type }}
+    {{- if eq .Values.strategy.type "RollingUpdate" }}
+    rollingUpdate:
+      maxUnavailable: {{ .Values.strategy.rollingUpdate.maxUnavailable }}
+      maxSurge: {{ .Values.strategy.rollingUpdate.maxSurge }}
+    {{- end }}
   selector:
     matchLabels:
       {{- include "gitea.selectorLabels" . | nindent 6 }}
-      {{- if .Values.statefulset.labels }}
-      {{- toYaml .Values.statefulset.labels | nindent 6 }}
+      {{- if .Values.deployment.labels }}
+      {{- toYaml .Values.deployment.labels | nindent 6 }}
       {{- end }}
-  serviceName: {{ include "gitea.fullname" . }}
   template:
@@ -32,8 +39,8 @@ spec:
       {{- end }}
       labels:
         {{- include "gitea.labels" . | nindent 8 }}
-        {{- if .Values.statefulset.labels }}
-        {{- toYaml .Values.statefulset.labels | nindent 8 }}
+        {{- if .Values.deployment.labels }}
+        {{- toYaml .Values.deployment.labels | nindent 8 }}
         {{- end }}
     spec:
       {{- if .Values.schedulerName }}
@@ -62,8 +69,8 @@ spec:
              value: /data
            - name: GITEA_TEMP
              value: /tmp/gitea
-            {{- if .Values.statefulset.env }}
-            {{- toYaml .Values.statefulset.env | nindent 12 }}
+            {{- if .Values.deployment.env }}
+            {{- toYaml .Values.deployment.env | nindent 12 }}
             {{- end }}
             {{- if .Values.signing.enabled }}
             - name: GNUPGHOME
@@ -97,8 +104,8 @@ spec:
              value: /data
            - name: GITEA_TEMP
              value: /tmp/gitea
-            {{- if .Values.statefulset.env }}
-            {{- toYaml .Values.statefulset.env | nindent 12 }}
+            {{- if .Values.deployment.env }}
+            {{- toYaml .Values.deployment.env | nindent 12 }}
             {{- end }}
             {{- if .Values.gitea.additionalConfigFromEnvs }}
             {{- toYaml .Values.gitea.additionalConfigFromEnvs | nindent 12 }}
@@ -234,8 +241,8 @@ spec:
            - name: GITEA_ADMIN_PASSWORD
              value: {{ .Values.gitea.admin.password | quote }}
            {{- end }}
-            {{- if .Values.statefulset.env }}
-            {{- toYaml .Values.statefulset.env | nindent 12 }}
+            {{- if .Values.deployment.env }}
+            {{- toYaml .Values.deployment.env | nindent 12 }}
             {{- end }}
           volumeMounts:
             - name: init
@@ -250,7 +257,7 @@ spec:
             {{- include "gitea.init-additional-mounts" . | nindent 12 }}
           resources:
             {{- toYaml .Values.initContainers.resources | nindent 12 }}
-      terminationGracePeriodSeconds: {{ .Values.statefulset.terminationGracePeriodSeconds }}
+      terminationGracePeriodSeconds: {{ .Values.deployment.terminationGracePeriodSeconds }}
       containers:
         - name: {{ .Chart.Name }}
           image: "{{ include "gitea.image" . }}"
@@ -283,8 +290,8 @@ spec:
            - name: GNUPGHOME
              value: {{ .Values.signing.gpgHome }}
            {{- end }}
-            {{- if .Values.statefulset.env }}
-            {{- toYaml .Values.statefulset.env | nindent 12 }}
+            {{- if .Values.deployment.env }}
+            {{- toYaml .Values.deployment.env | nindent 12 }}
             {{- end }}
           ports:
             - name: ssh
@@ -340,6 +347,10 @@ spec:
       affinity:
         {{- toYaml . | nindent 8 }}
       {{- end }}
+      {{- with .Values.topologySpreadConstraints }}
+      topologySpreadConstraints:
+        {{- toYaml . | nindent 8 }}
+      {{- end }}
       {{- with .Values.tolerations }}
       tolerations:
         {{- toYaml . | nindent 8 }}
@@ -378,38 +389,13 @@ spec:
               path: private.asc
           defaultMode: 0100
       {{- end }}
-      {{- if and .Values.persistence.enabled .Values.persistence.existingClaim }}
+      {{- if .Values.persistence.enabled }}
+      {{- if .Values.persistence.mount }}
        - name: data
          persistentVolumeClaim:
-            {{- with .Values.persistence.existingClaim }}
-            claimName: {{ tpl . $ }}
-            {{- end }}
+            claimName: {{ .Values.persistence.claimName }}
+      {{- end }}
       {{- else if not .Values.persistence.enabled }}
        - name: data
          emptyDir: {}
-      {{- else if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
-  volumeClaimTemplates:
-    - metadata:
-        name: data
-        {{- with .Values.persistence.annotations }}
-        annotations:
-          {{- range $key, $value := . }}
-          {{ $key }}: {{ $value }}
-          {{- end }}
-        {{- end }}
-        {{- with .Values.persistence.labels }}
-        labels:
-          {{- range $key, $value := . }}
-          {{ $key }}: {{ $value }}
-          {{- end }}
-        {{- end }}
-      spec:
-        accessModes:
-          {{- range .Values.persistence.accessModes }}
-            - {{ . | quote }}
-          {{- end }}
-        {{- include "gitea.persistence.storageClass" . | indent 8 }}
-        resources:
-          requests:
-            storage: {{ .Values.persistence.size | quote }}
       {{- end }}
```
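The new `topologySpreadConstraints` hook above is what lets multi-replica setups survive zone outages. A hedged example of values for it; the label selector assumes the chart's standard `app.kubernetes.io/name` label:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: gitea # assumption: default chart labels
```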
templates/gitea/init.yaml

```diff
@@ -61,6 +61,27 @@ stringData:
       echo "Gitea migrate might fail due to database connection...This init-container will try again in a few seconds"
       exit 1
     }
+    {{- if include "redis.servicename" . }}
+    function test_redis_connection() {
+      local RETRY=0
+      local MAX=30
+
+      echo 'Wait for redis to become available...'
+      until [ "${RETRY}" -ge "${MAX}" ]; do
+        nc -vz -w2 {{ include "redis.servicename" . }} {{ include "redis.port" . }} && break
+        RETRY=$[${RETRY}+1]
+        echo "...not ready yet (${RETRY}/${MAX})"
+      done
+
+      if [ "${RETRY}" -ge "${MAX}" ]; then
+        echo "Redis not reachable after '${MAX}' attempts!"
+        exit 1
+      fi
+    }
+
+    test_redis_connection
+    {{- end }}

 {{- if or .Values.gitea.admin.existingSecret (and .Values.gitea.admin.username .Values.gitea.admin.password) }}
```
templates/gitea/poddisruptionbudget.yaml (new file, 17 lines)

```diff
@@ -0,0 +1,17 @@
+{{- if .Values.podDisruptionBudget -}}
+{{- if .Capabilities.APIVersions.Has "policy/v1" }}
+apiVersion: policy/v1
+{{- else }}
+apiVersion: policy/v1beta1
+{{- end }}
+kind: PodDisruptionBudget
+metadata:
+  name: {{ include "gitea.fullname" . }}
+  labels:
+    {{- include "gitea.labels" . | nindent 4 }}
+spec:
+  selector:
+    matchLabels:
+      {{- include "gitea.selectorLabels" . | nindent 6 }}
+  {{- toYaml .Values.podDisruptionBudget | nindent 2 }}
+{{- end -}}
```
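Since `podDisruptionBudget` is empty by default, the template above renders nothing unless values are provided. A minimal sketch, matching the commented defaults in `values.yaml`:

```yaml
podDisruptionBudget:
  maxUnavailable: 1 # keep at least all-but-one replica during voluntary disruptions
```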
templates/gitea/pvc.yaml (new file, 24 lines)

```diff
@@ -0,0 +1,24 @@
+{{- if and .Values.persistence.enabled .Values.persistence.create }}
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: {{ .Values.persistence.claimName }}
+  namespace: {{ $.Release.Namespace }}
+  annotations:
+    {{ .Values.persistence.annotations | toYaml | indent 4}}
+spec:
+  accessModes:
+    {{- if gt .Values.replicaCount 1.0 }}
+    - ReadWriteMany
+    {{- else }}
+    {{- .Values.persistence.accessModes | toYaml | nindent 4 }}
+    {{- end }}
+  volumeMode: Filesystem
+  {{- if .Values.persistence.storageClass }}
+  storageClassName: {{ .Values.persistence.storageClass }}
+  {{- end }}
+  volumeName: ""
+  resources:
+    requests:
+      storage: {{ .Values.persistence.size }}
+{{- end }}
```
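A sketch of values exercising this new PVC template for a chart-managed RWX claim; the storage class name is a placeholder (e.g. an EFS-backed class, as mentioned in the PR description):

```yaml
persistence:
  enabled: true
  create: true  # let the chart create the PVC via templates/gitea/pvc.yaml
  mount: true
  claimName: gitea-shared-storage
  accessModes:
    - ReadWriteMany
  size: 10Gi
  storageClass: efs-sc # placeholder for your RWX-capable storage class
```

Note that with `replicaCount` > 1 the template forces `ReadWriteMany` regardless of `persistence.accessModes`.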
templates/gitea/ssh-svc.yaml

```diff
@@ -39,7 +39,9 @@ spec:
   ports:
     - name: ssh
       port: {{ .Values.service.ssh.port }}
+      {{- if .Values.gitea.config.server.SSH_LISTEN_PORT }}
       targetPort: {{ .Values.gitea.config.server.SSH_LISTEN_PORT }}
+      {{- end }}
       protocol: TCP
       {{- if .Values.service.ssh.nodePort }}
       nodePort: {{ .Values.service.ssh.nodePort }}
```
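With `SSH_LISTEN_PORT` now always defined in `values.yaml`, the service maps the externally exposed SSH port onto the container's listen port. The defaults introduced by this PR are:

```yaml
service:
  ssh:
    port: 22 # port exposed by the Service
gitea:
  config:
    server:
      SSH_PORT: 22          # rootful image
      SSH_LISTEN_PORT: 2222 # rootless image; used as the Service targetPort
```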
Unit test suites (StatefulSet → Deployment renames; file paths are not shown in the original view):

```diff
@@ -1,17 +1,17 @@
-suite: Statefulset template (basic)
+suite: deployment template (basic)
 release:
   name: gitea-unittests
   namespace: testing
 templates:
-  - templates/gitea/statefulset.yaml
+  - templates/gitea/deployment.yaml
   - templates/gitea/config.yaml
 tests:
-  - it: renders a statefulset
-    template: templates/gitea/statefulset.yaml
+  - it: renders a deployment
+    template: templates/gitea/deployment.yaml
     asserts:
       - hasDocuments:
           count: 1
       - containsDocument:
-          kind: StatefulSet
+          kind: Deployment
           apiVersion: apps/v1
           name: gitea-unittests
@@ -1,13 +1,13 @@
-suite: Statefulset template (signing disabled)
+suite: deployment template (signing disabled)
 release:
   name: gitea-unittests
   namespace: testing
 templates:
-  - templates/gitea/statefulset.yaml
+  - templates/gitea/deployment.yaml
   - templates/gitea/config.yaml
 tests:
   - it: skips gpg init container
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     asserts:
       - notContains:
           path: spec.template.spec.initContainers
@@ -15,7 +15,7 @@ tests:
           content:
             name: configure-gpg
   - it: skips gpg env in `init-directories` init container
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       signing.enabled: false
     asserts:
@@ -25,14 +25,14 @@ tests:
             name: GNUPGHOME
             value: /data/git/.gnupg
   - it: skips gpg env in runtime container
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     asserts:
       - notContains:
           path: spec.template.spec.containers[0].env
           content:
             name: GNUPGHOME
   - it: skips gpg volume spec
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     asserts:
       - notContains:
           path: spec.template.spec.volumes
@@ -1,13 +1,13 @@
-suite: Statefulset template (signing enabled)
+suite: deployment template (signing enabled)
 release:
   name: gitea-unittests
   namespace: testing
 templates:
-  - templates/gitea/statefulset.yaml
+  - templates/gitea/deployment.yaml
   - templates/gitea/config.yaml
 tests:
   - it: adds gpg init container
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       signing:
         enabled: true
@@ -39,7 +39,7 @@ tests:
             mountPath: /raw
            readOnly: true
   - it: adds gpg env in `init-directories` init container
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       signing.enabled: true
       signing.existingSecret: "custom-gpg-secret"
@@ -50,7 +50,7 @@ tests:
            name: GNUPGHOME
            value: /data/git/.gnupg
   - it: adds gpg env in runtime container
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       signing.enabled: true
       signing.existingSecret: "custom-gpg-secret"
@@ -61,7 +61,7 @@ tests:
            name: GNUPGHOME
            value: /data/git/.gnupg
   - it: adds gpg volume spec
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       signing:
         enabled: true
@@ -78,7 +78,7 @@ tests:
              path: private.asc
            defaultMode: 0100
   - it: supports gpg volume spec with external reference
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       signing:
         enabled: true
@@ -1,13 +1,13 @@
-suite: Statefulset template (SSH configuration)
+suite: deployment template (SSH configuration)
 release:
   name: gitea-unittests
   namespace: testing
 templates:
-  - templates/gitea/statefulset.yaml
+  - templates/gitea/deployment.yaml
   - templates/gitea/config.yaml
 tests:
   - it: supports defining SSH log level for root based image
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       image.rootless: false
     asserts:
@@ -17,7 +17,7 @@ tests:
            name: SSH_LOG_LEVEL
            value: "INFO"
   - it: supports overriding SSH log level
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       image.rootless: false
       gitea.ssh.logLevel: "DEBUG"
@@ -28,7 +28,7 @@ tests:
            name: SSH_LOG_LEVEL
            value: "DEBUG"
   - it: skips SSH_LOG_LEVEL for rootless image
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       image.rootless: true
       gitea.ssh.logLevel: "DEBUG" # explicitly defining a non-standard level here
@@ -4,24 +4,24 @@ release:
   namespace: testing
 templates:
   - templates/gitea/serviceaccount.yaml
-  - templates/gitea/statefulset.yaml
+  - templates/gitea/deployment.yaml
   - templates/gitea/config.yaml
 tests:
-  - it: does not modify the StatefulSet by default
-    template: templates/gitea/statefulset.yaml
+  - it: does not modify the deployment by default
+    template: templates/gitea/deployment.yaml
     asserts:
       - notExists:
           path: spec.serviceAccountName
-  - it: adds the reference to the StatefulSet with serviceAccount.create=true
-    template: templates/gitea/statefulset.yaml
+  - it: adds the reference to the deployment with serviceAccount.create=true
+    template: templates/gitea/deployment.yaml
     set:
       serviceAccount.create: true
     asserts:
       - equal:
           path: spec.template.spec.serviceAccountName
           value: gitea-unittests
-  - it: allows referencing an externally created ServiceAccount to the StatefulSet
-    template: templates/gitea/statefulset.yaml
+  - it: allows referencing an externally created ServiceAccount to the deployment
+    template: templates/gitea/deployment.yaml
     set:
       serviceAccount:
         create: false # explicitly set to define rendering behavior
```
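To verify the renamed suites locally, the tests can be run with the helm-unittest plugin; a sketch, assuming the chart's `unittests/` layout:

```bash
helm plugin install https://github.com/helm-unittest/helm-unittest
helm unittest --strict -f 'unittests/**/*.yaml' .
```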
values.yaml (121 lines changed)

```diff
@@ -20,9 +20,19 @@ global:
 #   hostnames:
 #     - example.com

-## @param replicaCount number of replicas for the statefulset
+## @param replicaCount number of replicas for the deployment
 replicaCount: 1

+## @section strategy
+## @param strategy.type strategy type
+## @param strategy.rollingUpdate.maxSurge maxSurge
+## @param strategy.rollingUpdate.maxUnavailable maxUnavailable
+strategy:
+  type: "RollingUpdate"
+  rollingUpdate:
+    maxSurge: "100%"
+    maxUnavailable: 0
+
 ## @param clusterDomain cluster domain
 clusterDomain: cluster.local
@@ -74,11 +84,16 @@ containerSecurityContext: {}
 ## @param securityContext Run init and Gitea containers as a specific securityContext
 securityContext: {}

+## @param podDisruptionBudget Pod disruption budget
+podDisruptionBudget: {}
+#   maxUnavailable: 1
+#   minAvailable: 1
+
 ## @section Service
 service:
   ## @param service.http.type Kubernetes service type for web traffic
   ## @param service.http.port Port number for web traffic
-  ## @param service.http.clusterIP ClusterIP setting for http autosetup for statefulset is None
+  ## @param service.http.clusterIP ClusterIP setting for http autosetup for deployment is None
   ## @param service.http.loadBalancerIP LoadBalancer IP setting
   ## @param service.http.nodePort NodePort for http service
   ## @param service.http.externalTrafficPolicy If `service.http.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation
@@ -101,7 +116,7 @@ service:
   annotations: {}
   ## @param service.ssh.type Kubernetes service type for ssh traffic
   ## @param service.ssh.port Port number for ssh traffic
-  ## @param service.ssh.clusterIP ClusterIP setting for ssh autosetup for statefulset is None
+  ## @param service.ssh.clusterIP ClusterIP setting for ssh autosetup for deployment is None
   ## @param service.ssh.loadBalancerIP LoadBalancer IP setting
   ## @param service.ssh.nodePort NodePort for ssh service
   ## @param service.ssh.externalTrafficPolicy If `service.ssh.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation
@@ -155,7 +170,7 @@ ingress:
   # If helm doesn't correctly detect your ingress API version you can set it here.
   # apiVersion: networking.k8s.io/v1

-## @section StatefulSet
+## @section deployment
 #
 ## @param resources Kubernetes resources
 resources:
@@ -177,26 +192,29 @@ resources:
 ## @param schedulerName Use an alternate scheduler, e.g. "stork"
 schedulerName: ""

-## @param nodeSelector NodeSelector for the statefulset
+## @param nodeSelector NodeSelector for the deployment
 nodeSelector: {}

-## @param tolerations Tolerations for the statefulset
+## @param tolerations Tolerations for the deployment
 tolerations: []

-## @param affinity Affinity for the statefulset
+## @param affinity Affinity for the deployment
 affinity: {}

-## @param dnsConfig dnsConfig for the statefulset
+## @param topologySpreadConstraints TopologySpreadConstraints for the deployment
+topologySpreadConstraints: []
+
+## @param dnsConfig dnsConfig for the deployment
 dnsConfig: {}

-## @param priorityClassName priorityClassName for the statefulset
+## @param priorityClassName priorityClassName for the deployment
 priorityClassName: ""

-## @param statefulset.env Additional environment variables to pass to containers
-## @param statefulset.terminationGracePeriodSeconds How long to wait until forcefully kill the pod
-## @param statefulset.labels Labels for the statefulset
-## @param statefulset.annotations Annotations for the Gitea StatefulSet to be created
-statefulset:
+## @param deployment.env Additional environment variables to pass to containers
+## @param deployment.terminationGracePeriodSeconds How long to wait until forcefully kill the pod
+## @param deployment.labels Labels for the deployment
+## @param deployment.annotations Annotations for the Gitea deployment to be created
+deployment:
   env:
     []
     # - name: VARIABLE
@@ -218,14 +236,16 @@ serviceAccount:
   name: ""
   automountServiceAccountToken: false
   imagePullSecrets: []
-    # - name: private-registry-access
+  # - name: private-registry-access
   annotations: {}
   labels: {}

 ## @section Persistence
 #
 ## @param persistence.enabled Enable persistent storage
-## @param persistence.existingClaim Use an existing claim to store repository information
+## @param persistence.create Whether to create the persistentVolumeClaim for shared storage
+## @param persistence.mount Whether the persistentVolumeClaim should be mounted (even if not created)
+## @param persistence.claimName Use an existing claim to store repository information
 ## @param persistence.size Size for persistence to store repo information
 ## @param persistence.accessModes AccessMode for persistence
 ## @param persistence.labels Labels for the persistence volume claim to be created
@@ -234,7 +254,9 @@ serviceAccount:
 ## @param persistence.subPath Subdirectory of the volume to mount at
 persistence:
   enabled: true
-  existingClaim:
+  create: true
+  mount: true
+  claimName: gitea-shared-storage
   size: 10Gi
   accessModes:
     - ReadWriteOnce
@@ -243,7 +265,7 @@ persistence:
   storageClass:
   subPath:

-## @param extraVolumes Additional volumes to mount to the Gitea statefulset
+## @param extraVolumes Additional volumes to mount to the Gitea deployment
 extraVolumes: []
   # - name: postgres-ssl-vol
   #   secret:
@@ -358,13 +380,14 @@ gitea:
   #   customProfileUrl:
   #   customEmailUrl:

   ## @param gitea.config Configuration for the Gitea server, ref: [config-cheat-sheet](https://docs.gitea.io/en-us/config-cheat-sheet/)
-  config: {}
-  #   APP_NAME: "Gitea: Git with a cup of tea"
-  #   RUN_MODE: dev
-  #
-  #   server:
-  #     SSH_PORT: 22
+  ## @param gitea.config.server.SSH_PORT SSH port for rootful Gitea image
+  ## @param gitea.config.server.SSH_LISTEN_PORT SSH port for rootless Gitea image
+  config:
+    # APP_NAME: "Gitea: Git with a cup of tea"
+    # RUN_MODE: dev
+    server:
+      SSH_PORT: 22 # rootful image
+      SSH_LISTEN_PORT: 2222 # rootless image
   #
   #   security:
   #     PASSWORD_COMPLEXITY: spec
@@ -446,23 +469,37 @@ gitea:
       successThreshold: 1
       failureThreshold: 10

-## @section Memcached
-#
-## @param memcached.enabled Memcached is loaded as a dependency from [Bitnami](https://github.com/bitnami/charts/tree/master/bitnami/memcached) if enabled in the values. Complete Configuration can be taken from their website.
-## ref: https://hub.docker.com/r/bitnami/memcached/tags/
-## @param memcached.service.ports.memcached Port for Memcached
-memcached:
+## @section redis-cluster
+## @param redis-cluster.enabled Enable redis
+## @param redis-cluster.global.redis.password Password for the "gitea" user (overrides `password`)
+redis-cluster:
   enabled: true
-  # image:
-  #   registry: docker.io
-  #   repository: bitnami/memcached
-  #   tag: ""
-  #   digest: ""
-  #   pullPolicy: IfNotPresent
-  #   pullSecrets: []
-  service:
-    ports:
-      memcached: 11211
+  global:
+    redis:
+      password: gitea
+
+## @section postgresql-ha
+#
+## @param postgresql-ha.enabled Enable postgresql-ha
+## @param postgresql-ha.global.postgresql-ha.auth.password Password for the `gitea` user (overrides `auth.password`)
+## @param postgresql-ha.global.postgresql-ha.auth.database Name for a custom database to create (overrides `auth.database`)
+## @param postgresql-ha.global.postgresql-ha.auth.username Name for a custom user to create (overrides `auth.username`)
+## @param postgresql-ha.global.postgresql-ha.service.ports.postgresql-ha postgresql-ha service port (overrides `service.ports.postgresql-ha`)
+## @param postgresql-ha.primary.persistence.size PVC Storage Request for postgresql-ha volume
+postgresql-ha:
+  enabled: true
+  global:
+    postgresql-ha:
+      auth:
+        password: gitea
+        database: gitea
+        username: gitea
+      service:
+        ports:
+          postgresql-ha: 5432
+  primary:
+    persistence:
+      size: 10Gi

 ## @section PostgreSQL
 #
@@ -473,7 +510,7 @@ gitea:
 ## @param postgresql.global.postgresql.service.ports.postgresql PostgreSQL service port (overrides `service.ports.postgresql`)
 ## @param postgresql.primary.persistence.size PVC Storage Request for PostgreSQL volume
 postgresql:
-  enabled: true
+  enabled: false
   global:
     postgresql:
       auth:
```
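For existing users, the breaking rename in `values.yaml` boils down to moving everything from the old `statefulset` block to the new `deployment` block, e.g.:

```yaml
# before
statefulset:
  env: []
  terminationGracePeriodSeconds: 60
  labels: {}
  annotations: {}

# after
deployment:
  env: []
  terminationGracePeriodSeconds: 60
  labels: {}
  annotations: {}
```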