Plan to keep v8.x non-HA alive? #524

Closed
opened 2023-10-05 09:38:00 +00:00 by pi3ch · 6 comments
pi3ch commented 2023-10-05 09:38:00 +00:00 (Migrated from gitea.com)

I understand v9.x has moved to high availability mode, but this is overkill for small deployments that don't need HA. I only want to keep things up to date with the latest version of Gitea, without the HA setup or redis-cluster.

Is there a plan to keep the v8.x line up to date with the latest version of Gitea?

justusbunsi commented 2023-10-06 10:18:41 +00:00 (Migrated from gitea.com)

Hi @pi3ch. Thanks for asking.

@pat-s and I discussed this and again concluded that we won't support several versions in parallel. There was a similar question recently asking for a backport to v7 (#491). Instead, we are going to provide some examples for lightweight setups based on this Chart and encourage all users to keep up to date with the Chart releases.

I am really interested in the following:

  • What is your current setup (assuming you're still on v8) and which components do you use?
  • What prevents you from upgrading to v9?
  • Do you have a homelab with rather modest resource requirements?
  • Do you use any wrapper (e.g. ArgoCD, FluxCD, Terraform, ...)?

This is something we don't know about this Chart's users, but it would help with planning future changes.

Happy for your feedback. 🙂

pi3ch commented 2023-10-07 00:46:08 +00:00 (Migrated from gitea.com)

Thanks @justusbunsi. Yeah, I understand, it may double your workload.

One of my main decision factors for migrating to Gitea was its lightweight nature and simplicity. It was a definite win over GitLab or other complex setups. I think this is the key feature of Gitea.

I am on v8 and use Helm with no wrapper. I have enough resources and a good-sized k8s cluster where I run Gitea. HA is not going to give us any more advantage, and we haven't seen any degradation in performance (the actual bottleneck is CI/CD, not the git server). My main concern is adding more complexity to a setup that is working perfectly. It requires more time to maintain and more time and cost to troubleshoot.

I skimmed over the v9 migration guide, and at minimum I would need to bring up a separate Redis cluster and maintain it separately just to get what we already have.

I think having a non-HA Helm version would be beneficial to those who come to Gitea for its lightweight nature. Open to hearing everyone else's thoughts.

pi3ch commented 2023-10-07 02:53:56 +00:00 (Migrated from gitea.com)

UPDATE: Just tried to upgrade to v9, and it failed because it exceeded the number of volumes that can be attached to a node. The Redis cluster is set to have 6 pods, each requiring a PV, which uses up the majority of the volume mount limit on [the managed k8s](https://docs.digitalocean.com/products/volumes/details/limits/).

pat-s commented 2023-10-07 06:56:34 +00:00 (Migrated from gitea.com)

@pi3ch You might be misunderstanding some points about HA and our idea in general.

  • Gitea is still lightweight, and HA is a choice - you can still run Gitea with a single PG pod (or even SQLite if you want to) and without any Redis instance.
  • HA does not make things more "performant"; it helps you with uptime, availability, and running upgrades.
  • You can stay with your previous setup without any issues by just disabling the new default components and linking your old ones - see the sketch after this list.
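
For illustration, a minimal sketch of such an override could look like this (the sub-chart key names assume the v9 default layout; double-check them against your chart version):

```
redis-cluster:
  enabled: false   # drop the bundled Redis cluster entirely
postgresql-ha:
  enabled: false   # drop the HA PostgreSQL dependency
postgresql:
  enabled: true    # a single PostgreSQL pod instead, as in v8
```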

> I skimmed over the v9 migration guide, and at minimum I would need to bring up a separate Redis cluster and maintain it separately just to get what we already have.

Again, no Redis cluster **is needed**. It's a choice. There are many providers for `session` and `cache`, like `"db"`, `"memory"`, or external applications like the previous `memcached`.
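
A sketch of what those providers could look like in the chart values, assuming the usual `gitea.config` passthrough into `app.ini`:

```
gitea:
  config:
    session:
      PROVIDER: db     # sessions in the main database, no Redis needed
    cache:
      ADAPTER: memory  # in-process cache
    queue:
      TYPE: level      # embedded LevelDB-backed queue
```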

> I think having a non-HA Helm version would be beneficial to those who come to Gitea for its lightweight nature. Open to hearing everyone else's thoughts.

It's already there; it's just a question of setting the values accordingly. We already mention the "minimum dev setup" in the README. It's quite easy to extend from there - to add a proper PG DB, for example.
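
As an example of such an extension, pointing the chart at an existing PostgreSQL instance could look roughly like this (host and credentials are hypothetical placeholders):

```
postgresql:
  enabled: false   # don't deploy a bundled database
postgresql-ha:
  enabled: false
gitea:
  config:
    database:
      DB_TYPE: postgres
      HOST: my-postgres.db.svc.cluster.local:5432  # hypothetical host
      NAME: gitea
      USER: gitea
      PASSWD: changeme                              # use a secret in practice
```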

@justusbunsi and I want to add another example setup which kind of mimics the old one (i.e. a single PG + a "lightweight" memory and cache provider).

The reason for doing all the changes in v9: setting up HA on your own is not trivial, and having it as the default allows users to bootstrap a *solid* HA Gitea on k8s. And k8s is a platform where many aim for HA-ready production workloads; it's less of a homelab environment. For those, you might rather want to use host installs or `docker-compose`.
And yes, I know that projects like `k3s` and `minikube` exist - but again, k8s is a production environment by default, and previously it was not possible to have a **proper** HA deployment of Gitea. This was solved by switching to Deployments and forcing RWX instead of RWO.

pi3ch commented 2023-10-07 09:52:20 +00:00 (Migrated from gitea.com)

The effort you are putting into creating samples and guides may be worth putting into maintaining a non-HA version instead. It would reduce errors and be easier to troubleshoot.

I've been using this Helm chart for some years and it has always been a seamless upgrade, but the recent v7 to v8 change, and now v8 to v9, was quite major and broke things. I am saying this because we don't use Gitea for pet projects. We have many users, and any major change is quite costly if not well tested.

I understand the reason to move to HA, but you should also look at the reality of your users' use cases. Have you asked whether HA is a must-have feature for them? What percentage of them really need it? This could guide you to focus on the features that are most in demand. If you decide to keep a non-HA version alive, I am happy to help.

On another note, managed k8s offerings have hard limits on the number of volume mounts. So it would be good to make this configurable, e.g. the number of redis-cluster replicas.

I've reached the hard limit of volumes I can mount on our k8s nodes, so I chose the path of setting up memcache. I am now on v9, and the following should be updated in the upgrading guide:

```
queue.TYPE = "memcache"
```

Per the Gitea config, there is no `memcache` value for `queue.TYPE`, and it defaults to `level`. To get it working, it should be changed to:

```
queue:
  TYPE: channel
  CONN_STR: memcache://gitea-memcached.<namespace>.svc.cluster.local:11211
```
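
For context, in the chart's values this snippet would presumably sit under the `gitea.config` section (a sketch that keeps the values above as-is):

```
gitea:
  config:
    queue:
      TYPE: channel
      CONN_STR: memcache://gitea-memcached.<namespace>.svc.cluster.local:11211
```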
pat-s commented 2023-10-07 19:22:10 +00:00 (Migrated from gitea.com)

> The effort you are putting into creating samples and guides may be worth putting into maintaining a non-HA version instead. It would reduce errors and be easier to troubleshoot.

I disagree. An example is a one-time effort; maintaining parallel versions is not. And again: there's no need for it. You can do everything with the current chart, it just takes a bit of effort to modify the values and configure things.
I don't really like that most users don't seem to even try that and only want to go with the defaults - and are unhappy if those are not perfect for them.

> I've been using this Helm chart for some years and it has always been a seamless upgrade, but the recent v7 to v8 change, and now v8 to v9, was quite major and broke things. I am saying this because we don't use Gitea for pet projects. We have many users, and any major change is quite costly if not well tested.

That's why these were major upgrades - they change/break things. You can stay on previous versions as long as you like and just change the image tag value. We discussed this internally with various people, and it's not a decision I made alone.
And we certainly don't want to force anything or limit the configuration options - which we also don't do. It just takes a bit of "effort" to change `values.yml` accordingly.

> On another note, managed k8s offerings have hard limits on the number of volume mounts. So it would be good to make this configurable, e.g. the number of redis-cluster replicas.

This is already available by just changing the values below the `redis-cluster` section. This is normal Helm behavior for dependencies.
Yes, the default is 6 (which is obviously not set by us), but setting it to 3 also works. But again: you don't need to use it. I outlined other options above: deploying a single-pod `memcached`, or setting `session` and `cache` to `"db"` or `"memory"`.
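
For example, shrinking the bundled cluster could look like this (the `cluster.nodes` key comes from the upstream Bitnami redis-cluster chart, as far as I can tell; verify it against the dependency version pinned here):

```
redis-cluster:
  enabled: true
  cluster:
    nodes: 3   # down from the upstream default of 6
```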

The limits of DO seem to be quite odd, and while I don't know the reasons for them, it doesn't look like a good choice for k8s.

> Per the Gitea config, there is no `memcache` value for `queue.TYPE`, and it defaults to `level`. To get it working, it should be changed to:

You're right that this information is missing and should be added. Though it's probably best to add it to an example showing a "minimal" config close to the old v7 one - so that users can easily stay on that old config by just copy/pasting a few settings into `values.yml`.
