Plan to keep v8.x non-HA alive? #524
I understand v9.x has moved to high availability mode, but this is overkill for small deployments that don't need HA. I only want to keep things up to date with the latest version of Gitea, but I don't want the HA or redis-cluster setup.
Is there a plan to keep the v8.x version up to date with the latest version of Gitea?
Hi @pi3ch. Thanks for asking.
@pat-s and I discussed this and again concluded against supporting several versions in parallel. There was a similar question recently asking for a backport to v7 (#491). Instead, we are going to provide some examples for lightweight setups based on this Chart and encourage all users to keep up to date with the Chart releases.
I am really interested in the following:
This is something we don't know about this Chart's users, but it would help with planning future changes.
Happy for your feedback. 🙂
Thanks @justusbunsi. Yeah, I understand, it may double up your work.
One of my main deciding factors for migrating to Gitea was its lightweight nature and simplicity. It was a definite win over GitLab or other complex setups. I think this is the key feature of Gitea.
I am on v8 and use Helm with no wrapper. I have enough resources and a good-sized k8s cluster where I run Gitea. HA is not going to give us any additional advantage, and we haven't seen any degradation in performance (the actual bottleneck is CI/CD, not the git server). My main concern is adding more complexity to a setup that is working perfectly. It requires more time to maintain and more time and cost to troubleshoot.
I skimmed the v9 migration guide, and at a minimum I would need to bring up a separate redis cluster and maintain it separately just to get what we already have.
I think having a non-HA Helm version would be beneficial to those who come to Gitea for its lightweight nature. Open to hearing everyone else's thoughts.
UPDATE: I just tried to upgrade to v9 and it failed because it exceeded the number of volumes that can be attached to a node. The redis cluster is set to 6 pods, each requiring a PV, which uses up most of the volume attachment limit on the managed k8s.
@pi3ch You might be misunderstanding some points about HA and our idea in general.
Again, no redis cluster is needed. It's a choice. There are many providers for `session` and `cache`, like `"db"`, `"memory"`, or external applications like the previous `memcached`. It's already there; it's just a question of setting the values accordingly. We already mentioned the "minimum dev setup" in the README. It's quite easy to extend from there - to add a proper PG DB, for example.
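As a rough illustration only (everything under `gitea.config` is passed through to Gitea's app.ini, so treat the exact keys as assumptions and verify them against the chart README and Gitea's config cheat sheet), such a lightweight setup could look something like this:

```yaml
# Sketch of a values.yml excerpt using built-in providers instead of redis-cluster.
# Key names are assumptions based on the chart's gitea.config passthrough.
redis-cluster:
  enabled: false        # skip the bundled redis-cluster dependency

gitea:
  config:
    session:
      PROVIDER: db      # or "memory" for a single replica
    cache:
      ADAPTER: memory   # in-process cache, no external service
    queue:
      TYPE: level       # embedded LevelDB queues instead of redis
```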
@justusbunsi and I want to add another example setup which kind of mimics the old setup (i.e. a single PG + a "lightweight" memory and cache provider).
The reasons for doing all the changes in v9: setting up HA on your own is not trivial, and having it as the default will allow users to bootstrap a solid HA Gitea on k8s. And k8s is a platform where many aim for HA-ready production workloads; it's less of a homelab environment.
For these you might rather want to use host installs or `docker-compose`. And yes, I know that projects like `k3s` and `minikube` exist - but again, `k8s` is a production environment by default, and previously it was not possible to have a proper HA deployment of Gitea. This was solved by switching to Deployments and forcing a RWX instead of a RWO.
The effort you are putting into creating samples and guides may be worth putting into maintaining a non-HA version instead. It reduces errors and is easier to troubleshoot.
I've been using this Helm chart for some years and it has always been a seamless upgrade, but the recent v7 to v8 change, and now v8 to v9, was quite major and broke things. I am saying this because we don't use Gitea for pet projects; we have many users, and any major change is quite costly if not well tested.
I understand the reasons to move to HA, but you should also look at the reality of your users' use cases. Have you asked whether HA is a must-have feature for them? What percentage of them really need it? This can guide you to focus on the features that are most in demand. If you decide to keep a non-HA version alive, I am happy to help.
On another note, managed k8s has hard limits on the number of volume mounts, so it would be good to make this configurable, e.g. setting the number of redis-cluster replicas.
I've reached the hard limit on the number of volumes I can mount on our k8s nodes, so I chose the path of setting up memcache. I am now on v9, and the following should be updated in the upgrading guide:
Per the Gitea config, there is no memcache value and it defaults to level. To get it working, it should be changed to
I disagree. An example is a one-time effort; maintaining parallel versions is not. And also, again: there's no need for it. You can do everything with the current chart; it just takes a bit of effort to modify the values and configure things.
What I don't really like is that most users don't even seem to try that and only want to go with the defaults - and are unhappy if these are not perfect for them.
That's why these were major upgrades - they change/break things. You can stay on previous versions as long as you like and just change the image tag value. We discussed this internally with various people; it was not a decision I made alone.
And we certainly don't want to force anything or limit the configuration options - which we also don't do. It just takes a bit of "effort" to change `values.yml` accordingly.
This is already available by just changing values below the `redis-cluster` section. This is normal Helm behavior for dependencies. Yes, the default is 6 (which is not set by us, obviously), but setting it to 3 also works. But again: you don't need to use it. I outlined other options above: deploying a single-pod `memcached` or setting `session` and `cache` to `"db"` or `"memory"`.
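For example (treating the exact keys as an assumption - they come from the upstream Bitnami redis-cluster sub-chart, so check its documentation), shrinking the bundled redis-cluster is just a values change:

```yaml
# Sketch of a values.yml excerpt tuning the bundled redis-cluster dependency.
# "cluster.nodes" and "cluster.replicas" are values of the upstream Bitnami
# redis-cluster sub-chart (assumed here); check its docs for the constraints
# (a redis cluster needs at least 3 master nodes).
redis-cluster:
  enabled: true
  cluster:
    nodes: 3       # down from the default 6 -> fewer PVs attached per k8s node
    replicas: 0    # 3 masters without replicas; adjust to your availability needs
```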
.The limits of DO seem to be quite odd and while I don't know the reasons for that, it doesn't look like a good choice for k8s.
You're right that this information is missing and should be added. Though it's probably best to add it to an example showing a "minimal" config which is close to the old one of v7 - so that users can easily stay on this old config by just copy/pasting a few settings into `values.yml`.
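Roughly, such a v7-like block could contain something like the following (dependency toggles and value names are assumptions to double-check against the chart's own values.yaml before copying anything):

```yaml
# Sketch of a "close to v7" values.yml: single database, no HA components.
# Dependency names, replicaCount and persistence keys are assumptions to verify.
postgresql:
  enabled: true          # single bundled PostgreSQL instead of postgresql-ha
postgresql-ha:
  enabled: false
redis-cluster:
  enabled: false

replicaCount: 1          # one Gitea pod, so ReadWriteOnce storage is sufficient
persistence:
  accessModes:
    - ReadWriteOnce

# plus the session/cache/queue providers shown earlier in this thread
```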