Decrease default DB persistence size? #98
I noticed the default persistence size for the DB is `10Gi`. This is significantly more than 90% of people will need, I'd wager. As an example, I ran Gitea for 3 years and my database size is about 66 Megabytes. Why not ship with more reasonable defaults, like a `2Gi` database and `18Gi` for the Gitea PVC?

That sounds reasonable. Would changing this affect previous persistent volume claims already created? If not, then we would accept a PR to change the default; otherwise we may still consider accepting a breaking change.
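For context, users can already override these sizes per install regardless of what the chart ships as defaults. A minimal sketch, assuming the chart comes from the gitea-charts repository and that the release name and namespace below are placeholders rather than anything agreed in this thread:

```bash
# Override the persistence sizes at install/upgrade time.
# Repo URL, release name, and namespace are assumptions, not part of this issue.
helm repo add gitea-charts https://dl.gitea.io/charts/
helm upgrade --install gitea gitea-charts/gitea \
  --namespace gitea \
  --set persistence.size=2Gi \
  --set postgresql.persistence.size=10Gi
```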
I'll try running some tests with k3d and get back to you with a patch. I just finished migrating a site to this chart and am not yet deeply knowledgeable about PVC adjustment best practices in Helm or otherwise. I did read, shortly after opening this, that it's possible to increase the size of a PV by applying a patch (or similar) to a PVC.
Here are the specs I used for code.habd.as:

```
persistence.size: 500Mi
postgresql.persistence.size: 10Gi
```
To me this seems the most reasonable, and I'm happy to send in a pull request once we determine what the impact of a reduction would be for those with existing PVs.
This will most likely be a breaking change if users are using the default sizes; however, I agree that we should use more reasonable sizes for the PVCs.
It may not be a breaking change: I just tested on a kind cluster, and changing the PVC values didn't affect existing claims; at least the claims I had created could not be modified afterwards.

That being said, my cluster is a dev cluster and others likely have different configurations, hence my question of:

> Would changing this affect previous persistent volume claims already created?

Hopefully we can get others to confirm whether this would be breaking for them.

As @techknowlogick mentioned, it'd be useful to have others give input on how this would affect production runs.
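For anyone who wants to verify this on their own cluster, a minimal sketch (the namespace and claim name below are assumptions, not taken from this chart):

```bash
# Inspect the claims created by the chart before and after changing the defaults.
# The capacity of an already-bound PVC should remain at its original value.
kubectl get pvc --namespace gitea
kubectl describe pvc data-gitea-postgresql-0 --namespace gitea   # hypothetical claim name
```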
Upon opening this, my assumption was that a `10Gi` resource claim effectively reserves that space even if the volume never consumes all of it. `10Gi` is quite a bit of space when, in practice, a database may never grow beyond 500MB, possibly less.
Another thing to consider: will having a `10Gi` claim cause each volume snapshot to also consume `10Gi`, or are those snapshots somehow trimmed?
Lastly, it is worth leaving some extra space to facilitate the `dump` of a database to disk for DB exports. The same goes for imports, which may be copied to the PV via `kubectl cp` for the purposes of a DB restore, without having to jump through hoops due to space limitations.
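As an illustration of that workflow, a minimal sketch (pod name, database user/name, and mount path are placeholders, not taken from this chart):

```bash
# Dump the database onto the Postgres pod's volume, then copy it out of the cluster.
kubectl exec gitea-postgresql-0 -n gitea -- \
  sh -c 'pg_dump -U gitea gitea > /bitnami/postgresql/gitea.sql'   # hypothetical pod/user/db/path
kubectl cp gitea/gitea-postgresql-0:/bitnami/postgresql/gitea.sql ./gitea.sql

# For a restore, copy the dump back onto the PV and load it.
kubectl cp ./gitea.sql gitea/gitea-postgresql-0:/bitnami/postgresql/gitea.sql
kubectl exec gitea-postgresql-0 -n gitea -- \
  sh -c 'psql -U gitea gitea < /bitnami/postgresql/gitea.sql'
```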
Hi. I don't think decreasing the claim size would be a breaking change, although I am not 100% sure, more like ~80% sure.
Kubernetes uses the specified `resources.requests.storage` value to determine on which node the PVC can be created without overcommitting that node. There is no default limitation on how much data can actually be stored in that PVC; it even allows more data than requested. For that kind of restriction, Resource Quotas come into play. Since not all storage types support them, this is not a default in Kubernetes: https://kubernetes.io/docs/concepts/policy/resource-quotas/

This chart uses a StatefulSet and, AFAICS, so do the database dependencies. So the PVC will be kept as long as it isn't manually removed. This means that if Kubernetes once decided to run Gitea or the database on a specific node, it will always run there due to the persistent data on that node (usually local storage).
So decreasing the PVC size would not affect Gitea and its dependencies, but it might affect other applications running on the same node when there is storage overcommitment, which could lead to a disk pressure state.
I have no experience with storage snapshots, so I cannot say anything about that part.
Cheers
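For reference, a minimal sketch of the Resource Quotas mentioned above, limiting how much storage all PVCs in a namespace may request combined (the namespace and values are assumptions):

```bash
# Cap the total storage requested by PVCs, and their count, in the gitea namespace.
kubectl create quota gitea-storage \
  --namespace gitea \
  --hard=requests.storage=20Gi,persistentvolumeclaims=5
```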
So from my experience, changing the PV size would only matter for newly created PVCs, i.e. only if the name of the PVC is changed or no PVC exists yet at all.

So I'd think that changing the defaults would not break any existing deployments. This can easily be checked by changing the value for an existing toy install and observing what happens.
Sounds like a reasonable request to me overall.
We have a medium-sized instance running ourselves, and I can check how much space we use at the moment to come up with a reasonable new sizing proposal.
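One way to check the current database footprint, as a minimal sketch (pod, user, and database names are placeholders):

```bash
# Report the on-disk size of the Gitea database inside the Postgres pod.
kubectl exec gitea-postgresql-0 -n gitea -- \
  psql -U gitea -d gitea -c "SELECT pg_size_pretty(pg_database_size('gitea'));"

# Or check how much of the PV is actually used (mount path is an assumption).
kubectl exec gitea-postgresql-0 -n gitea -- df -h /bitnami/postgresql
```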
@pat-s You are right. I forgot about that particular behavior.
Citing the Kubernetes docs:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
So it is possible to decrease our default value.
Since #437 introduces the creation of a PersistentVolumeClaim, we could solve this issue within that PR by decreasing the default size of the automatically created PVC and allowing a `persistence.existingClaimName` to be specified to reuse the old one. What's your opinion on this, @pat-s?

Yup, sounds good to me.
BTW, feel free to push to #437 directly if you have some idea(s) that were agreed on. This way we can proceed faster :)
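And for anyone who does need to adjust an existing claim after the defaults change, a minimal sketch of the PVC patch mentioned earlier (the claim name is a placeholder; growing a PVC requires a StorageClass with allowVolumeExpansion enabled, and shrinking is not supported by Kubernetes):

```bash
# Grow an existing claim in place; requests can only be increased, never decreased.
kubectl patch pvc data-gitea-postgresql-0 -n gitea \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```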