Unable to use NFS persistent volumes #378
As the title suggests, I can't get this chart to work with NFS PVs, which I believe is due to permission issues. The relevant parts of my values configuration and my PV/PVC configurations are attached.
On the NFS host I have tried calling
chown 1000:1000 /path/to/app
and
chmod 0777 /path/to/app
and it didn't help.

To troubleshoot, I tried making a version of the Gitea pod that skips the init containers and runs
sleep 1000
for the entrypoint so that it won't fail. Then I get into a shell with
kubectl exec -it this-test-pod -- /bin/bash
and try manually calling the steps of init_directory_structure.sh. It seems the chown command isn't possible, but given that I set the ownership correctly on the NFS host, all the other commands work correctly. Is there a correct/easy way to configure Gitea not to attempt to chown?
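For reference, the debug setup described above can be sketched as a standalone pod that mounts the same claim but replaces the entrypoint with sleep, so the failing init steps can be run by hand via kubectl exec. The pod name, image tag, and claim name below are placeholders, not values taken from the chart:

```yaml
# Hypothetical debug pod: mounts the Gitea data PVC and idles instead of
# running the init scripts, so each step can be tried interactively.
apiVersion: v1
kind: Pod
metadata:
  name: gitea-nfs-debug
spec:
  containers:
    - name: debug
      image: gitea/gitea:1.20        # placeholder tag
      command: ["sleep", "infinity"]  # keep the container alive
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: gitea-shared-storage  # hypothetical claim name
```

With this running, `kubectl exec -it gitea-nfs-debug -- /bin/bash` gives a shell on the mounted NFS volume.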
PS: I couldn't get PostgreSQL to work for similar reasons, but MariaDB managed just fine once I set
chown 1001:1001 /path/to/db
on the NFS host.

Judging from my NFS experience over the years, it really depends on which NFS service you use, whether you use access points or not, and whether you use static or dynamic provisioning.
I don't see where the chart would apply any restrictions here; often you just need to find a flexible approach to file permissions and UID/GID mappings that suits the application.
However, I have never tried NFS with Gitea here. Maybe it's worthwhile to share your motivation behind it? Note that Gitea is not yet HA-capable without major modifications to the chart (see other issues on this topic).
I don't understand what you mean by finding a flexible approach to the UID/GID mappings. I can manually set ownership and permissions on the NFS host such that the Gitea pod is able to read and write to it; it just can't
chown
.

The motivation is just that I have a bare-metal k8s cluster with a BSD server on another local network that has a ton of disks. So it's just to access that storage without installing something fancy, not to have HA.
Most NFS services behave differently (EFS on AWS, NFS on Azure, "stock" NFS on a Linux VM) with respect to permissions and UID/GID mappings.
I don't have experience with bare metal k8s installations or BSD servers.
NFS is known to have higher latency than RWO solutions. Maybe using an RWO storage class would already help?
Both of the above depend on your PV/SC permissions AFAIK - which look to be too restrictive?
Why shouldn't Gitea attempt to chown?
We have NFS working fine with Gitea on bare metal.
Using this provisioner: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
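As a sketch, deploying that provisioner via its Helm chart boils down to pointing it at the NFS export; the server address, export path, and storage-class name below are placeholders, not values from this thread:

```yaml
# Values for the nfs-subdir-external-provisioner Helm chart (a sketch;
# the server IP and export path are placeholders for your own NFS host).
nfs:
  server: 10.0.0.5        # address of the NFS server
  path: /exports/k8s      # exported directory the provisioner manages
storageClass:
  name: nfs-client        # class name to reference from the Gitea chart
```

The provisioner then creates a per-PVC subdirectory under the export, which sidesteps having to hand-manage ownership for each volume.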
Have you already tested the
persistence.subPath
setting? Paired with the provisioner @mattrpav suggested, this should fix your permission issues.

Looks like this is stale or solved. Let's close here; we can re-open if needed.
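For reference, the suggested combination might look like this in the chart's values file (a sketch; the storage-class name "nfs-client" assumes an NFS provisioner class by that name and is not taken from this thread):

```yaml
# Hypothetical Gitea chart values combining a provisioner-backed
# storage class with persistence.subPath.
persistence:
  enabled: true
  storageClass: nfs-client  # class created by the NFS provisioner (assumed name)
  subPath: gitea            # store data in a subdirectory of the volume
```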