nginx ingress microk8s - Service "default/gitea-http" does not have any active Endpoint #253
I have enabled ingress on microk8s with the command microk8s enable ingress. I see this message in the nginx-ingress-microk8s-controller pod log:

Service "default/gitea-http" does not have any active Endpoint
What that turns out to mean is that the ingress config in the values.yaml has to have:

className: public

Without the className: public setting the ingress is ignored. Even with this className: public setting, though, I still get the error above. Any ideas please?
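For illustration, a minimal sketch of what such an ingress section in values.yaml could look like, assuming the chart's standard layout and a placeholder hostname (microk8s registers its ingress addon under the IngressClass name public):

ingress:
  enabled: true
  className: public
  hosts:
    - host: gitea.example.com   # placeholder hostname
      paths:
        - path: /
          pathType: Prefix
  tls: []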
Typically, Kubernetes ingresses also look for a specific annotation to distinguish between different parallel ingress controllers. Have you tried adding such an annotation?
https://stackoverflow.com/a/67041204
If this does not do the trick: The message "No active endpoint" could also mean that something within the traffic path (external request -> ingress -> service -> endpoint -> pod -> application) is incorrect. Inactive endpoints mostly indicate that the pod selectors set on the service do not match.
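As a hedged illustration of that suggestion, the legacy annotation could be added alongside the class name in values.yaml (assuming the microk8s public class):

ingress:
  enabled: true
  className: public
  annotations:
    kubernetes.io/ingress.class: public   # legacy annotation; className is the newer mechanism

To rule out a selector mismatch, checking the endpoints directly is a quick test, e.g. kubectl get endpoints gitea-http and kubectl describe service gitea-http.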
To install nginx such that it works with the ingressClass=nginx, use the command below.
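The exact command from that comment is not reproduced here; a minimal sketch, assuming the upstream ingress-nginx chart, whose default IngressClass is named nginx:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace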
It occurred to me that the gitea-http service depends on the gitea-0 pod. The gitea-0 pod takes some time to come up, as it is waiting on the postgresql service.

I used the Helm chart without the ingress, i.e., with enabled: false. The services started, eventually, and curl 127.0.0.1:3000 returned a web page. I then used k create -f ingress.yaml to apply the ingress, where the content of ingress.yaml is below.
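The applied file is not attached here; a rough sketch of what the chart renders, assuming the default gitea-http service on port 3000, the microk8s public ingress class, and a placeholder hostname:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitea
spec:
  ingressClassName: public
  rules:
    - host: gitea.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitea-http
                port:
                  number: 3000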
I now do not see the error message Service "default/gitea-http" does not have any active Endpoint.
This seems to imply that the ingress can only be applied when all of the services are active. This is an unsatisfactory conclusion if one assumes that it should be possible to apply the ingress in the Helm chart.
The ingress.yaml is a copy of the output from helm --debug install --dependency-update gitea . -f values.yaml with enabled: true set for the ingress section in values.yaml.
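As an aside, the same Ingress manifest can be rendered without installing anything via helm template; the template path here is an assumption and may differ between chart versions:

helm template gitea . -f values.yaml --show-only templates/gitea/ingress.yaml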
In general, all Kubernetes resources (such as Ingress, Deployment, Service) can be applied at any time and in any order. If there is a dependency from one resource on another, Kubernetes will wait until it is available. Ingress resources in particular can be applied at any time. If the Ingress Controller (in your case NGINX, IIRC) is configured correctly, it should auto-update itself with new Ingress resources.
As you said, this is now working for you, right? I'd like to try to reproduce the behavior you described anyway. Please share the versions you are using (the Gitea app version is 1.15.4, right?). If you use the latest available chart version 4.1.1, do you still get this error?
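For reference, the deployed chart and app versions can be read from standard Helm output:

helm list --all-namespaces

The CHART column shows the chart version (e.g. gitea-4.1.1) and the APP VERSION column shows the bundled Gitea version (e.g. 1.15.4).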
Thank you for following up. This is not working properly at all, just inching forward. I am actually going to install a Git server on the VM itself, and move on now. This Gitea container is a small piece of a larger project, and it has swallowed up far too much time now sadly.
But I will be able to test any suggestions you make, as I do want to see how this should work.
I am using this Helm chart: https://gitea.com/NathanDotTo/helm-chart/src/branch/master/values.yaml, which is a fork I made last week.
So, actually, this should get to the same result:
In the end I did not change the Helm chart values.yaml; instead I applied the ingress separately, after the services and pods were up and running.
I installed Helm with:
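The exact command is not recorded in this thread; on Ubuntu, one common way (an assumption, not necessarily what was used here) is:

sudo snap install helm --classic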
I am running on an Ubuntu 21.10 server as a vSphere VM. I also have the build files for the Packer template and the Terraform for the VM itself, if that helps.
Please also note: the storage is required for postgresql, and the DNS is required, else the gitea service can't find the postgresql service.
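For context, these are typically enabled as microk8s addons:

microk8s enable dns
microk8s enable storage   # newer microk8s releases call this addon hostpath-storage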
Thanks for the detailed description. Sorry for misunderstanding your previous comment. I will have a closer look at this in the next days.
The master branch is not the same as the latest release, because it contains unreleased changes. But I think this doesn't really change the current situation.
Many thanks. I also configure sudo iptables -P FORWARD ACCEPT on the microk8s VM.

Obviously, I was not able to have a closer look "the next days". Is this issue still valid, @NathanDotTo?
Stale and possibly outdated -> closing
Looks good thanks 👍👍