WIP: Implementing Network Policy From PR 207 #306
From PR #207
Hi All!
I have created a network policy YAML file and adjusted the _helpers.tpl file in order to stop Gitea pods from communicating with anything outside of the Gitea pods. What I have is really basic, as I am not a pro with Helm charts. Maybe there is a better way of doing it, but this is what I have. What I did was add the following to the _helpers.tpl file:
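(The code block in this comment did not survive the page scrape; judging from the review quotes further down, the added helper presumably looked like this — the closing `{{- end -}}` line is an assumption, as only the define and label lines appear in the quoted diff:)

```yaml
{{/*
Network Policy labels
*/}}
{{- define "gitea.netpolLabels" -}}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
```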
I have added this helper to pull the unique label that Gitea creates on all pods. Then I created the networkpolicy.yaml file and used the above label under matchLabels: in networkpolicy.yaml.
I have tested this with a new deployment and everything seemed to work fine. However, I am not sure whether it will be a breaking change for existing deployments; I have not tested that.
@justusbunsi this is ready to be tested.
Please add a new section about the possible networkpolicy settings inside README.md.
@@ -121,1 +65,4 @@
Network Policy labels
*/}}
{{- define "gitea.netpolLabels" -}}
app.kubernetes.io/instance: {{ .Release.Name }}
Only selecting the instance will also find pods of the memcached deployment, which could create unexpected errors if the policies are too strict. I'd suggest using the existing gitea.selectorLabels helper function instead of defining a new one. The existing one combines the app.kubernetes.io/instance label with the app.kubernetes.io/name label and therefore will only find the pod(s) created by the Gitea StatefulSet.
@@ -0,0 +6,4 @@
spec:
podSelector:
matchLabels:
{{- include "gitea.netpolLabels" . | nindent 6 }}
As suggested above:
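A sketch of the substitution the reviewer is suggesting, assuming the chart's existing gitea.selectorLabels helper:

```yaml
spec:
  podSelector:
    matchLabels:
      {{- include "gitea.selectorLabels" . | nindent 6 }}
```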
@@ -0,0 +11,4 @@
- from:
- podSelector:
matchLabels:
{{- include "gitea.netpolLabels" . | nindent 10 }}
As suggested above:
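Applied to this hunk, the same substitution would read (again assuming the existing gitea.selectorLabels helper):

```yaml
- from:
    - podSelector:
        matchLabels:
          {{- include "gitea.selectorLabels" . | nindent 10 }}
```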
@@ -0,0 +13,4 @@
matchLabels:
{{- include "gitea.netpolLabels" . | nindent 10 }}
- ipBlock:
cidr: {{ .Values.networkpolicy.cidr }}
See next comment
@@ -391,3 +159,1 @@
nodeSelector: {}
tolerations: []
affinity: {}
cidr: 10.0.0.0/8
This cidr value will most likely differ for every Kubernetes cluster, which makes a default value not useful here, IMO. To indicate that this setting has to be defined when .Values.networkpolicy.enabled is true, you could add a required condition when using the value, which breaks the helm install if the value is not specified. My suggestion is to make that cidr line a comment. It would align with other conditional settings in other configuration sections.
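A hedged sketch of both suggestions together. Helm's built-in required function aborts the install with the given message when the value is unset (the exact message text below is an assumption):

```yaml
# values.yaml: leave the value commented out, like other conditional settings
networkpolicy:
  enabled: false
  # cidr: 10.0.0.0/8

# templates/networkpolicy.yaml: fail the install if cidr is unset while enabled
# - ipBlock:
#     cidr: {{ required "networkpolicy.cidr must be set when networkpolicy.enabled is true" .Values.networkpolicy.cidr }}
```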
ipBlock/cidr is supposed to be used only for matching external IPs
[from https://kubernetes.io/docs/concepts/services-networking/network-policies/]
ipBlock: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.
this PR will break ingress/load balancer usage (as it disallows traffic from the ingress controller to the Gitea pods)
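One way to avoid cutting off the ingress controller would be an additional from entry that admits its namespace — a sketch only; the namespace name and label below are assumptions that depend on the cluster:

```yaml
ingress:
  - from:
      # allow the ingress controller (namespace name/label are assumptions)
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: ingress-nginx
```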
should Gitea pods even be talking to each other?
To me this PR is still WIP. There are open items.
This looks stale meanwhile. I'd vote to close if nobody wants to actively tackle this (again) in the near future?
Feel free to re-open if desired!
Pull request closed