Kubernetes iptables changes on service creation
This is a simple investigation of what happens to iptables when a Service is created with and without a targetPort specified.
I was recently told that using targetPort in a Service makes a mess of iptables, so I thought it would be a nice challenge to check whether that's really the case.
I used a plain iptables-save to dump all the iptables rules at once, then created the Service with targetPort, cleaned iptables up by deleting the Service, and did the same without targetPort, to compare what gets added.
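For reference, the comparison itself is just two dumps and a diff; here is a minimal sketch of that workflow (the file names and the manifest name are my own, not from the original run):

# dump the rules before creating the service
iptables-save > before.txt

# create the service (with or without targetPort)
kubectl apply -f ngnix-service.yaml

# dump again and compare
iptables-save > after.txt
diff before.txt after.txt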
Cluster specs:
CPU arch: x86, cluster built with kubeadm
CNI: Calico
KubeProxy mode: iptables
standard etcd
IPtables with targetport in service
> :KUBE-SEP-KVGH6HHOFLBGG2WW - [0:0]
184a186
> :KUBE-SVC-FOI3G5ZK27IESILB - [0:0]
201a204,205
> -A KUBE-NODEPORTS -p tcp -m comment --comment "default/ngnix-service" -m tcp --dport 31224 -j KUBE-MARK-MASQ
> -A KUBE-NODEPORTS -p tcp -m comment --comment "default/ngnix-service" -m tcp --dport 31224 -j KUBE-SVC-FOI3G5ZK27IESILB
> -A KUBE-SEP-KVGH6HHOFLBGG2WW -s 10.1.167.92/32 -m comment --comment "default/ngnix-service" -j KUBE-MARK-MASQ
> -A KUBE-SEP-KVGH6HHOFLBGG2WW -p tcp -m comment --comment "default/ngnix-service" -m tcp -j DNAT --to-destination 10.1.167.92:8000
> -A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.105.57.223/32 -p tcp -m comment --comment "default/ngnix-service cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
> -A KUBE-SERVICES -d 10.105.57.223/32 -p tcp -m comment --comment "default/ngnix-service cluster IP" -m tcp --dport 80 -j KUBE-SVC-FOI3G5ZK27IESILB
> -A KUBE-SVC-FOI3G5ZK27IESILB -m comment --comment "default/ngnix-service" -j KUBE-SEP-KVGH6HHOFLBGG2WW
As we can see in the example above, the DNAT rule rewrites the destination to the pod IP 10.1.167.92 on port 8000, which is the targetPort we specified.
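For context, a Service along these lines should produce rules like the ones above; this is only a sketch reconstructed from the ports visible in the diff (the selector and labels are assumed, not taken from the original manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ngnix-service
spec:
  type: NodePort
  selector:
    app: ngnix          # assumed pod label
  ports:
    - port: 80          # cluster IP port seen in the KUBE-SERVICES rules
      targetPort: 8000  # pod port seen in the DNAT rule
EOF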
IPtables without targetport in service
> :KUBE-SEP-OP54BO3C6MKRBI5R - [0:0]
> :KUBE-SVC-FOI3G5ZK27IESILB - [0:0]
> -A KUBE-NODEPORTS -p tcp -m comment --comment "default/ngnix-service" -m tcp --dport 32681 -j KUBE-MARK-MASQ
> -A KUBE-NODEPORTS -p tcp -m comment --comment "default/ngnix-service" -m tcp --dport 32681 -j KUBE-SVC-FOI3G5ZK27IESILB
> -A KUBE-SEP-OP54BO3C6MKRBI5R -s 10.1.167.92/32 -m comment --comment "default/ngnix-service" -j KUBE-MARK-MASQ
> -A KUBE-SEP-OP54BO3C6MKRBI5R -p tcp -m comment --comment "default/ngnix-service" -m tcp -j DNAT --to-destination 10.1.167.92:80
< -A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
< -A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
< -A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
< -A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
> -A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.129.116/32 -p tcp -m comment --comment "default/ngnix-service cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
> -A KUBE-SERVICES -d 10.96.129.116/32 -p tcp -m comment --comment "default/ngnix-service cluster IP" -m tcp --dport 80 -j KUBE-SVC-FOI3G5ZK27IESILB
> -A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
> -A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
> -A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
> -A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
> -A KUBE-SVC-FOI3G5ZK27IESILB -m comment --comment "default/ngnix-service" -j KUBE-SEP-OP54BO3C6MKRBI5R
In such a simple setup I would say that not providing the targetPort makes an even bigger mess.
Let's see something more sophisticated: a Deployment with 2 replicas.
With targetport:
> :PREROUTING ACCEPT [4:212]
> :INPUT ACCEPT [4:212]
> :OUTPUT ACCEPT [29:1740]
> :POSTROUTING ACCEPT [29:1740]
> :KUBE-SEP-KCPMBF3JPX5ITGQR - [0:0]
> :KUBE-SEP-PPG4JXRVDYEFVT6U - [0:0]
> :KUBE-SVC-JSEMNMAXFXXWPYZQ - [0:0]
> -A KUBE-NODEPORTS -p tcp -m comment --comment "default/ngnix2-service" -m tcp --dport 30329 -j KUBE-MARK-MASQ
> -A KUBE-NODEPORTS -p tcp -m comment --comment "default/ngnix2-service" -m tcp --dport 30329 -j KUBE-SVC-JSEMNMAXFXXWPYZQ
> -A KUBE-SEP-KCPMBF3JPX5ITGQR -s 10.1.129.5/32 -m comment --comment "default/ngnix2-service" -j KUBE-MARK-MASQ
> -A KUBE-SEP-KCPMBF3JPX5ITGQR -p tcp -m comment --comment "default/ngnix2-service" -m tcp -j DNAT --to-destination 10.1.129.5:8000
> -A KUBE-SEP-PPG4JXRVDYEFVT6U -s 10.1.167.83/32 -m comment --comment "default/ngnix2-service" -j KUBE-MARK-MASQ
> -A KUBE-SEP-PPG4JXRVDYEFVT6U -p tcp -m comment --comment "default/ngnix2-service" -m tcp -j DNAT --to-destination 10.1.167.83:8000
> -A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.98.111.212/32 -p tcp -m comment --comment "default/ngnix2-service cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
> -A KUBE-SERVICES -d 10.98.111.212/32 -p tcp -m comment --comment "default/ngnix2-service cluster IP" -m tcp --dport 80 -j KUBE-SVC-JSEMNMAXFXXWPYZQ
> -A KUBE-SVC-JSEMNMAXFXXWPYZQ -m comment --comment "default/ngnix2-service" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-KCPMBF3JPX5ITGQR
> -A KUBE-SVC-JSEMNMAXFXXWPYZQ -m comment --comment "default/ngnix2-service" -j KUBE-SEP-PPG4JXRVDYEFVT6U
From the output above we can see that each pod matching the Service's label selector gets its own KUBE-SEP chain, which absolutely makes sense - how else would kube-proxy know where to send the packets?
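The two KUBE-SEP chains map one-to-one to the Service endpoints, which is easy to cross-check; the expected output shown in the comments is a sketch based on the pod IPs from the diff:

kubectl get endpoints ngnix2-service
# NAME             ENDPOINTS                          AGE
# ngnix2-service   10.1.129.5:8000,10.1.167.83:8000   ...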
OK, let's try the same with two replicas but without providing the targetPort:
Without targetport:
> :PREROUTING ACCEPT [4:252]
> :INPUT ACCEPT [4:252]
> :OUTPUT ACCEPT [25:1500]
> :POSTROUTING ACCEPT [25:1500]
> :KUBE-SEP-NK6MJN7AMVFQPBDQ - [0:0]
> :KUBE-SEP-ZX65TQ3QUDHUAQQM - [0:0]
> :KUBE-SVC-JSEMNMAXFXXWPYZQ - [0:0]
> -A KUBE-NODEPORTS -p tcp -m comment --comment "default/ngnix2-service" -m tcp --dport 31277 -j KUBE-MARK-MASQ
> -A KUBE-NODEPORTS -p tcp -m comment --comment "default/ngnix2-service" -m tcp --dport 31277 -j KUBE-SVC-JSEMNMAXFXXWPYZQ
> -A KUBE-SEP-NK6MJN7AMVFQPBDQ -s 10.1.129.5/32 -m comment --comment "default/ngnix2-service" -j KUBE-MARK-MASQ
> -A KUBE-SEP-NK6MJN7AMVFQPBDQ -p tcp -m comment --comment "default/ngnix2-service" -m tcp -j DNAT --to-destination 10.1.129.5:80
< -A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
< -A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
---
> -A KUBE-SEP-ZX65TQ3QUDHUAQQM -s 10.1.167.83/32 -m comment --comment "default/ngnix2-service" -j KUBE-MARK-MASQ
> -A KUBE-SEP-ZX65TQ3QUDHUAQQM -p tcp -m comment --comment "default/ngnix2-service" -m tcp -j DNAT --to-destination 10.1.167.83:80
> -A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.108.13.83/32 -p tcp -m comment --comment "default/ngnix2-service cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
> -A KUBE-SERVICES -d 10.108.13.83/32 -p tcp -m comment --comment "default/ngnix2-service cluster IP" -m tcp --dport 80 -j KUBE-SVC-JSEMNMAXFXXWPYZQ
> -A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
> -A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
> -A KUBE-SVC-JSEMNMAXFXXWPYZQ -m comment --comment "default/ngnix2-service" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-NK6MJN7AMVFQPBDQ
> -A KUBE-SVC-JSEMNMAXFXXWPYZQ -m comment --comment "default/ngnix2-service" -j KUBE-SEP-ZX65TQ3QUDHUAQQM
This needs proper investigation on my part, but for now what I can see is that without the targetPort the resulting diff touches more components, like kube-dns, while with the targetPort set the kube-dns rules are not touched at all.
To be continued...
Kubernetes etcd - what's inside?
What is etcd and what part does it play in Kubernetes?
etcd is an open-source distributed key-value store. In Kubernetes it is the "single source of truth" as well as a "single point of failure"; it is the "definition of the cluster", since it holds the cluster's configuration and state, so it's best to have it replicated.
There are of course alternatives like Consul, ZooKeeper or Doozerd, but I can't say anything about them yet as I haven't tried them (there are plenty of comparisons on the web already).
I was always curious how it is structured, and it's good to know how it works from an admin point of view - it helps when simulating an etcd failure or corruption, or practising snapshot and restore.
You can play with your etcd either using an etcd client locally (described at the bottom) or, even easier, through the etcd pod. Remember that you can destroy your cluster this way, so stick to read-only operations, and don't play with a production cluster unless you know what you are doing.
Accessing etcd through etcd pod
kubectl get pods -n kube-system -l component=etcd
NAME                READY   STATUS    RESTARTS   AGE
etcd-lenovo-node1   1/1     Running   0          47d
Now I know my etcd pod's name is "etcd-lenovo-node1", so I can execute etcdctl in it.
Checking etcd instances
kubectl exec -it etcd-lenovo-node1 -n kube-system -- /bin/sh -c "ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list"
For reference, let's get all the keys:
kubectl exec -it etcd-lenovo-node1 -n kube-system -- /bin/sh -c "ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get / --prefix --keys-only"
Output:
/calico/ipam/v2/assignment/ipv4/block/10.1.129.0-26 /calico/ipam/v2/assignment/ipv4/block/10.1.161.128-26 /calico/ipam/v2/assignment/ipv4/block/10.1.167.64-26 /calico/ipam/v2/assignment/ipv4/block/10.1.46.192-26 /calico/ipam/v2/handle/ipip-tunnel-addr-lenovo-master /calico/ipam/v2/handle/ipip-tunnel-addr-lenovo-node1 /calico/ipam/v2/handle/ipip-tunnel-addr-lenovo-node2 /calico/ipam/v2/handle/ipip-tunnel-addr-lenovo-node3 /calico/ipam/v2/handle/k8s-pod-network.066670f1f4cbac2f254078fa523c2351ff43d43f37ef279d077ca9e537363367 /calico/ipam/v2/handle/k8s-pod-network.0854af62bdae13df75426ab9f0930045e527203c6a70863a1e6ac419dde92755 /calico/ipam/v2/handle/k8s-pod-network.2ea6c7186cce92fcf37c68cba1013975f8144a36c24580133cdae1d2a5c81824 /calico/ipam/v2/handle/k8s-pod-network.7b253e73a58302b7bd365f748085eddb7a88b73770f09981e4edcf743fca103e /calico/ipam/v2/handle/k8s-pod-network.7d44b409803e0297e38af4571fda00f7f50858fcd6b51556f94ac08561f41415 /calico/ipam/v2/handle/k8s-pod-network.81ee68186cf6390521b6f7211804959de7dd61526b0fa50a62be68bdcdff3348 /calico/ipam/v2/handle/k8s-pod-network.81f75b846e0b01c2756c6de53cb5ede58dcd2f08cfc0fb82b44dbbc41cb3cd83 /calico/ipam/v2/handle/k8s-pod-network.84122c67c08645bcdc8e05024086caa16ec841018b01ed7f15bd29d837653d7f /calico/ipam/v2/handle/k8s-pod-network.c5942bdcf48ee4971952d449db73e4130aa7f57719d48159611ba1591f2aa5e8 /calico/ipam/v2/handle/k8s-pod-network.de3addbfdb7c67b750b074a04d82753a3184d963255c349ed15a63597a6e7dd6 /calico/ipam/v2/host/lenovo-master/ipv4/block/10.1.46.192-26 /calico/ipam/v2/host/lenovo-node1/ipv4/block/10.1.161.128-26 /calico/ipam/v2/host/lenovo-node2/ipv4/block/10.1.167.64-26 /calico/ipam/v2/host/lenovo-node3/ipv4/block/10.1.129.0-26 /calico/resources/v3/projectcalico.org/clusterinformations/default /calico/resources/v3/projectcalico.org/felixconfigurations/default /calico/resources/v3/projectcalico.org/felixconfigurations/node.lenovo-master /calico/resources/v3/projectcalico.org/felixconfigurations/node.lenovo-node1 /calico/resources/v3/projectcalico.org/felixconfigurations/node.lenovo-node2 /calico/resources/v3/projectcalico.org/felixconfigurations/node.lenovo-node3 /calico/resources/v3/projectcalico.org/ippools/default-ipv4-ippool /calico/resources/v3/projectcalico.org/kubecontrollersconfigurations/default /calico/resources/v3/projectcalico.org/nodes/lenovo-master /calico/resources/v3/projectcalico.org/nodes/lenovo-node1 /calico/resources/v3/projectcalico.org/nodes/lenovo-node2 /calico/resources/v3/projectcalico.org/nodes/lenovo-node3 /calico/resources/v3/projectcalico.org/profiles/kns.default /calico/resources/v3/projectcalico.org/profiles/kns.kube-node-lease /calico/resources/v3/projectcalico.org/profiles/kns.kube-public /calico/resources/v3/projectcalico.org/profiles/kns.kube-system /calico/resources/v3/projectcalico.org/profiles/kns.metallb-system /calico/resources/v3/projectcalico.org/profiles/kns.quota-mem-cpu /calico/resources/v3/projectcalico.org/profiles/ksa.default.default /calico/resources/v3/projectcalico.org/profiles/ksa.kube-node-lease.default /calico/resources/v3/projectcalico.org/profiles/ksa.kube-public.default /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.attachdetach-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.bootstrap-signer /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.calico-kube-controllers /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.calico-node /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.certificate-controller 
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.clusterrole-aggregation-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.coredns /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.cronjob-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.daemon-set-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.default /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.deployment-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.disruption-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.endpoint-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.endpointslice-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.endpointslicemirroring-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.expand-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.generic-garbage-collector /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.horizontal-pod-autoscaler /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.job-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.kube-proxy /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.metrics-server /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.namespace-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.node-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.persistent-volume-binder /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.pod-garbage-collector /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.pv-protection-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.pvc-protection-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.replicaset-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.replication-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.resourcequota-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.root-ca-cert-publisher /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.service-account-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.service-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.statefulset-controller /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.token-cleaner /calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.ttl-controller /calico/resources/v3/projectcalico.org/profiles/ksa.metallb-system.controller /calico/resources/v3/projectcalico.org/profiles/ksa.metallb-system.default /calico/resources/v3/projectcalico.org/profiles/ksa.metallb-system.speaker /calico/resources/v3/projectcalico.org/workloadendpoints/default/lenovo--master-k8s-nginx--hpa--6c4758554f--99h7l-eth0 /calico/resources/v3/projectcalico.org/workloadendpoints/default/lenovo--master-k8s-nginx--hpa--6c4758554f--tqrp9-eth0 /calico/resources/v3/projectcalico.org/workloadendpoints/default/lenovo--master-k8s-nginx--hpa--6c4758554f--zf4rd-eth0 /calico/resources/v3/projectcalico.org/workloadendpoints/default/lenovo--node2-k8s-ng-eth0 /calico/resources/v3/projectcalico.org/workloadendpoints/default/lenovo--node2-k8s-nginx--b4c9f744d--6fqjs-eth0 /calico/resources/v3/projectcalico.org/workloadendpoints/default/lenovo--node2-k8s-nginx--b4c9f744d--hvdsh-eth0 
/calico/resources/v3/projectcalico.org/workloadendpoints/kube-system/lenovo--master-k8s-metrics--server--666b5bc478--8624s-eth0 /calico/resources/v3/projectcalico.org/workloadendpoints/kube-system/lenovo--node1-k8s-coredns--74ff55c5b--n942q-eth0 /calico/resources/v3/projectcalico.org/workloadendpoints/kube-system/lenovo--node1-k8s-coredns--74ff55c5b--vnm7t-eth0 /calico/resources/v3/projectcalico.org/workloadendpoints/metallb-system/lenovo--node3-k8s-controller--65db86ddc6--q6zvx-eth0 /registry/apiregistration.k8s.io/apiservices/v1. /registry/apiregistration.k8s.io/apiservices/v1.admissionregistration.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.apiextensions.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.apps /registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.autoscaling /registry/apiregistration.k8s.io/apiservices/v1.batch /registry/apiregistration.k8s.io/apiservices/v1.certificates.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.coordination.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.events.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.networking.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.node.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.rbac.authorization.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.admissionregistration.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.apiextensions.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.authentication.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.authorization.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.batch /registry/apiregistration.k8s.io/apiservices/v1beta1.certificates.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.coordination.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.discovery.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.events.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.extensions /registry/apiregistration.k8s.io/apiservices/v1beta1.flowcontrol.apiserver.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.networking.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.node.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.policy /registry/apiregistration.k8s.io/apiservices/v1beta1.rbac.authorization.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.scheduling.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.storage.k8s.io /registry/apiregistration.k8s.io/apiservices/v2beta1.autoscaling /registry/apiregistration.k8s.io/apiservices/v2beta2.autoscaling /registry/clusterrolebindings/calico-kube-controllers /registry/clusterrolebindings/calico-node /registry/clusterrolebindings/cluster-admin /registry/clusterrolebindings/kubeadm:get-nodes /registry/clusterrolebindings/kubeadm:kubelet-bootstrap /registry/clusterrolebindings/kubeadm:node-autoapprove-bootstrap /registry/clusterrolebindings/kubeadm:node-autoapprove-certificate-rotation /registry/clusterrolebindings/kubeadm:node-proxier /registry/clusterrolebindings/metallb-system:controller /registry/clusterrolebindings/metallb-system:speaker /registry/clusterrolebindings/metrics-server:system:auth-delegator 
/registry/clusterrolebindings/system:basic-user /registry/clusterrolebindings/system:controller:attachdetach-controller /registry/clusterrolebindings/system:controller:certificate-controller /registry/clusterrolebindings/system:controller:clusterrole-aggregation-controller /registry/clusterrolebindings/system:controller:cronjob-controller /registry/clusterrolebindings/system:controller:daemon-set-controller /registry/clusterrolebindings/system:controller:deployment-controller /registry/clusterrolebindings/system:controller:disruption-controller /registry/clusterrolebindings/system:controller:endpoint-controller /registry/clusterrolebindings/system:controller:endpointslice-controller /registry/clusterrolebindings/system:controller:endpointslicemirroring-controller /registry/clusterrolebindings/system:controller:expand-controller /registry/clusterrolebindings/system:controller:generic-garbage-collector /registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler /registry/clusterrolebindings/system:controller:job-controller /registry/clusterrolebindings/system:controller:namespace-controller /registry/clusterrolebindings/system:controller:node-controller /registry/clusterrolebindings/system:controller:persistent-volume-binder /registry/clusterrolebindings/system:controller:pod-garbage-collector /registry/clusterrolebindings/system:controller:pv-protection-controller /registry/clusterrolebindings/system:controller:pvc-protection-controller /registry/clusterrolebindings/system:controller:replicaset-controller /registry/clusterrolebindings/system:controller:replication-controller /registry/clusterrolebindings/system:controller:resourcequota-controller /registry/clusterrolebindings/system:controller:root-ca-cert-publisher /registry/clusterrolebindings/system:controller:route-controller /registry/clusterrolebindings/system:controller:service-account-controller /registry/clusterrolebindings/system:controller:service-controller /registry/clusterrolebindings/system:controller:statefulset-controller /registry/clusterrolebindings/system:controller:ttl-controller /registry/clusterrolebindings/system:coredns /registry/clusterrolebindings/system:discovery /registry/clusterrolebindings/system:kube-controller-manager /registry/clusterrolebindings/system:kube-dns /registry/clusterrolebindings/system:kube-scheduler /registry/clusterrolebindings/system:metrics-server /registry/clusterrolebindings/system:monitoring /registry/clusterrolebindings/system:node /registry/clusterrolebindings/system:node-proxier /registry/clusterrolebindings/system:public-info-viewer /registry/clusterrolebindings/system:service-account-issuer-discovery /registry/clusterrolebindings/system:volume-scheduler /registry/clusterroles/admin /registry/clusterroles/calico-kube-controllers /registry/clusterroles/calico-node /registry/clusterroles/cluster-admin /registry/clusterroles/edit /registry/clusterroles/kubeadm:get-nodes /registry/clusterroles/metallb-system:controller /registry/clusterroles/metallb-system:speaker /registry/clusterroles/system:aggregate-to-admin /registry/clusterroles/system:aggregate-to-edit /registry/clusterroles/system:aggregate-to-view /registry/clusterroles/system:aggregated-metrics-reader /registry/clusterroles/system:auth-delegator /registry/clusterroles/system:basic-user /registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient /registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 
/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver /registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver /registry/clusterroles/system:certificates.k8s.io:kubelet-serving-approver /registry/clusterroles/system:certificates.k8s.io:legacy-unknown-approver /registry/clusterroles/system:controller:attachdetach-controller /registry/clusterroles/system:controller:certificate-controller /registry/clusterroles/system:controller:clusterrole-aggregation-controller /registry/clusterroles/system:controller:cronjob-controller /registry/clusterroles/system:controller:daemon-set-controller /registry/clusterroles/system:controller:deployment-controller /registry/clusterroles/system:controller:disruption-controller /registry/clusterroles/system:controller:endpoint-controller /registry/clusterroles/system:controller:endpointslice-controller /registry/clusterroles/system:controller:endpointslicemirroring-controller /registry/clusterroles/system:controller:expand-controller /registry/clusterroles/system:controller:generic-garbage-collector /registry/clusterroles/system:controller:horizontal-pod-autoscaler /registry/clusterroles/system:controller:job-controller /registry/clusterroles/system:controller:namespace-controller /registry/clusterroles/system:controller:node-controller /registry/clusterroles/system:controller:persistent-volume-binder /registry/clusterroles/system:controller:pod-garbage-collector /registry/clusterroles/system:controller:pv-protection-controller /registry/clusterroles/system:controller:pvc-protection-controller /registry/clusterroles/system:controller:replicaset-controller /registry/clusterroles/system:controller:replication-controller /registry/clusterroles/system:controller:resourcequota-controller /registry/clusterroles/system:controller:root-ca-cert-publisher /registry/clusterroles/system:controller:route-controller /registry/clusterroles/system:controller:service-account-controller /registry/clusterroles/system:controller:service-controller /registry/clusterroles/system:controller:statefulset-controller /registry/clusterroles/system:controller:ttl-controller /registry/clusterroles/system:coredns /registry/clusterroles/system:discovery /registry/clusterroles/system:heapster /registry/clusterroles/system:kube-aggregator /registry/clusterroles/system:kube-controller-manager /registry/clusterroles/system:kube-dns /registry/clusterroles/system:kube-scheduler /registry/clusterroles/system:kubelet-api-admin /registry/clusterroles/system:metrics-server /registry/clusterroles/system:monitoring /registry/clusterroles/system:node /registry/clusterroles/system:node-bootstrapper /registry/clusterroles/system:node-problem-detector /registry/clusterroles/system:node-proxier /registry/clusterroles/system:persistent-volume-provisioner /registry/clusterroles/system:public-info-viewer /registry/clusterroles/system:service-account-issuer-discovery /registry/clusterroles/system:volume-scheduler /registry/clusterroles/view /registry/configmaps/default/cfindex /registry/configmaps/default/kube-root-ca.crt /registry/configmaps/kube-node-lease/kube-root-ca.crt /registry/configmaps/kube-public/cluster-info /registry/configmaps/kube-public/kube-root-ca.crt /registry/configmaps/kube-system/calico-config /registry/configmaps/kube-system/coredns /registry/configmaps/kube-system/extension-apiserver-authentication /registry/configmaps/kube-system/kube-proxy /registry/configmaps/kube-system/kube-root-ca.crt 
/registry/configmaps/kube-system/kubeadm-config /registry/configmaps/kube-system/kubelet-config-1.20 /registry/configmaps/metallb-system/kube-root-ca.crt /registry/controllerrevisions/kube-system/calico-node-849b8dc6bf /registry/controllerrevisions/kube-system/kube-proxy-9978ddf98 /registry/controllerrevisions/metallb-system/speaker-55466f8f44 /registry/csinodes/lenovo-master /registry/csinodes/lenovo-node1 /registry/csinodes/lenovo-node2 /registry/csinodes/lenovo-node3 /registry/daemonsets/kube-system/calico-node /registry/daemonsets/kube-system/kube-proxy /registry/daemonsets/metallb-system/speaker /registry/deployments/default/nginx /registry/deployments/kube-system/calico-kube-controllers /registry/deployments/kube-system/coredns /registry/deployments/kube-system/metrics-server /registry/deployments/metallb-system/controller /registry/endpointslices/default/kubernetes /registry/endpointslices/default/nginx-gr59r /registry/endpointslices/default/ngnix-service-sffvv /registry/endpointslices/default/test1-fv4vr /registry/endpointslices/kube-system/kube-dns-hjqkv /registry/endpointslices/kube-system/kubelet-2fkdq /registry/endpointslices/kube-system/metrics-server-j47kl /registry/flowschemas/catch-all /registry/flowschemas/exempt /registry/flowschemas/global-default /registry/flowschemas/kube-controller-manager /registry/flowschemas/kube-scheduler /registry/flowschemas/kube-system-service-accounts /registry/flowschemas/service-accounts /registry/flowschemas/system-leader-election /registry/flowschemas/system-nodes /registry/flowschemas/workload-leader-election /registry/leases/kube-node-lease/lenovo-master /registry/leases/kube-node-lease/lenovo-node1 /registry/leases/kube-node-lease/lenovo-node2 /registry/leases/kube-node-lease/lenovo-node3 /registry/leases/kube-system/kube-controller-manager /registry/leases/kube-system/kube-scheduler /registry/masterleases/192.168.1.131 /registry/minions/lenovo-master /registry/minions/lenovo-node1 /registry/minions/lenovo-node2 /registry/minions/lenovo-node3 /registry/namespaces/default /registry/namespaces/kube-node-lease /registry/namespaces/kube-public /registry/namespaces/kube-system /registry/namespaces/metallb-system /registry/namespaces/quota-mem-cpu /registry/poddisruptionbudgets/kube-system/calico-kube-controllers /registry/pods/default/ng /registry/pods/default/nginx-b4c9f744d-6fqjs /registry/pods/default/nginx-b4c9f744d-hvdsh /registry/pods/default/nginx-hpa-6c4758554f-99h7l /registry/pods/default/nginx-hpa-6c4758554f-tqrp9 /registry/pods/default/nginx-hpa-6c4758554f-zf4rd /registry/pods/kube-system/calico-kube-controllers-664b5654ff-lmfjw /registry/pods/kube-system/calico-node-6vtln /registry/pods/kube-system/calico-node-9psrj /registry/pods/kube-system/calico-node-n64kf /registry/pods/kube-system/calico-node-s4gnp /registry/pods/kube-system/coredns-74ff55c5b-n942q /registry/pods/kube-system/coredns-74ff55c5b-vnm7t /registry/pods/kube-system/etcd-lenovo-node1 /registry/pods/kube-system/kube-apiserver-lenovo-node1 /registry/pods/kube-system/kube-controller-manager-lenovo-node1 /registry/pods/kube-system/kube-proxy-dxtr2 /registry/pods/kube-system/kube-proxy-r7jpl /registry/pods/kube-system/kube-proxy-sb4b6 /registry/pods/kube-system/kube-proxy-v9xck /registry/pods/kube-system/kube-scheduler-lenovo-node1 /registry/pods/kube-system/metrics-server-666b5bc478-8624s /registry/pods/metallb-system/controller-65db86ddc6-q6zvx /registry/pods/metallb-system/speaker-6mzwx /registry/pods/metallb-system/speaker-btrtz 
/registry/pods/metallb-system/speaker-pxf28 /registry/podsecuritypolicy/controller /registry/podsecuritypolicy/speaker /registry/priorityclasses/system-cluster-critical /registry/priorityclasses/system-node-critical /registry/prioritylevelconfigurations/catch-all /registry/prioritylevelconfigurations/exempt /registry/prioritylevelconfigurations/global-default /registry/prioritylevelconfigurations/leader-election /registry/prioritylevelconfigurations/system /registry/prioritylevelconfigurations/workload-high /registry/prioritylevelconfigurations/workload-low /registry/ranges/serviceips /registry/ranges/servicenodeports /registry/replicasets/default/nginx-6799fc88d8 /registry/replicasets/default/nginx-6c54d6848f /registry/replicasets/default/nginx-b4c9f744d /registry/replicasets/kube-system/calico-kube-controllers-664b5654ff /registry/replicasets/kube-system/coredns-74ff55c5b /registry/replicasets/kube-system/metrics-server-666b5bc478 /registry/replicasets/metallb-system/controller-65db86ddc6 /registry/rolebindings/kube-public/kubeadm:bootstrap-signer-clusterinfo /registry/rolebindings/kube-public/system:controller:bootstrap-signer /registry/rolebindings/kube-system/kube-proxy /registry/rolebindings/kube-system/kubeadm:kubelet-config-1.20 /registry/rolebindings/kube-system/kubeadm:nodes-kubeadm-config /registry/rolebindings/kube-system/metrics-server-auth-reader /registry/rolebindings/kube-system/system::extension-apiserver-authentication-reader /registry/rolebindings/kube-system/system::leader-locking-kube-controller-manager /registry/rolebindings/kube-system/system::leader-locking-kube-scheduler /registry/rolebindings/kube-system/system:controller:bootstrap-signer /registry/rolebindings/kube-system/system:controller:cloud-provider /registry/rolebindings/kube-system/system:controller:token-cleaner /registry/rolebindings/metallb-system/config-watcher /registry/rolebindings/metallb-system/pod-lister /registry/roles/kube-public/kubeadm:bootstrap-signer-clusterinfo /registry/roles/kube-public/system:controller:bootstrap-signer /registry/roles/kube-system/extension-apiserver-authentication-reader /registry/roles/kube-system/kube-proxy /registry/roles/kube-system/kubeadm:kubelet-config-1.20 /registry/roles/kube-system/kubeadm:nodes-kubeadm-config /registry/roles/kube-system/system::leader-locking-kube-controller-manager /registry/roles/kube-system/system::leader-locking-kube-scheduler /registry/roles/kube-system/system:controller:bootstrap-signer /registry/roles/kube-system/system:controller:cloud-provider /registry/roles/kube-system/system:controller:token-cleaner /registry/roles/metallb-system/config-watcher /registry/roles/metallb-system/pod-lister /registry/secrets/default/default-token-qknwm /registry/secrets/kube-node-lease/default-token-xhxwz /registry/secrets/kube-public/default-token-767ld /registry/secrets/kube-system/attachdetach-controller-token-rm5kc /registry/secrets/kube-system/bootstrap-signer-token-fwnzd /registry/secrets/kube-system/calico-etcd-secrets /registry/secrets/kube-system/calico-kube-controllers-token-h4trc /registry/secrets/kube-system/calico-node-token-js7t8 /registry/secrets/kube-system/certificate-controller-token-pk96t /registry/secrets/kube-system/clusterrole-aggregation-controller-token-xxb5s /registry/secrets/kube-system/coredns-token-b2z2f /registry/secrets/kube-system/cronjob-controller-token-54p6d /registry/secrets/kube-system/daemon-set-controller-token-sbtsk /registry/secrets/kube-system/default-token-9fhbc 
/registry/secrets/kube-system/deployment-controller-token-swxcw /registry/secrets/kube-system/disruption-controller-token-2rr6w /registry/secrets/kube-system/endpoint-controller-token-fmjrz /registry/secrets/kube-system/endpointslice-controller-token-sbn6n /registry/secrets/kube-system/endpointslicemirroring-controller-token-qrld7 /registry/secrets/kube-system/expand-controller-token-tfgpk /registry/secrets/kube-system/generic-garbage-collector-token-nc855 /registry/secrets/kube-system/horizontal-pod-autoscaler-token-h8rl9 /registry/secrets/kube-system/job-controller-token-d7lnj /registry/secrets/kube-system/kube-proxy-token-9snst /registry/secrets/kube-system/metrics-server-token-szltz /registry/secrets/kube-system/namespace-controller-token-rwn7m /registry/secrets/kube-system/node-controller-token-zqvxv /registry/secrets/kube-system/persistent-volume-binder-token-6vj8p /registry/secrets/kube-system/pod-garbage-collector-token-77gp8 /registry/secrets/kube-system/pv-protection-controller-token-49c2m /registry/secrets/kube-system/pvc-protection-controller-token-twhrk /registry/secrets/kube-system/replicaset-controller-token-d4bzb /registry/secrets/kube-system/replication-controller-token-7mprg /registry/secrets/kube-system/resourcequota-controller-token-x97qt /registry/secrets/kube-system/root-ca-cert-publisher-token-gr4cq /registry/secrets/kube-system/service-account-controller-token-46wxl /registry/secrets/kube-system/service-controller-token-dbnc5 /registry/secrets/kube-system/statefulset-controller-token-fxblr /registry/secrets/kube-system/token-cleaner-token-c48kq /registry/secrets/kube-system/ttl-controller-token-q5wmc /registry/secrets/metallb-system/controller-token-9vrqd /registry/secrets/metallb-system/default-token-9jw8j /registry/secrets/metallb-system/memberlist /registry/secrets/metallb-system/speaker-token-d6b7b /registry/serviceaccounts/default/default /registry/serviceaccounts/kube-node-lease/default /registry/serviceaccounts/kube-public/default /registry/serviceaccounts/kube-system/attachdetach-controller /registry/serviceaccounts/kube-system/bootstrap-signer /registry/serviceaccounts/kube-system/calico-kube-controllers /registry/serviceaccounts/kube-system/calico-node /registry/serviceaccounts/kube-system/certificate-controller /registry/serviceaccounts/kube-system/clusterrole-aggregation-controller /registry/serviceaccounts/kube-system/coredns /registry/serviceaccounts/kube-system/cronjob-controller /registry/serviceaccounts/kube-system/daemon-set-controller /registry/serviceaccounts/kube-system/default /registry/serviceaccounts/kube-system/deployment-controller /registry/serviceaccounts/kube-system/disruption-controller /registry/serviceaccounts/kube-system/endpoint-controller /registry/serviceaccounts/kube-system/endpointslice-controller /registry/serviceaccounts/kube-system/endpointslicemirroring-controller /registry/serviceaccounts/kube-system/expand-controller /registry/serviceaccounts/kube-system/generic-garbage-collector /registry/serviceaccounts/kube-system/horizontal-pod-autoscaler /registry/serviceaccounts/kube-system/job-controller /registry/serviceaccounts/kube-system/kube-proxy /registry/serviceaccounts/kube-system/metrics-server /registry/serviceaccounts/kube-system/namespace-controller /registry/serviceaccounts/kube-system/node-controller /registry/serviceaccounts/kube-system/persistent-volume-binder /registry/serviceaccounts/kube-system/pod-garbage-collector /registry/serviceaccounts/kube-system/pv-protection-controller 
/registry/serviceaccounts/kube-system/pvc-protection-controller /registry/serviceaccounts/kube-system/replicaset-controller /registry/serviceaccounts/kube-system/replication-controller /registry/serviceaccounts/kube-system/resourcequota-controller /registry/serviceaccounts/kube-system/root-ca-cert-publisher /registry/serviceaccounts/kube-system/service-account-controller /registry/serviceaccounts/kube-system/service-controller /registry/serviceaccounts/kube-system/statefulset-controller /registry/serviceaccounts/kube-system/token-cleaner /registry/serviceaccounts/kube-system/ttl-controller /registry/serviceaccounts/metallb-system/controller /registry/serviceaccounts/metallb-system/default /registry/serviceaccounts/metallb-system/speaker /registry/services/endpoints/default/kubernetes /registry/services/endpoints/default/nginx /registry/services/endpoints/default/ngnix-service /registry/services/endpoints/default/test1 /registry/services/endpoints/kube-system/kube-dns /registry/services/endpoints/kube-system/kubelet /registry/services/endpoints/kube-system/metrics-server /registry/services/specs/default/kubernetes /registry/services/specs/default/nginx /registry/services/specs/default/ngnix-service /registry/services/specs/default/test1 /registry/services/specs/kube-system/kube-dns /registry/services/specs/kube-system/kubelet /registry/services/specs/kube-system/metrics-server
Now let's check all pods in all namespaces, i.e. the equivalent of kubectl get pods --all-namespaces:
kubectl exec -it etcd-lenovo-node1 -n kube-system -- /bin/sh -c "ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/pods --prefix --keys-only"
For reference, the kubectl view of the same pods:
➜ etcd git:(master) ✗ kubectl get pods --all-namespaces
NAMESPACE        NAME                                       READY   STATUS        RESTARTS   AGE
default          ng                                         1/1     Running       0          11d
default          nginx-b4c9f744d-6fqjs                      1/1     Running       0          10d
default          nginx-b4c9f744d-hvdsh                      1/1     Running       0          10d
default          nginx-hpa-6c4758554f-99h7l                 1/1     Terminating   0          45d
default          nginx-hpa-6c4758554f-tqrp9                 1/1     Terminating   0          45d
default          nginx-hpa-6c4758554f-zf4rd                 1/1     Terminating   0          45d
kube-system      calico-kube-controllers-664b5654ff-lmfjw   1/1     Running       0          46d
kube-system      calico-node-6vtln                          1/1     Running       0          46d
kube-system      calico-node-9psrj                          1/1     Running       0          46d
kube-system      calico-node-n64kf                          1/1     Running       0          46d
kube-system      calico-node-s4gnp                          1/1     Running       0          46d
kube-system      coredns-74ff55c5b-n942q                    1/1     Running       0          47d
kube-system      coredns-74ff55c5b-vnm7t                    1/1     Running       0          47d
kube-system      etcd-lenovo-node1                          1/1     Running       0          47d
kube-system      kube-apiserver-lenovo-node1                1/1     Running       0          47d
kube-system      kube-controller-manager-lenovo-node1       1/1     Running       0          47d
kube-system      kube-proxy-dxtr2                           1/1     Running       0          47d
kube-system      kube-proxy-r7jpl                           1/1     Running       0          47d
kube-system      kube-proxy-sb4b6                           1/1     Running       0          47d
kube-system      kube-proxy-v9xck                           1/1     Running       0          47d
kube-system      kube-scheduler-lenovo-node1                1/1     Running       0          47d
kube-system      metrics-server-666b5bc478-8624s            1/1     Running       0          45d
metallb-system   controller-65db86ddc6-q6zvx                1/1     Running       0          33d
metallb-system   speaker-6mzwx                              1/1     Running       0          33d
metallb-system   speaker-btrtz                              1/1     Running       0          33d
metallb-system   speaker-pxf28                              1/1     Running       0          33d
From the reference above I can see that pods live under /registry/pods, and the next key component is the namespace, so to get the pods in kube-system we need to use the key prefix /registry/pods/kube-system/.
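So, for example, listing only the kube-system pod keys should just be a matter of narrowing the prefix (same flags as before, pod name from my cluster):

kubectl exec -it etcd-lenovo-node1 -n kube-system -- /bin/sh -c "ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/pods/kube-system/ --prefix --keys-only"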
We might observe something interesting watching one of the pods:
kubectl exec -it etcd-lenovo-node1 -n kube-system -- /bin/sh -c "ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  watch /registry/pods/default/nginx-b4c9f744d-6fqjs"
and I will just add a new label to the pod in another terminal:
kubectl label pods nginx-b4c9f744d-6fqjs my-new-label=test
We can see the change straight away in the watch output :)
Accessing etcd from host
If we want to access etcd with etcdctl locally (locally still means on one of the nodes), we just need to install etcd-client.
On Ubuntu:
sudo apt-get install etcd-client
and then we should be able to list the etcd cluster members with:
sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member list
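Once etcdctl works from the host, taking and checking a snapshot is the natural next step for the failure/restore experiments mentioned above; a sketch (the snapshot path is my own choice):

sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /tmp/etcd-backup.db

# sanity-check the snapshot
sudo ETCDCTL_API=3 etcdctl --write-out=table snapshot status /tmp/etcd-backup.db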
Build cheapest kubernetes i7 cluster
How I built a 4-node i7/16GB Kubernetes cluster
I always wanted to build my own k8s cluster, but couldn't find any reliable "budget" 3-4 node option to go for - yes, there are a lot of attempts at building Kubernetes on:
Raspberry Pis, which are not x86 compatible, so you won't run a lot of Docker images there
TV HDMI sticks - better than a Raspberry Pi, but the CPUs are quite weak.
Then one day my laptop's display broke.
It was an i7 with 16GB RAM, and I identified the fault as the LCD connector on the motherboard, since the HDMI output still worked fine.
I replaced the motherboard, which left me with the old one as a spare - headless, without a display, but with a working HDMI output.
While buying the replacement motherboard I also discovered that these boards sell for about 100 quid (coming from refurbished ex-lease laptops), so this is probably the cheapest k8s cluster you can build with an i7 and 16GB RAM per node.

The parts:
Note: I've used the 00JT811 because I had a spare one; you can use any i7/i5 laptop motherboard. I've just found that on eBay there are actually i5 00JT371 (i5-5200U) boards for 50 quid(!), with the same number of cores/threads, just at a lower base frequency (2.20GHz vs 2.60GHz).
3 x £120 Lenovo Yoga X1 Carbon 00JT811 Core i7-6600U 2.6GHz Laptop Motherboard [from ebay] || or 4x £50 i5 00JT371 i5-5200U
4 x £12.93 SanDisk 120GB SSD [from Amazon Warehouse deals] - optional, you could even make the nodes network-bootable!!!
4 x Lenovo laptop charging adapters (I had old ones so didn't have to buy them)
3 x 1GB Ethernet over OneLink+ interface (£7 each)
1 x 1GB Ethernet over OneLink+ dock station (I had an old one)
1 x used battery for the master node (to protect etcd even better from power failure) £10
1 x server case to pack it all up: 2U short ATX chassis, low-profile PCI card support, 390mm depth £47
Overall for mine with i7: £489
Alternative with i5: £329
I had an old Intel Skull Canyon i7 as my previous lab server and sold it for £400, so in the end, by adding just 89 quid and a week of evenings, I had a powerful 4-node Kubernetes cluster.
Problem 1 - How to stack it on each other?
I had to find a way to stack the motherboards on top of each other safely - fortunately I had an Ender 3 3D printer and a bit of 3D prototyping knowledge in Blender, so I designed simple pins on which I could stack the motherboards.


Problem 2 - Powering it on
Solution 1 - BIOS "Restore on AC Power Loss" - just plugging in the power adapter should power on my motherboards.
Solution 2 - Add a power button - unfortunately it's almost impossible to buy the power button separately for this motherboard; they are available on AliExpress for 17 quid, which is ridiculous.
I found one button for £5 on eBay and reverse engineered it.
It turns out a laptop power switch works a bit differently than I thought - it's not like a PC ATX switch that simply shorts two wires. I had to check with a multimeter what happens on which pins of the power-switch port, and it turned out that pressing the button just adds 10 Ohm of resistance. So I made my own buttons and added the resistors to each of them.
Mind that the power adapters sit outside of the whole server case and even the server cabinet - they warm up and generate extra heat, so it's better to keep them out; the cabinet fans will be quieter that way.


I forgot to take a picture, but below the buttons there are 10 Ohm resistors soldered to one leg.

So it's pins 1 and 3 (with the ribbon positioned like below) that need the 10 Ohm resistance applied across them.

I had to make my own buttons too, as the case came with two buttons and two LED diodes; I took the whole PCB off, made my own with just the buttons, and printed the long plastic button caps. I also added external USB 3.0 extensions.

Problem 3 Ethernet
I wanted it to look nice, so I designed a panel to fit into the server case and connect all the Ethernet adapters and the docking station together.




Connecting it all together inside the server case
Having all the parts printed and ready, I started assembling everything together, which was the most fun part.




Battery for master node
As seen below, at the bottom the motherboard is supplied by a laptop battery - it's the master node with etcd running, so it gets some extra power protection (I have two UPSes for the whole cabinet anyway).

Final effects
So in the end this little 2U box is a 4-node i7/16GB Kubernetes cluster!!!

Clarification
Of course I forgot to mention (intentionally or not, to lower the overall cost :) ) that the extra 2U case didn't fit into my 9U cabinet anymore, as a Dream Machine Pro, a PoE switch and a NAS are already in there, so I had to buy a bigger 15U server cabinet.
Because of that I had to hire an external contractor to assemble it, which cost me two extra hours spent at the playground plus two Kinder Surprise eggs.

So inside the case it looks like this:

And everything connected together


Future plans
For now I have half-automatically installed k8s on my nodes using the playbooks from my previous project (Kubernetes cluster on Vagrant) https://github.com/greg4fun/k8s_simulation_on_vagrant , but I plan to make it fully IaC and use HashiCorp's Terraform.

Master temperature with lights
As seen in the photos below there are LEDs - this is a Philips Hue strip. I have already played with its Python API and I'm going to tie those LEDs to the master node's temperature readings.


Mysql on kubernetes with persistent volume and secrets
Volumes
Persistent storage with NFS
In this example I have created an NFS share "PersistentVolume" on my QNAP NAS, whose IP is 192.168.1.11. Create persistentVolume.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nfsvers=4.1
  nfs:
    path: /PersistentVolume/pv0001
    server: 192.168.1.11
  persistentVolumeReclaimPolicy: Retain
Create persistentVolumeClaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
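Both manifests can then be applied and checked in the usual way (a quick sketch, using the file names from above):

kubectl apply -f persistentVolume.yml
kubectl apply -f persistentVolumeClaim.yml

# the PV should end up Bound to default/mysql-pv-claim
kubectl get pv
kubectl get pvc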
Secrets
The configuration of your containers should be stored in a separate place to guarantee mobility (it shouldn't be hardcoded), and neither should it be stored in the database. The best approach is to keep configuration in environment variables; with plain Docker you can keep it in gitignored env files or in variables you set at container startup. In Kubernetes you can store configuration such as usernames, passwords and API URLs in ConfigMaps and Secrets. Passwords shouldn't go into ConfigMaps though, as they are kept there in plain text, so the best choice for passwords is Secrets, which store data base64-encoded.
Create a password, a user and a database name, and encode them with base64:
echo -n "MyPassword" | base64 #TXlQYXNzd29yZA== echo -n "django" | base64 # ZGphbmdv echo -n "kubernetes_test" | base64 # a3ViZXJuZXRlc190ZXN0
Put the above results into secrets.yml:
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secrets
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: TXlQYXNzd29yZA==
  MYSQL_USER: ZGphbmdv
  MYSQL_PASSWORD: ZGphbmdv
  MYSQL_DATABASE: a3ViZXJuZXRlc190ZXN0
Then create the Secret on your cluster:
kubectl create -f secrets.yml
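A quick way to confirm the Secret landed (describe shows only the keys and value sizes, not the values themselves):

kubectl get secret mysql-secrets
kubectl describe secret mysql-secrets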
Mysql application
Now, having the PersistentVolumeClaim and the Secret, we can write the MySQL deployment file.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              subPath: "mysql"
              name: mysql-data
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: MYSQL_ROOT_PASSWORD
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: MYSQL_USER
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: MYSQL_PASSWORD
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: MYSQL_DATABASE
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-pv-claim
kubectl apply -f deployment.yml
Checking
Now we can check if our deployment was successful:
kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
mysql-deployment   1/1     1            1           66m
If something's wrong you can always investigate with describe or logs:
kubectl describe deployment mysql-deployment
Name:                   mysql-deployment
Namespace:              default
CreationTimestamp:      Sun, 28 Jun 2020 17:02:00 +0000
Labels:                 app=mysql
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=mysql
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=mysql
  Containers:
   mysql:
    Image:      mysql:5.7
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'MYSQL_ROOT_PASSWORD' in secret 'mysql-secrets'>  Optional: false
      MYSQL_USER:           <set to the key 'MYSQL_USER' in secret 'mysql-secrets'>           Optional: false
      MYSQL_PASSWORD:       <set to the key 'MYSQL_PASSWORD' in secret 'mysql-secrets'>       Optional: false
      MYSQL_DATABASE:       <set to the key 'MYSQL_DATABASE' in secret 'mysql-secrets'>       Optional: false
    Mounts:
      /var/lib/mysql from mysql-data (rw,path="mysql")
  Volumes:
   mysql-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mysql-deployment-579b8bb767 (1/1 replicas created)
Events:          <none>
Or investigate pods
kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
mysql-deployment-579b8bb767-mk5jx   1/1     Running   0          69m

kubectl describe pod mysql-deployment-579b8bb767-mk5jx
Name:         mysql-deployment-579b8bb767-mk5jx
Namespace:    default
Priority:     0
Node:         worker4/192.168.50.15
Start Time:   Sun, 28 Jun 2020 17:02:00 +0000
Labels:       app=mysql
              pod-template-hash=579b8bb767
Annotations:  cni.projectcalico.org/podIP: 192.168.199.131/32
Status:       Running
IP:           192.168.199.131
IPs:
  IP:           192.168.199.131
Controlled By:  ReplicaSet/mysql-deployment-579b8bb767
Containers:
  mysql:
    Container ID:   docker://b755c731e9b72812040d62315a2499d05cdaa6b8425e6b357fa19f1e9d6aed2c
    Image:          mysql:5.7
    Image ID:       docker-pullable://mysql@sha256:32f9d9a069f7a735e28fd44ea944d53c61f990ba71460c5c183e610854ca4854
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 28 Jun 2020 17:02:02 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'MYSQL_ROOT_PASSWORD' in secret 'mysql-secrets'>  Optional: false
      MYSQL_USER:           <set to the key 'MYSQL_USER' in secret 'mysql-secrets'>           Optional: false
      MYSQL_PASSWORD:       <set to the key 'MYSQL_PASSWORD' in secret 'mysql-secrets'>       Optional: false
      MYSQL_DATABASE:       <set to the key 'MYSQL_DATABASE' in secret 'mysql-secrets'>       Optional: false
    Mounts:
      /var/lib/mysql from mysql-data (rw,path="mysql")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4wtnw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  mysql-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
  default-token-4wtnw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4wtnw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
Or logs from pod
kubectl logs mysql-deployment-579b8bb767-mk5jx
2020-06-28T17:02:13.695295Z 0 [Note] IPv6 is available.
2020-06-28T17:02:13.695350Z 0 [Note]   - '::' resolves to '::';
2020-06-28T17:02:13.695392Z 0 [Note] Server socket created on IP: '::'.
2020-06-28T17:02:13.695906Z 0 [Warning] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2020-06-28T17:02:13.703856Z 0 [Note] InnoDB: Buffer pool(s) load completed at 200628 17:02:13
2020-06-28T17:02:13.746239Z 0 [Note] Event Scheduler: Loaded 0 events
2020-06-28T17:02:13.746461Z 0 [Note] mysqld: ready for connections. Version: '5.7.30'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
Here we can see our MySQL server is up and running.
We can now test whether our secrets were applied by running the exact same exec syntax as in Docker. NEVER PROVIDE A PASSWORD ON THE COMMAND LINE - THIS IS JUST FOR DEMONSTRATION PURPOSES; if you pass just -p you will be prompted for the password.
kubectl exec -it mysql-deployment-579b8bb767-mk5jx -- mysql -u root -pMyPassword

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kubernetes_test    |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.02 sec)
We can see the initial database kubernetes_test was created. Let's also try to log in to it with the user and password we set up:
kubectl exec -it mysql-deployment-579b8bb767-mk5jx -- mysql -u django -pdjango kubernetes_test

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
Everything works as expected!!
Kubernetes NFS persistent volume
k8s_nfs_persistent_volume
Create an NFS persistent volume:
What you need
- NFS server - I have used the NFS server already installed on my QNAP NAS (you need to enable NO_ROOT_SQUASH in the share permissions)
- K8s cluster
Now, having your NFS share at 192.168.1.11:/PersistentVolume, you can check that it works with mount:
sudo mount -t nfs 192.168.1.11:/PersistentVolume /mnt/PersistentVolume
Later on you can secure access with a password.
If everything works fine, we need a PersistentVolume on our cluster.
persistentvolume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nfsvers=4.1
  nfs:
    path: /PersistentVolume/pv0001
    server: 192.168.1.11
  persistentVolumeReclaimPolicy: Retain
Apply the above YAML to the cluster:
kubectl apply -f persistentvolume.yml
Now we need to declare a PersistentVolumeClaim.
persistentvolumeclaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
Apply
kubectl apply -f persistentvolumeclaim.yml
Check if it has been bound:
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0001 100Gi RWX Retain Bound default/mysql-pv-claim 2d4h
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound pv0001 100Gi RWX 2d4h
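For completeness, this is roughly how the claim is consumed from a pod spec. The names match the MySQL pod described earlier (mysql-data, mysql-pv-claim, the "mysql" subPath), but treat it as a sketch of the volume-related fragment rather than the full deployment manifest:

      containers:
        - name: mysql
          image: mysql:5.7
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
              subPath: mysql          # shows up as path="mysql" in kubectl describe
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-pv-claim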
Build a fully working Zabbix server with database in seconds thanks to Docker
To install a Zabbix server quickly, Zabbix helps out by providing prebuilt Docker images of their product. There are lots of official Zabbix images on Docker Hub, which can be overwhelming: they are mixes of all the different possibilities, like Zabbix with MySQL, PostgreSQL or SQLite, served by nginx or Apache, plus the Java gateway. Depending on which stack is closest to you, you can easily build a docker-compose file that runs the selected stack in seconds. My pick was nginx + MySQL, so to set up a fully running Zabbix server we need 3 images:
mysql-server
zabbix-web - web interface
zabbix-server - main zabbix process responsible for polling and trapping data and sending notifications to users.
In addition you can add a Postfix mail server for notifying users, but it's not a must, as you can use your own mail server; if so, just remove the postfix service from the example below.
Notice: you may want to pin specific versions or use the Alpine variants for a production environment.
Create some directory. The directory name is crucial here for visibility and future maintenance of your containers, volumes and networks, as the name is used as a prefix for the containers created by docker-compose and for the volume directories, so it will be easier to identify later which volume belongs to which stack. On Ubuntu, named volumes are usually kept in /var/lib/docker/volumes, but you can mount any directory from the host by specifying an absolute or relative path in the service configuration. For instance, for MySQL in the example, to mount mysql_data_dir just outside of our containers folder:
    volumes:
      - '../mysql_data_dir:/var/lib/mysql'
Now, within the directory, create docker-compose.yml with the selected technologies. In my case it is:

# docker-compose.yml
version: '3'
services:
  db:
    image: mysql:latest
    restart: always
    expose:
      - '3336'
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbix_password'
      MYSQL_ROOT_HOST: '%'
    volumes:
      - 'mysql_data_dir:/var/lib/mysql'
  zabbix-server:
    image: zabbix/zabbix-server-mysql
    links:
      - "db:mysql"
      - "postfix:postfix"
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbix_password'   # must match the password set on the db service
      DB_SERVER_HOST: 'mysql'
  zabbix-web:
    image: zabbix/zabbix-web-nginx-mysql
    ports:
      - '7777:80'
    links:
      - "db:mysql"
      - "zabbix-server:zabbix-server"
      - "postfix:postfix"
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbix_password'   # must match the password set on the db service
      DB_SERVER_HOST: 'mysql'
      ZBX_SERVER_HOST: "zabbix-server"
      PHP_TZ: "Europe/London"
  postfix:
    image: catatnight/postfix
    hostname: support
    environment:
      - maildomain=mydomain.com
      - smtp_user=admin:my_password
    ports:
      - "25:25"
    expose:
      - "25"
    volumes:
      - /etc/nginx/ssl/postfix:/etc/postfix/certs
      - /etc/nginx/ssl/postfix:/etc/opendkim/domainkeys
volumes:
  mysql_data_dir:
    driver: local
The above is just enough to get the Zabbix server up and running in a couple of seconds. To do it, just run:
sudo docker-compose up
That's it!!! You now have Zabbix running on port 7777.
So what happened here: docker-compose up built and ran the containers; when the Zabbix server container started, it discovered there were no tables in MySQL and created them.
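To confirm the stack is healthy you can check the container status and follow the server log while it creates the schema (standard docker-compose commands, nothing Zabbix-specific):

sudo docker-compose ps
sudo docker-compose logs -f zabbix-server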
Now you just need to add agents/servers you want to monitor. Check out adding agent in separate post
Versions I've used in this example (Feb 2018):
Docker-compose: 1.17.0, build ac53b73
Docker: 17.09.1-ce, build 19e2cf6
Kernel: 4.13.0-36-generic
System: Ubuntu 16.04.3 LTS
Adding zabbix agent to server
Zabbix is a very powerful tool which uses agents (or SNMP) to monitor server resources. Adding an agent is easy, but I had a couple of problems when I used the agent straight from my Ubuntu (16.04.3) repo, as that agent had no encryption functionality (or at least it didn't recognize the TLS PSK configuration). So by installing the agent straight from the repo with "sudo apt-get update && sudo apt-get install zabbix-agent" I had limited functionality and unencrypted server-agent traffic. There are 2 options: install the Zabbix agent from the Zabbix repo, or use the Zabbix agent Docker container.

Adding the Zabbix agent to the host system. At the time of writing, 3.2 is the latest version, so please adjust the version depending on how old this article is.

wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
sudo dpkg -i zabbix-release_3.2-1+xenial_all.deb
sudo apt-get update
apt-get purge zabbix-agent     # remove previous agent if installed
apt-get install zabbix-agent
Now there are 3 basic options that need to be changed in the agent config file /etc/zabbix/zabbix_agentd.conf:
Server=<ip of zabbix server>
ServerActive=<ip of zabbix server>
Hostname=<my host name>
sudo service zabbix-agent restart
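Before touching the web UI you can check basic connectivity from the Zabbix server itself with zabbix_get (from the zabbix-get package); this assumes port 10050 on the agent is reachable from the server:

zabbix_get -s <agent ip> -k agent.ping
# should print: 1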
Add host to server through web interface:
On the server go to Configuration -> Hosts -> Create host, type in the host name, the visible name and the public IP address of your agent. Select a group and add the agent. Next, select templates to add the services you need to monitor (here Linux + MySQL: Template DB MySQL, Template OS Linux). After saving you should see a green ZBX availability label on the Hosts screen. Notice: I couldn't see the green ZBX agent icon until I added the Linux template (or the Zabbix agent template).
Security - setting up PSK encryption:
sh -c "openssl rand -hex 32 > /etc/zabbix/zabbix_agentd.psk" Now add below lines to /etc/zabbix/zabbix_agentd.conf TLSConnect=psk TLSAccept=psk #each identity id must be different for each serverr connected to one zabbix server TLSPSKIdentity=PSK SOMETHING TLSPSKFile=/etc/zabbix/zabbix_agentd.psk sudo service zabbix-agent restart Get generated key string: cat /etc/zabbix/zabbix_agentd.psk and add encryption in zabbix server web interface : In server go to Configuration-> Hosts -> my host->encryption
Select: Connections to host = PSK, Connections from host = PSK, PSK identity: PSK SOMETHING (same as in the zabbix agent config file), PSK: the generated hash (the content of the /etc/zabbix/zabbix_agentd.psk file on the agent). Now there should be a green PSK label and all our traffic will be encrypted.
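You can also verify the PSK setup from the command line with zabbix_get, which supports the same TLS options; this assumes you run it from a machine that has a copy of the key file (the identity and key below are the ones configured above):

zabbix_get -s <agent ip> -k agent.ping \
  --tls-connect psk \
  --tls-psk-identity "PSK SOMETHING" \
  --tls-psk-file /etc/zabbix/zabbix_agentd.psk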
Adding mysql monitoring option:
Add user credentials for the MySQL client on the agent server:

mysql> grant all privileges on *.* to zabbix@'%' identified by 'zabbixuserpassword';

Use localhost or the host you will be accessing MySQL from; '%' is just for test purposes, to eliminate authentication problems.
Off topic - something about MySQL remote connections and security. My best practice is not to have any remote access like @'%' to MySQL on any server I manage; it's just dangerous, and anyone can try brute-forcing a connection to our MySQL server. Another thing I've seen in many places: when admins create @'%' accesses, they use them without any encryption, so there is plain-text traffic coming from mysql-server/postgres straight to the user's computer, which is not good (MITM etc.). The best option would be to set up your MySQL server with an SSL certificate, but it's not a popular practice as it may be time consuming to set up and to connect to such a server (pretty easy in MySQL Workbench, though). A faster way to encrypt confidential MySQL traffic is to use an SSH tunnel, but there is a limitation: the user that needs access to the MySQL data also needs SSH access to the server. If that is an option, just define users with localhost as the source, like my_db_user@localhost; this is safer, as you can't guarantee MySQL users' competence, so the best practice is to avoid '%'. To double-secure this method, do not expose 3306 to the public and only allow localhost (unix socket) and 127.0.0.1 to connect through this port (remember how the mysql client switches between unix-socket and IP connections). In dockerized MySQL instances, when I need the port to be visible I just configure ports like 127.0.0.1:3306:3306, so it is visible to the host machine only. If the user won't have SSH access to the server, then the only option left is an SSL certificate. So remember: with user@'%' or even user@'some_ip', without SSL or SSH the traffic from mysql-server is still unencrypted.
OK, coming back to the MySQL monitoring config: add a [client] section to my.cnf in /etc/mysql or to /etc/mysql/conf.d/mysql.cnf
[client]
user = zabbix
password = zabbixuserpassword
port = 3326
host = 127.0.0.1
Link my.cnf into the Zabbix home directory:
mkdir -p /var/lib/zabbix/
cd /var/lib/zabbix
ln -sv /etc/mysql/my.cnf
service zabbix-agent restart
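You can test the MySQL item locally on the agent before checking the UI; this assumes the bundled userparameter_mysql.conf is enabled (see the note below about a bug in it):

zabbix_agentd -t mysql.ping
# should report 1 when mysqld is reachable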
Now you can add MySQL template items in the Zabbix server.
Select the Linux template as well to see the agent availability icon.
Bug in the default userparameter_mysql agent file
cat /etc/zabbix/zabbix_agentd.d/userparameter_mysql.conf - the fix is to redirect stderr to stdout so grep can match it later:
UserParameter=mysql.ping,HOME=/var/lib/zabbix mysqladmin ping 2>&1 | grep -c alive

Previously it was:

UserParameter=mysql.ping,HOME=/var/lib/zabbix mysqladmin ping | grep -c alive

so grep didn't work.
Zabbix stack with docker-compose.yml
Fully working zabbix server solution with UI and database in seconds
I wanted to install a Zabbix server quickly with Docker, but the number of Zabbix images (created by Zabbix) on Docker Hub just overwhelmed me. To set up a running Zabbix server we need 3 images:
* choice of SQL DB
* zabbix-web - web interface
* zabbix-server - main zabbix process responsible for polling and trapping data and sending notifications to users.
My choice of database was MySQL, so I created a docker-compose file to have a full stack for a running Zabbix server.
Notice: you may want to use the Alpine variants for a production environment.

docker-compose.yml:
version: '3'
services:
  db:
    image: mysql:latest
    restart: always
    expose:
      - '3336'
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbixpass'
      MYSQL_ROOT_HOST: '%'
    volumes:
      - 'mysql_data_dir:/var/lib/mysql'
  zabbix-server:
    image: zabbix/zabbix-server-mysql
    links:
      - "db:mysql"
      - "postfix:postfix"
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbixpass'
      DB_SERVER_HOST: 'mysql'
  zabbix-web:
    image: zabbix/zabbix-web-nginx-mysql
    ports:
      - '7777:80'
    links:
      - "db:mysql"
      - "zabbix-server:zabbix-server"
      - "postfix:postfix"
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'   # must match the db service
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbixpass'                # must match the db service
      DB_SERVER_HOST: 'mysql'
      ZBX_SERVER_HOST: "zabbix-server"
      PHP_TZ: "Europe/London"
  postfix:
    image: catatnight/postfix
    hostname: support
    environment:
      - maildomain=domain.com
      - smtp_user=admin:password
    ports:
      - "25:25"
      # - "465:465"
      # - "587:587"
    expose:
      - "25"
      # - "465"
      # - "587"
    volumes:
      - /etc/nginx/ssl/postfix:/etc/postfix/certs
      - /etc/nginx/ssl/postfix:/etc/opendkim/domainkeys
      # - ./deployment/config_files/main-postfix-live.cf:/etc/postfix/main.cf
    # networks:
    #   - backend
    # entrypoint: /docker-entrypoint.sh
volumes:
  mysql_data_dir:
    driver: local
The above is just enough to get the Zabbix server up and running in a couple of seconds. To run it, put the yml file into some directory (the directory name is important, as the volume created for MySQL will have this directory name as a prefix; volumes are usually stored in /var/lib/docker/volumes) and run:
sudo docker-compose up
That's it!!! You now have Zabbix running on port 7777.
So what happened here: docker-compose up built and ran the containers; when the Zabbix server container started, it discovered there were no tables in MySQL and created them.
Now you just need to add the agents/servers you want to monitor. Check out adding an agent in a separate post [here]
Versions I've used in this example (Feb 2018):
Docker-compose: 1.17.0, build ac53b73
Docker: 17.09.1-ce, build 19e2cf6
Kernel: 4.13.0-36-generic
GIT commands I've found useful
Check files changed between branches
git diff --name-status master..devel
Check changes on a file from a different branch/commit
git diff commit_hash -- filename
Same as above between 2 branches/commits
git diff commit_hash master -- filename
Check full file history
git log -p -- filename
Check who broke the production server:
git blame filename
Merge a branch as one commit (you need to commit afterwards; unlike a normal merge, it doesn't commit by default):
git merge --squash branch
List commits recorded in git's local reference log (useful for recovering lost commits)
git reflog
Take (checkout) a file from a different branch/commit
git checkout develop -- filename
git checkout commit_hash -- filename
Reset current branch to remote:
git reset --hard origin/current_branch
git reset --hard origin/master
Stash (set aside) changes which were not committed
git stash
git stash save -a
Restore a stash (picking the selected one)
git stash list
git stash pop stash@{0}
fabric - auto deployment script
Recently I wrote a Fabric deployment script; maybe someone will find it useful.
It makes it possible to run a "group execute" task with
fab live_servers pull restart
or single host
fab live1 pull
All we need to do is define a group or a single host as a function; afterwards I used the env_update decorator.
I know you could also duplicate tasks across separate servers, like fab live1 pull live2 pull, but I believe Fabric was written for distributed systems which have different app paths, users, etc.
Also, roledefs with extra dict keys didn't work for me. I want to keep simple single/multiple host deployment commands like: fab live_servers pull, fab test pull
from fabric.api import run, env, local, get, cd
from fabric.tasks import execute
import inspect
import sys
import os
import re
from StringIO import StringIO

# fabfile author: Grzegorz Stencel
# usage:
#   run: fab help for examples
#   fab staging svnxapp:app=holdings_and_quotes,layout.py,permissions.py restart
#   fab test svnxlib

SERVER_BRANCHES = {
    'live': 'master',
    'sit': 'sit',
    'uat': 'uat',
    'live2': 'master',
    'live3': 'master'
}

# MAIN CONF
SERVERS = {
    'local': {
        'envname': 'local',
        'user': 'greg',
        'host': 'localhost',
        'host_string': 'localhost',
        'path': os.environ.get('SITE_ROOT', '/opt/myapp/test'),
        'www_root': 'http://localhost:8081/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'test': {
        'envname': 'test',
        'user': 'root',
        'host': 'myapp-test.stencel.com',
        'host_string': 'myapp-test.stencel.com',
        'path': '/var/www/myapp/test/',
        'www_root': 'http://myapp-test.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'uat': {
        'envname': 'uat',
        'user': 'myapp',
        'host': 'uat.myapp2.stencel.com',
        'host_string': 'uat.myapp2.stencel.com',
        'key_filename': 'deploy/keys/id_rsa',
        'path': '/opt/myapp/uat/',
        'www_root': 'http://uat.myapp2.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'sit': {
        'envname': 'sit',
        'user': 'myapp',
        'host': 'sit.myapp2.stencel.com',
        'host_string': 'sit.myapp2.stencel.com',
        'key_filename': 'deploy/keys/id_rsa',
        'path': '/opt/myapp/sit/',
        'www_root': 'http://sit.myapp2.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'live': {
        'envname': 'live',
        'user': 'myapp',
        'host': '10.10.10.10',
        'host_string': 'myapp2.stencel.com',
        'path': '/opt/myapp/live/',
        'www_root': 'http://myapp2.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'live2': {
        'envname': 'live2',
        'user': 'root',
        'host': '10.10.10.11',
        'host_string': 'live2.stencel.com',
        'path': '/var/www/myapp/live/',
        'www_root': 'http://myapp2.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'live3': {
        'envname': 'live3',
        'user': 'root',
        'host': '10.10.10.12',
        'host_string': 'live3.stencel.com',
        'path': '/var/www/myapp/live/',
        'www_root': 'http://myapp2.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
}

LIVE_HOSTS = ['live', 'live2', 'live3']


def list_hosts():
    """ Lists available myapp hosts """
    print " Single hosts (if you want to pull from svn only to one of them):"
    print '  %s' % '\n  '.join([a for a in SERVERS])
    print " Multiple hosts:"
    print '  live (which contains %s)' % ','.join([a for a in LIVE_HOSTS])


def test():
    """ single host definition, "fab test restart" will restart this one host """
    env.update(dict(SERVERS['test']))


def localhost():
    """ single host definition, "fab localhost restart" will restart this one host """
    env.update(dict(SERVERS['local']))


def uat():
    """ single host definition, "fab uat restart" will restart this single host """
    env.update(dict(SERVERS['uat']))


def sit():
    """ single host """
    env.update(dict(SERVERS['sit']))


# SERVER GROUPS DEFINITION
def live():
    """ group of hosts - running "fab live restart" will restart all live servers """
    env['hosts'] = [SERVERS[a]['host'] for a in LIVE_HOSTS]
    # env.update(dict(SERVERS['staging']))


def env_update(func):
    """ Decorator - needs to be added to each task in the fabfile - for multiple host task execution """
    def func_wrapper(*args, **kwargs):
        if not len(env.hosts):
            return func(*args, **kwargs)
        else:
            # pick the server config matching the host currently being processed
            env.update(dict(SERVERS[filter(lambda x: SERVERS[x]['host'] == env.host, SERVERS)[0]]))
            func(*args, **kwargs)
    return func_wrapper


@env_update
def bundle_media():
    """ bundles media like css and js to one file. example: fab test bundle_media """
    # export DJANGO_SETTINGS_MODULE=settings
    # run("cd {0} && source settings/{1}-config.sh && python scripts/bundle_media.py".format(env.path, env.envname))
    run("source /usr/share/virtualenvwrapper/virtualenvwrapper.sh && workon {0} && python scripts/bundle_media.py".format(
        "%s-myapp" % env.envname if env.envname != 'live' else 'MyApp-test'))  # change live venv to be live-MyApp


def _valid_branch(env):
    branch = run("cd {0} && git rev-parse --abbrev-ref HEAD".format(env.path))
    return branch == SERVER_BRANCHES[env.envname] and not env.envname == 'local'


@env_update
def pull(*args, **kwargs):
    if _valid_branch(env):
        with cd(env.path):
            run("git fetch origin")
            run("git reset --hard origin/%s" % SERVER_BRANCHES[env.envname])
    else:
        print "Error: Server is checked out to the wrong branch!!!"
    # run('git fetch --quiet')
    # run('git fetch --tags --quiet')


@env_update
def reload():
    """ Reload specified servers - kills unused gunicorn workers but lets workers with old code finish processing. """
    bundle_media()
    # if env.envname in ('uat', 'staging', 'live'):
    f = StringIO()
    get("/opt/myapp/%s/pid" % env.envname, f)
    pid = re.search(r'\d+', f.getvalue()).group()
    run("ps aux | grep gunicorn | grep %s | grep master | grep -v grep | awk '{print $2}'" % env.envname)
    run("kill -HUP %s" % pid)


@env_update
def restart():
    """ Hard restarts specified servers """
    bundle_media()
    run("ps aux | grep gunicorn | grep %s | grep master | grep -v grep | awk '{print $2}'" % env.envname)
    run("supervisorctl stop myapp-%s && supervisorctl start MyApp-%s" % (env.envname, env.envname))
    run("ps aux | grep gunicorn | grep %s | grep master | grep -v grep | awk '{print $2}'" % env.envname)


def help():
    fabric_functions = ['run', 'execute', 'local', 'func_wrapper']
    functions = set([obj.__name__ if obj.__name__ not in fabric_functions else ''
                     for name, obj in inspect.getmembers(sys.modules[__name__]) if inspect.isfunction(obj)])
    functions.remove('')
    print "usage: \n fab [host/group of hosts] [commands] (optional command with arguments command:kwarg=val,arg1,arg2,arg3)"
    print "\navailable servers:"
    list_hosts()
    print "\ncommands:\n %s" % ', '.join([a for a in functions])
    print "\nexamples:\n staging svnxapp:app=holdings_and_quotes,layout.py,permissions.py restart"
    print " fab test restart"
    print " fab staging svnxapp:app=holdings_and_quotes,lib/quote.py,layout.py,models.py"
    print " fab staging svnxapp:app=holdings_and_quotes,lib/quote.py restart"
    print " fab test build"
    print " fab test bundle_media restart"
    print " For svnx whole app (comma in the end):"
    print " fab test svnxapp:app=medrep,"
    print " For global lib:"
    print " fab test svnxlib"
    print " For whole global media:"
    print " fab test svnxmedia:"
    print " For global media file:"
    print " fab test svnxmedia:javascript"
    print " fab test svnxmedia:javascript/company/checklist.js"
    print "\nIf a .js file is in args like: fab staging svnxapp:app=holdings_and_quotes,media/js/quote.js,layout.py,models.py"
    print "It will bundle media itself"
    print "Restart test staging without params:\n fab restart"
    for f in functions:
        print f
        print globals()[f].__doc__
        print "\n"


@env_update
def accessguni():
    run("tail /var/log/myapp/access-%s.log" % env.envname.upper())


@env_update
def accessgunilive():
    run("tail -f /var/log/myapp/access-%s.log" % env.envname.upper())


@env_update
def errorguni():
    run("tail /var/log/myapp/error-%s.log" % env.envname.upper())


@env_update
def errorgunilive():
    run("tail -f /var/log/myapp/error-%s.log" % env.envname.upper())


def hostname():
    run('uname -a')


@env_update
def uptime():
    run('uptime')