Thursday, September 8, 2022

Set up a Kubernetes (k8s) cluster with kubeadm and Calico in a production environment

References:

  1. Creating a cluster with kubeadm

  2. GitHub - mmumshad/kubernetes-the-hard-way: Bootstrap Kubernetes the hard way on Vagrant on Local Machine. No scripts.

  3. Install Kubernetes from Scratch [2] - Provisioning infrastructure for Kubernetes

  4. Install Calico networking and network policy for on-premises deployments

  5. Release v3.18.4 · projectcalico/calicoctl (calicoctl)

  6. Overlay networking

 

I. Prerequisites: run on all nodes.

  1. Set up Docker (or any other container runtime) on all nodes

(see: Install Docker Engine on CentOS)

2. Let iptables see bridged traffic on all nodes:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# load the module now (modules-load.d only applies at boot)
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
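To verify the module is loaded and the sysctl values took effect, a quick optional check:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward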

3. Change the Docker cgroup driver to systemd:

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Then restart Docker, as shown below.
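A minimal sketch of the restart, plus a check that the systemd cgroup driver is actually active:

sudo systemctl daemon-reload
sudo systemctl restart docker
docker info | grep -i cgroup   # should print: Cgroup Driver: systemd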

4. Install kubelet, kubeadm, and kubectl

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0 --disableexcludes=kubernetes
sudo systemctl enable kubelet.service
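To confirm the pinned versions were installed, a quick check:

kubeadm version -o short
kubelet --version
kubectl version --client --short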

5. Create a file with the following content (this stops NetworkManager from managing Calico's interfaces):

# vi /etc/NetworkManager/conf.d/99-unmanaged-devices.conf

[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico

Run the command below to apply it:

systemctl restart NetworkManager
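An illustrative check that the change was picked up (the Calico interfaces only exist after Calico is installed, at which point they should show as unmanaged):

nmcli device status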

II. Demo installation environment:

  • 1 load balancer: 10.1.1.33 (VIP: 10.1.1.34)

  • 3 master servers (10.1.1.30, 10.1.1.31, 10.1.1.33)

  • 1 worker node (10.1.1.32)

III. Set up the load balancer

  • Set up keepalived (covered in another guide)

  • Set up haproxy (covered in another guide)

Main content of haproxy.cfg:

 

frontend fe-apiserver
    bind 10.1.1.34:6444
    mode tcp
    option tcplog
    default_backend be-apiserver

backend be-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin

    server master-1 10.1.1.30:6443 check fall 3 rise 2
    server master-2 10.1.1.31:6443 check fall 3 rise 2
    server master-3 10.1.1.33:6443 check fall 3 rise 2
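Before restarting haproxy, you can validate the configuration file (standard haproxy flags, not specific to this guide):

haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy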

 

IV. Set up the master nodes

There are several ways to set up the master nodes. If you want to learn how to install each component by hand, follow this link:


GitHub - mmumshad/kubernetes-the-hard-way: Bootstrap Kubernetes the hard way on Vagrant on Local Machine. No scripts.

In this tutorial I will describe how to install it using kubeadm.

SSH into master01 (10.1.1.30) and run:

LOADBALANCER="10.1.1.34"
kubeadm init --control-plane-endpoint "$LOADBALANCER:6444" --upload-certs --pod-network-cidr 192.168.96.0/19 --service-cidr 192.168.96.0/19

--pod-network-cidr: the network for pods. --service-cidr: if you want CoreDNS and the API server to be in the same subnet, use the same subnet as --pod-network-cidr.
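Later, once the control plane is up, you can confirm the CIDRs kubeadm recorded; the ClusterConfiguration lives in the kubeadm-config ConfigMap (the grep is just for convenience):

kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i subnet
# expect: podSubnet: 192.168.96.0/19 and serviceSubnet: 192.168.96.0/19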

Wait for kubeadm to finish; it will print output like the following:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 10.1.1.34:6444 --token 2rgzgq.hzsc9n4tgmyohca4 \
    --discovery-token-ca-cert-hash sha256:b37088b13028cb53b05b2e6ac79eaaccd5c53d1030df2df873b96d1150395823 \
    --control-plane --certificate-key cf9f8a63ab1d642dc6cdd7e74c12456f0f3e1498bf9a47e0d4c5aeb9a66cfb2a

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.1.1.34:6444 --token 2rgzgq.hzsc9n4tgmyohca4 \
    --discovery-token-ca-cert-hash sha256:b37088b13028cb53b05b2e6ac79eaaccd5c53d1030df2df873b96d1150395823


Then run:

 

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run this command to get the status of the nodes:

kubectl get node

Note: the master node's status is still NotReady at this point.
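The output should look roughly like this (name, age, and exact roles depend on your setup; NotReady is expected until a CNI plugin is installed):

NAME       STATUS     ROLES                  AGE   VERSION
master01   NotReady   control-plane,master   60s   v1.21.0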

Log in to master02 and master03, then run:

kubeadm join 10.1.1.34:6444 --token 2rgzgq.hzsc9n4tgmyohca4 \
    --discovery-token-ca-cert-hash sha256:b37088b13028cb53b05b2e6ac79eaaccd5c53d1030df2df873b96d1150395823 \
    --control-plane --certificate-key cf9f8a63ab1d642dc6cdd7e74c12456f0f3e1498bf9a47e0d4c5aeb9a66cfb2a
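If you also want to run kubectl from master02 and master03, repeat the kubeconfig step there (the same commands kubeadm printed for master01):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config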

V. Join the worker nodes

Log in to each worker node, then run:

kubeadm join 10.1.1.34:6444 --token 2rgzgq.hzsc9n4tgmyohca4 \
    --discovery-token-ca-cert-hash sha256:b37088b13028cb53b05b2e6ac79eaaccd5c53d1030df2df873b96d1150395823

Check the node status again from a master:

 

kubectl get node

The nodes are still NotReady because we have not yet installed a network plugin for the cluster.

VI. Set up networking for Kubernetes

There are many third-party CNI plugins, but in this tutorial I chose Calico.

Log in to a master node, then run the commands below:

curl https://docs.projectcalico.org/v3.18/manifests/calico.yaml -O
kubectl apply -f calico.yaml
# check whether the setup has finished with:
kubectl -n kube-system get pod -w | grep calico

Once the network is installed, the status of every node should become Ready:

kubectl get node

Finish!


Updated:

Calico enables IPIP encapsulation by default. I switched to VXLAN encapsulation because I think it is more stable than IPIP.

  1. Download calicoctl from: Release v3.18.4 · projectcalico/calicoctl (a combined sketch of steps 1-4 follows this list)

  2. Create a file xyz.yaml with the content below:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.96.0/19
  vxlanMode: CrossSubnet
  natOutgoing: true

  3. Run this command to apply it: calicoctl apply -f xyz.yaml

  4. Check: calicoctl get ippool -o wide
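A minimal sketch of steps 1-4 in one place; the download URL assumes the usual calicoctl release-asset layout for v3.18.4, so verify it against the release page:

curl -L https://github.com/projectcalico/calicoctl/releases/download/v3.18.4/calicoctl -o calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/
calicoctl apply -f xyz.yaml          # step 3: apply the IPPool
calicoctl get ippool -o wide         # step 4: verify vxlanMode is set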


Thursday, September 1, 2022

Resolved: Nginx reverse proxy for AWS IoT MQTT over TLS

Problem: you cannot use an HTTP reverse proxy in front of AWS IoT MQTT; a Java client would return errors like these:
software.amazon.awssdk.crt.mqtt.MqttException: socket is closed.

at software.amazon.awssdk.crt.mqtt.MqttClientConnection.onConnectionComplete(MqttClientConnection.java:140)


Solution:


You must use the nginx ngx_stream_proxy_module (acting like a network load balancer). Here is a template that resolves the issue:


stream {
    map $ssl_preread_server_name $domain {
        stg-iot.yourdomain  stg-iot;
        iot.yourdomain      prod-iot;
    }

    upstream stg-iot {
        server IoT-xxxxxx..ap-your_region-1.amazonaws.com:443;
    }

    upstream prod-iot {
        server IoT-xxxxxx..ap-your_region-1.amazonaws.com:443;
    }

    map $ssl_server_name $targetCert {
        stg-iot.yourdomain /etc/nginx/ssl/star_yourdomain.crt;
        iot.yourdomain     /etc/nginx/ssl/star_yourdomain.crt;
    }

    map $ssl_server_name $targetCertKey {
        stg-iot.yourdomain /etc/nginx/ssl/star_yourdomain.key;
        iot.yourdomain     /etc/nginx/ssl/star_yourdomain.key;
    }

    server {
        listen 443;
        ssl_certificate     $targetCert;
        ssl_certificate_key $targetCertKey;
        proxy_pass $domain;
        ssl_preread on;
    }
}
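After deploying the config, a quick way to sanity-check the SNI routing with openssl; this sketch assumes you run it on the proxy host itself (127.0.0.1 and the host names are placeholders, substitute your own):

sudo nginx -t && sudo nginx -s reload
# each -servername value should be routed to the matching upstream
openssl s_client -connect 127.0.0.1:443 -servername stg-iot.yourdomain </dev/null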

===============================================================

Feel free to use them without concern! 


Nguyen Si Nhan