
Thursday, September 8, 2022

Set up a Kubernetes (k8s) cluster with kubeadm and Calico in a production environment

References :

  1. Creating a cluster with kubeadm

  2. GitHub - mmumshad/kubernetes-the-hard-way: Bootstrap Kubernetes the hard way on Vagrant on Local Machine. No scripts.

  3. Install Kubernetes from Scratch [2] - Provisioning infrastructure for Kubernetes

  4. Install Calico networking and network policy for on-premises deployments

  5. Release v3.18.4 · projectcalico/calicoctl ( calicoctl )

  6. Overlay networking


I. Prerequisites: run on all nodes.

  1. Set up Docker (or any other container runtime) on all nodes

(see Install Docker Engine on CentOS)

2. Let iptables see bridged traffic on all nodes:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
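After sudo sysctl --system, a quick sanity check can confirm the settings took effect. This is a minimal sketch; it reads /proc directly, so it works even without the sysctl binary on the PATH:

```shell
# Check that IP forwarding and the bridge module are active.
# ip_forward should print 1 once the settings above are applied.
cat /proc/sys/net/ipv4/ip_forward
if lsmod | grep -q br_netfilter; then
    echo "br_netfilter loaded"
else
    echo "br_netfilter NOT loaded"
fi
```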

3. Change the Docker cgroup driver to systemd:

sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Then restart Docker: sudo systemctl restart docker
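A typo in daemon.json will stop Docker from starting, so it is worth validating the JSON before installing it. A sketch using a scratch directory (assumes python3 is available):

```shell
# Write daemon.json to a scratch directory and validate it before
# copying it to /etc/docker.
tmpdir=$(mktemp -d)
cat <<EOF > "$tmpdir/daemon.json"
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
# json.tool exits non-zero on invalid JSON.
python3 -m json.tool "$tmpdir/daemon.json" > /dev/null && echo "daemon.json is valid"
```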

4. Install kubelet, kubeadm, and kubectl

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0 --disableexcludes=kubernetes
systemctl enable kubelet.service

5. Create a file with the following content:

# vi /etc/NetworkManager/conf.d/99-unmanaged-devices.conf

[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico

Run the command below to apply it:

systemctl restart NetworkManager

II. Demo installation environment:

  • 1 load balancer (VIP: )

  • 3 master servers ( , , )

  • 1 worker node ( )

III. Setup load balancer

  • Setup keepalived ( another guide )

  • Setup haproxy ( another guide )

Main content of haproxy.cfg:


frontend fe-apiserver
    bind
    mode tcp
    option tcplog
    default_backend be-apiserver

backend be-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin

    server master-1  check fall 3 rise 2
    server master-2  check fall 3 rise 2
    server master-3  check fall 3 rise 2
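The listen address and backend IPs were stripped from the config above. As an illustration only, a filled-in version might look like the following, where all IP addresses and the 6444 front-end port are hypothetical values to adapt:

```
frontend fe-apiserver
    bind 0.0.0.0:6444
    mode tcp
    option tcplog
    default_backend be-apiserver

backend be-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin

    server master-1 10.0.0.11:6443 check fall 3 rise 2
    server master-2 10.0.0.12:6443 check fall 3 rise 2
    server master-3 10.0.0.13:6443 check fall 3 rise 2
```

Running haproxy -c -f /etc/haproxy/haproxy.cfg validates the file before reloading the service.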


IV. Set up master nodes

There are several ways to set up a master node. If you want to learn how to install each component by hand, follow this link:

GitHub - mmumshad/kubernetes-the-hard-way: Bootstrap Kubernetes the hard way on Vagrant on Local Machine. No scripts.

In this tutorial I will describe how to install it using kubeadm.

SSH into master01 and run:

kubeadm init --control-plane-endpoint "$LOADBALANCER:6444" --upload-certs --pod-network-cidr --service-cidr

--pod-network-cidr: the network range for pods.
--service-cidr: the network range for services; if you want CoreDNS and the kube-apiserver to share the pod subnet, use the same value as --pod-network-cidr.
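The CIDR values were omitted above. As an illustration with hypothetical values (192.168.0.0/16 is Calico's default pool, 10.96.0.0/12 is kubeadm's default service range; the VIP is made up), the full command would be:

```shell
# All values here are examples; substitute your own VIP and ranges.
LOADBALANCER=10.0.0.10            # hypothetical keepalived VIP
POD_CIDR=192.168.0.0/16           # hypothetical pod network
SVC_CIDR=10.96.0.0/12             # hypothetical service network
# Printed with echo for review; drop the echo to actually run it.
echo kubeadm init \
  --control-plane-endpoint "$LOADBALANCER:6444" \
  --upload-certs \
  --pod-network-cidr "$POD_CIDR" \
  --service-cidr "$SVC_CIDR"
```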

Wait for kubeadm to finish; it will print output like the following:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join --token 2rgzgq.hzsc9n4tgmyohca4 \
    --discovery-token-ca-cert-hash sha256:b37088b13028cb53b05b2e6ac79eaaccd5c53d1030df2df873b96d1150395823 \
    --control-plane --certificate-key cf9f8a63ab1d642dc6cdd7e74c12456f0f3e1498bf9a47e0d4c5aeb9a66cfb2a

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join --token 2rgzgq.hzsc9n4tgmyohca4 \
    --discovery-token-ca-cert-hash sha256:b37088b13028cb53b05b2e6ac79eaaccd5c53d1030df2df873b96d1150395823

Then:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run this command to get the status of the nodes:

kubectl get node

Note: the status of the master node will still be NotReady at this point.

Log in to master02 and master03, then run:

kubeadm join --token 2rgzgq.hzsc9n4tgmyohca4 \
    --discovery-token-ca-cert-hash sha256:b37088b13028cb53b05b2e6ac79eaaccd5c53d1030df2df873b96d1150395823 \
    --control-plane --certificate-key cf9f8a63ab1d642dc6cdd7e74c12456f0f3e1498bf9a47e0d4c5aeb9a66cfb2a

V. Join worker nodes

Log in to each worker node, then run:

kubeadm join --token 2rgzgq.hzsc9n4tgmyohca4 \
    --discovery-token-ca-cert-hash sha256:b37088b13028cb53b05b2e6ac79eaaccd5c53d1030df2df873b96d1150395823



kubectl get node

The node status is still NotReady because we have not yet installed a network plugin for the cluster.

VI. Set up networking for Kubernetes

There are many third-party CNI plugins available, but in this tutorial I choose Calico.

Log in to a master node, then run the commands below:

curl -O
kubectl apply -f calico.yaml
# check whether the setup is finished with:
kubectl -n kube-system get pod -w | grep calico

After the network is installed, the status of the cluster nodes should be Ready:

kubectl get node



Calico enables IPIP encapsulation by default; I switch to VXLAN encapsulation because I find it more stable than IPIP.

  1. Download calicoctl from : Release v3.18.4 · projectcalico/calicoctl

  2. Create a file xyz.yaml with content like the below:

apiVersion:
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr:
  vxlanMode: CrossSubnet
  natOutgoing: true

  3. Run this command to apply it: calicoctl apply -f xyz.yaml

  4. Check: calicoctl get ippool -o wide
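For reference, a filled-in version of the pool could look like the following. The apiVersion shown is the one used by calicoctl v3 resources; the cidr is a hypothetical example and must match your --pod-network-cidr:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16    # hypothetical; use your pod network CIDR
  vxlanMode: CrossSubnet
  natOutgoing: true
```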

Thursday, September 1, 2022

Resolved : Nginx reverse proxy for AWS IoT MQTT over TLS

Problem: you cannot use an HTTP reverse proxy for AWS IoT MQTT; a Java client would return errors like "socket is closed".


Resolution:

You must use the nginx module ngx_stream_proxy_module (acting like a network load balancer). This is a template to resolve it:


map $ssl_preread_server_name $domain {
    stg-iot.yourdomain  stg-iot;
    iot.yourdomain      prod-iot;
}

upstream stg-iot {
}

upstream prod-iot {
}

map $ssl_server_name $targetCert {
    stg-iot.yourdomain  /etc/nginx/ssl/star_yourdomain.crt;
    iot.yourdomain      /etc/nginx/ssl/star_yourdomain.crt;
}

map $ssl_server_name $targetCertKey {
    stg-iot.yourdomain  /etc/nginx/ssl/star_yourdomain.key;
    iot.yourdomain      /etc/nginx/ssl/star_yourdomain.key;
}

server {
    listen 443;
    ssl_certificate     $targetCert;
    ssl_certificate_key $targetCertKey;
    proxy_pass $domain;
    ssl_preread on;
}


Feel free to use them without concern! 
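The upstream blocks in the template were left without their server entries. Hypothetically, they would point at your AWS IoT endpoints on the MQTT/TLS port 8883; the hostnames and region below are placeholders:

```nginx
upstream stg-iot {
    server example-stg-ats.iot.ap-southeast-1.amazonaws.com:8883;
}

upstream prod-iot {
    server example-ats.iot.ap-southeast-1.amazonaws.com:8883;
}
```

After editing, nginx -t validates the configuration before a reload.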


Nguyen Si Nhan

Saturday, June 27, 2015

Logrotate with nginx

/path_to_accesslog/access.log {
    missingok
    notifempty
    size 100000k
    daily
    create 0644 www root
    postrotate
        [ ! -f /etc/nginx/logs/ ] || kill -USR1 `cat /etc/nginx/logs/`
    endscript
}


missingok : do not output an error if the log file is missing
notifempty : do not rotate if the log file is empty
size : rotate the log file if it is bigger than 100000 KB
daily : rotate daily
create 0644 www root : create the new log file with permission 0644, owner www and group root.
[ ! -f /etc/nginx/logs/ ] || kill -USR1 `cat /etc/nginx/logs/`

=> tells nginx to reopen its log files.
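The guard in that line only sends the signal when the pid file exists, so rotation does not fail while nginx is stopped. A small sketch of the pattern, using a throwaway pid file pointing at the current shell instead of the real (elided) nginx pid path:

```shell
# Simulate nginx.pid with the current shell's PID and show that the
# guarded kill becomes a no-op once the pid file is gone.
pidfile=$(mktemp)
echo $$ > "$pidfile"
trap 'echo "got USR1 (nginx would reopen its logs)"' USR1
[ ! -f "$pidfile" ] || kill -USR1 "$(cat "$pidfile")"   # signal delivered
rm -f "$pidfile"
[ ! -f "$pidfile" ] || kill -USR1 "$(cat "$pidfile")"   # no-op: file missing
```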


Nguyen Si Nhan