
You should see a message like "You can now join any number of machines by running the following on each node as root:". Save the kubeadm join command it gives you; we will use it to join the worker node to the cluster, but we don't want to do that just yet. Note: the --pod-network-cidr=10.244.0.0/16 option is a requirement for Flannel, so don't change that network address if you use Flannel; this guide uses Calico, which is initialized with --pod-network-cidr=192.168.0.0/16 instead.
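It can be handy to save the printed join command to a file so it is not lost in the terminal scrollback. A minimal sketch, using hypothetical kubeadm init output (the IP address, port, and token below are made up, and a real join command also carries a --discovery-token-ca-cert-hash flag):

```shell
# Sample (hypothetical) tail of the `kubeadm init` output.
init_output='Your Kubernetes control-plane has initialized successfully!
You can now join any number of machines by running the following on each node as root:
  kubeadm join 10.0.0.1:6443 --token abcdef.0123456789abcdef'

# Keep only the join command in a file for later use on the worker node.
echo "$init_output" | grep 'kubeadm join' | sed 's/^ *//' > /tmp/join-command.sh
cat /tmp/join-command.sh
```

On a real master node you would pipe the actual kubeadm init output instead of the sample text, or regenerate the command later with kubeadm token create --print-join-command.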
Install a Kubernetes cluster on RHEL 7
Typically, you would need to install the CNI packages, but they're already installed in this case. Create a proper yum repo file so that yum can install the components of Kubernetes (fill in the baseurl and gpgkey values for your package mirror):

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=
EOF

On the master node, initialize the cluster:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=<error-list>
# Do this only if proper CPU cores are available:
# sudo kubeadm init --pod-network-cidr=192.168.0.0/16

This will also generate a kubeadm join command with some tokens. Copy the kubeadm join command from the output of kubeadm init on the master node.

After initializing the cluster on the master node, the following commands will appear on the output screen of the master node. Either copy and paste them in the same terminal, or copy them from here and apply them on the master's terminal:

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

On the master node as well as the worker node, initialize KUBECONFIG so that kubectl commands can be executed. If it is not working, try reloading the root profile and then re-run the previous command.

Now our cluster is created; you can see the nodes by using the command below:

kubectl get nodes

The output of the command above will show both nodes in the NotReady state. That is because Kubernetes needs an "overlay network" so that pod-to-pod communication can happen properly. This overlay network is created by third-party CNI plugins. We are going to use Calico to create that overlay network.

Create an overlay network using the Calico CNI plugin on the master node. For Calico networking, copy and paste this command and apply it on the master node only:

sudo kubectl apply -f <calico.yaml manifest URL>

Take a pause of two minutes and see if the nodes are ready. Run this on the master node:

kubectl get nodes

You should be able to see the list of two nodes, and both should be in the Ready state. You should also be able to see a lot of pods in the output of kubectl get pods --all-namespaces.
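The fixed two-minute pause can be replaced by checking kubectl get nodes until nothing reports NotReady. A sketch, where kubectl_get_nodes is a stand-in function echoing hypothetical output (node names and versions are made up; on a real master you would call kubectl get nodes directly, inside a loop with a sleep):

```shell
# Stand-in for `kubectl get nodes`; the names and versions are hypothetical.
kubectl_get_nodes() {
  printf '%s\n' \
    'NAME      STATUS   ROLES    AGE   VERSION' \
    'master    Ready    master   5m    v1.15.3' \
    'worker    Ready    <none>   2m    v1.15.3'
}

# A node that has not joined the overlay network yet shows STATUS NotReady.
if kubectl_get_nodes | grep -q 'NotReady'; then
  echo 'some nodes are still NotReady'
else
  echo 'all nodes Ready'
fi
```

In a real script you would wrap the check in an until loop with a short sleep between attempts instead of a single fixed pause.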

Install kubelet, kubeadm and kubectl, and start the kubelet daemon:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet

Only on the master node: initialize the cluster.
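The --disableexcludes=kubernetes flag matters because the Kubernetes repo file commonly carries an exclude= line that pins kubelet, kubeadm and kubectl against accidental yum updates; the flag lifts that exclusion for this one transaction. A sketch using hypothetical repo-file contents:

```shell
# Hypothetical kubernetes.repo contents showing the exclude line that
# `yum install --disableexcludes=kubernetes` temporarily bypasses.
repo='[kubernetes]
name=Kubernetes
enabled=1
exclude=kubelet kubeadm kubectl'

# Show which packages the repo excludes from normal yum operations.
echo "$repo" | grep '^exclude='
# prints: exclude=kubelet kubeadm kubectl
```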

Henceforward we will call them the master and worker nodes. Hope you are ready with everything written in the prerequisites. There are a few commands that we need to run on the master as well as the worker node.

On both the master and worker nodes: become the root user. Also, enable the docker service so that docker starts on system restarts:

systemctl enable docker && systemctl start docker

Create proper yum repo files so that we can use yum commands to install the components of Kubernetes. Kubernetes also needs access to the kernel's iptables and ip6tables, so we need to make some more modifications; the sysctl command is used to modify kernel parameters at runtime.
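The kernel changes referred to above are usually the bridge-netfilter sysctls from the Kubernetes install docs, which make bridged pod traffic visible to iptables/ip6tables. A sketch of the conventional file (apply it with sysctl --system as root):

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```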

Decide which EC2 instance will be the master/manager and which will be the worker/slave.

