
For this setup we will use the following three hosts:

dkmanager (172.168.10.70) – It will act as the manager, which will manage the Docker engines or hosts (worker nodes), and it will work as a Docker engine as well.
workernode1 (172.168.10.80) – It will act as a Docker engine or worker node.
workernode2 (172.168.10.90) – It will act as a Docker engine or worker node.

Update the following lines in the /etc/hosts file on all the servers:

172.168.10.70 dkmanager
172.168.10.80 workernode1
172.168.10.90 workernode2

Step:1 Install Docker Engine on all the hosts

First set the docker repository and then run the beneath command on all the hosts.

~]# yum install docker-ce docker-ce-cli containerd.io -y

Repeat the above steps on workernode1 and workernode2.

Note: At the time of writing this article Docker Version 1.13 was available.
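
The repository setup itself is not shown above; a minimal sketch of what "set the docker repository" usually looks like on CentOS/RHEL (assuming the upstream docker-ce repository and internet access on the hosts) is:

~]# yum install -y yum-utils
~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Once the packages are installed, start and enable the docker service on each host:

~]# systemctl start docker
~]# systemctl enable docker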

Step:2 Open Firewall Ports on Manager and Worker Nodes

Open the following ports in the OS firewall on the Docker manager using the below commands:

~]# firewall-cmd --permanent --add-port=2376/tcp
~]# firewall-cmd --permanent --add-port=2377/tcp

Restart the docker service on the docker manager:

~]# systemctl restart docker

Open the following ports on each worker node and restart the docker service:

~]# firewall-cmd --permanent --add-port=2376/tcp
~]# firewall-cmd --permanent --add-port=7946/tcp
~]# firewall-cmd --permanent --add-port=7946/udp
~]# firewall-cmd --permanent --add-port=4789/udp
~]# firewall-cmd --permanent --add-port=80/tcp
~]# systemctl restart docker
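
One detail worth adding to the steps above: rules added with --permanent are not applied to the running firewall until firewalld is reloaded. Run the below command on the manager and on each worker node after adding the ports, before restarting docker:

~]# firewall-cmd --reload
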
Step:3 Initialize the swarm or cluster using 'docker swarm init' command

Run the below command from the manager node (dkmanager) to initialize the cluster:

~]# docker swarm init --advertise-addr 172.168.10.70

This command will make our node a manager node, and we are also advertising the IP address of the manager in the above command so that the slave or worker nodes can join the cluster.
Run the below command to verify the manager status and to view the list of nodes in your cluster:

~]# docker node ls
ID                           HOSTNAME    STATUS   AVAILABILITY   MANAGER STATUS
N64oy2sml1w188ps109mai67b *  dkmanager   Ready    Active         Leader
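
The 'docker swarm init' output also prints a ready-made 'docker swarm join' command containing a one-time worker token. The token below is a placeholder, not a real value; the actual command can be re-printed on the manager at any time with:

~]# docker swarm join-token worker

Run the printed command on workernode1 and workernode2; it will look similar to:

~]# docker swarm join --token SWMTKN-1-<worker-token> 172.168.10.70:2377

After the workers join, 'docker node ls' on the manager will list all three nodes.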
