Ansible role to Configure K8S Multi Node Cluster over AWS Cloud

Rahul Bhardwaj
6 min read · Mar 8, 2021


Introduction:

This article is divided into two parts:

  • How to build a dynamic inventory and configure AWS instances using an Ansible playbook.
  • How to create roles to configure a K8s multi-node cluster.

Prerequisites:

  • Ansible basics
  • AWS EC2 instances
  • Kubernetes basics

HOW TO CREATE AND WORK WITH DYNAMIC INVENTORY

Make sure you have Ansible installed along with its basic requirements, such as boto3 for the dynamic inventory. Starting from scratch, the commands you need to run are:

yum install python3 -y
pip3 install boto3
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
yum install ansible -y

This installs Python, the required library, and Ansible for you.

Now let’s build the inventory. For a dynamic inventory we will use two files, ec2.py and ec2.ini, from the open-source community; they fetch information about the instances we have on the AWS cloud. To download these files, run the following commands in your inventory directory:

wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini

After downloading them, make the ec2.py file executable by running:

chmod +x ec2.py

Note: Use a Red Hat-based OS for smoother use of these files.

After this, make changes in your ansible.cfg file, which will most likely be at /etc/ansible/ansible.cfg. The main settings to take care of are:

[defaults]
inventory = /path_of_your_inventory
remote_user = ec2-user
roles_path = /path_of_your_roles_folder
host_key_checking = False
private_key_file = /path_of_your_.pem_file

[privilege_escalation]
become = True
become_method = sudo

The private key is the one you use for instance authentication; it lets Ansible authenticate its requests to the instances.

After this you’ll need AWS IAM user credentials to access the resources of your AWS account. Export them with the following commands:

export AWS_ACCESS_KEY_ID='AKIAQRT*******************'
export AWS_SECRET_ACCESS_KEY='6+SCCNzQ*****************************'

Remember: every time you use a different session, or your session expires (as can happen with PuTTY), you’ll have to run these two commands again.

Now you’re all set to run the Ansible ping command:

ansible all -m ping
Able to ping the AWS EC2 instances!

To ping particular resources you can make use of tags:

ansible tag_k8s_node -m ping
ansible tag_ansible_mn -m ping
ansible tag_nodes_node1 -m ping

CREATING ANSIBLE ROLE FOR CONFIGURING KUBERNETES MULTI NODE CLUSTER

We’re going to create four roles here: the first launches the instances, the second installs Docker, the third configures the K8s master, and the fourth configures the K8s workers.

I’ll describe the main.yml file first, so anyone doing this for the first time gets an idea of what we’re going to do.

The main file interacts with the resources with the help of the tags described earlier. How these tags work on resources that don’t exist yet is discussed in the upcoming section.
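To make the overall flow concrete, here is a minimal sketch of what such a top-level playbook could look like. The role names (ec2_launch, docker_install, k8s_master, k8s_worker) and the tag-derived group names are illustrative assumptions, not the article’s exact names; ec2.py turns instance tags into groups of the form tag_<key>_<value>:

```yaml
# setup.yml — illustrative top-level playbook tying the four roles together
- hosts: localhost          # AWS API calls run from the control node
  roles:
    - ec2_launch            # create the EC2 instances

- hosts: tag_ansible_mn     # common tag shared by every cluster node
  roles:
    - docker_install

- hosts: tag_k8s_master     # node-specific tag set at launch time
  roles:
    - k8s_master

- hosts: tag_k8s_node
  roles:
    - k8s_worker
```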

Role For Launching ec2 Instances

Use the ansible-galaxy command to create a role in your directory:

ansible-galaxy role init <your_role_name>

Go to your role and then to its tasks/main.yml file. To create an AWS resource we run the Ansible tasks on localhost, supplying the credentials required to access our AWS account.

You have to provide key_name, instance_type, image_id, tags (ansible, k8s), region, security_group, name, aws_secret_key, and aws_access_key. Note: when providing the key name, do not mention the extension, e.g. .pem or .ppk.

The tags play a key role in the whole procedure, as they are what we use to specify where each configuration should take place. We mainly use two tags: k8s (for node-specific configuration, like master or worker) and ansible (for common configuration, like installing Docker). These tags come into use in the main.yml file, which makes use of all the roles we create here.
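A launch task combining these parameters and tags could look like the following sketch, using the classic ec2 module that ships with Ansible 2.9 (the same release the contrib inventory scripts come from). Every value is pulled from vars/main.yml; the tag values shown are illustrative:

```yaml
# roles/ec2_launch/tasks/main.yml — sketch of launching one tagged instance
- name: Launch the master instance
  ec2:
    key_name: "{{ key_name }}"          # .pem name without the extension
    instance_type: "{{ instance_type }}"
    image: "{{ image_id }}"
    region: "{{ region }}"
    group: "{{ security_group }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    wait: yes                           # block until the instance is running
    instance_tags:
      Name: os1
      ansible: mn                       # common tag used by the docker role
      k8s: master                       # node-specific tag (master/node)
```

Worker instances would use the same task with k8s: node and a count of 2.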

A problem you might face while configuring the freshly launched instances is that Ansible works from the inventory data gathered at the beginning of playbook execution, when there were no AWS instances yet. So the part of main.yml that is supposed to install Docker (the next step) will fail. To solve this, we first make the playbook wait for some time, then use the meta module to refresh its inventory information.
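The wait-then-refresh step described above can be sketched as two tasks at the end of the launch role; the two-minute pause is an arbitrary choice, not a fixed requirement:

```yaml
# Give AWS time to boot the instances, then rebuild the in-memory
# inventory so the new hosts are visible to the following plays.
- name: Wait for the instances to come up
  pause:
    minutes: 2

- name: Refresh the dynamic inventory
  meta: refresh_inventory
```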

After this, go to the role’s vars/main.yml file and give values for the variables used in the task file.

By default I’ve provided values that launch free-tier Amazon Linux t2.micro instances in the Mumbai region, with Name tags os1, os2, and os3.
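Such a vars file could look like the sketch below. The key name, security group, and AMI ID are placeholders you must replace with your own:

```yaml
# roles/ec2_launch/vars/main.yml — illustrative defaults
key_name: mykey                    # your key pair name, without .pem/.ppk
instance_type: t2.micro            # free-tier eligible
image_id: ami-0xxxxxxxxxxxxxxxx    # an Amazon Linux AMI for your region
region: ap-south-1                 # Mumbai
security_group: k8s-sg
```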

Role For Installing Docker in all Instances

Installing Docker needs the --nobest flag after the command to get the best installable version, which is difficult to express with the common Ansible modules, so we’ll use the command module here.
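A minimal sketch of this role, assuming the Docker CE yum repository still has to be added first (the repo URL is Docker’s standard CentOS repository, used here illustratively):

```yaml
# roles/docker_install/tasks/main.yml — sketch
- name: Add the Docker CE yum repository
  yum_repository:
    name: docker-ce
    description: Docker CE stable
    baseurl: https://download.docker.com/linux/centos/7/x86_64/stable/
    gpgcheck: no

- name: Install Docker with --nobest (no yum-module equivalent)
  command: yum install docker-ce --nobest -y

- name: Start and enable the Docker service
  service:
    name: docker
    state: started
    enabled: yes
```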

Role For Configuring Master Node

Configuring Kubernetes involves a lot of steps if you do it all manually. In the Ansible role we’ll use many modules: yum, command, copy, file, blockinfile, etc.

In some places the normal command module won’t work. There we’ll use the raw module instead, as it works at a lower level with the host’s shell.

Remember that some steps, like running the init command, will give an error when run a second time (for example after a failure elsewhere forced a re-run). So use ignore_errors at the places where it is required.
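The initialisation step with ignore_errors could be sketched like this; the pod CIDR and the preflight-error overrides (needed because t2.micro has a single CPU and little memory) are illustrative choices:

```yaml
# Sketch of the kubeadm init tasks in roles/k8s_master/tasks/main.yml
- name: Initialise the control plane
  command: kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
  ignore_errors: yes          # re-runs fail with "already initialised"

- name: Set up kubeconfig for the admin user
  shell: |
    mkdir -p $HOME/.kube
    cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
  ignore_errors: yes
```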

After you create the token, it is a big challenge to put it into a variable and use it in another hosts block if you stick to the basic modules. So for that we make use of a dummy host: we first put the token into a register, then hand it over to the dummy host, which keeps the value available for the rest of the playbook.
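This register-and-dummy-host pattern is commonly built with the add_host module; a sketch, with the host name "dummy" and the variable name join_command chosen for illustration:

```yaml
# Capture the join command on the master and park it on a dummy host
# so that a later play (the worker play) can read it via hostvars.
- name: Generate the cluster join command
  command: kubeadm token create --print-join-command
  register: join_cmd

- name: Store the join command on a dummy host
  add_host:
    name: dummy
    join_command: "{{ join_cmd.stdout }}"
```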

Role to Configure Worker Nodes

This role has almost the same configuration, except for a few steps that are excluded here.

To join the cluster, we have to use the token created by the master node, and for that we take the help of the dummy variable we created earlier. Again, the command module won’t work here, so we use the shell module instead.
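The join task then reduces to one line, reading the value stored on the dummy host (the host and variable names match the illustrative ones used on the master side):

```yaml
# Sketch of the join task in roles/k8s_worker/tasks/main.yml
- name: Join the worker to the cluster
  shell: "{{ hostvars['dummy']['join_command'] }}"
  ignore_errors: yes          # the node may already be joined on re-runs
```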

And your worker node is configured.

The main.yml file will automatically run this role for the two instances having worker tags.

Now you can go to the master node and run a kubectl command such as kubectl get nodes to verify the cluster.

Kubernetes Multi Node Cluster Successfully Configured!
