
Managing Docker Containers Using An Ansible Container

05 November 2016

In this article we will learn about a new automation tool called Ansible. It can be used in an IT infrastructure to deploy and manage applications on multiple nodes from a single server. One of Ansible's most important advantages is that it is agentless: you do not have to install any extra packages on the nodes you are managing, and all the work is done from one management node.

First, let’s create a Docker image based on Alpine Linux that contains the extra packages we need for an Ansible container. This image will come in handy if we ever need to create more administration nodes:

purple@docker:~ # docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:3dcdb92d7432d56604d4545cbd324b14e647b313626d99b889d0626de158f73a
Status: Image is up to date for alpine:latest
purple@docker:~ # docker run -ti --name alpine_ansible \
-e ANSIBLE_HOST_KEY_CHECKING=False alpine sh
/ # apk update
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/community/x86_64/APKINDEX.tar.gz
v3.4.1-19-g2a0114f [http://dl-cdn.alpinelinux.org/alpine/v3.4/main]
v3.4.0-75-g8d1dc52 [http://dl-cdn.alpinelinux.org/alpine/v3.4/community]
OK: 5958 distinct packages available
/ # apk add ansible openssh
(1/22) Installing libbz2 (1.0.6-r4)
[...]
OK: 81 MiB in 33 packages
/ # rm -rf /var/cache/apk/*
/ # exit
purple@docker:~ # docker commit alpine_ansible alpine_ansible:latest
sha256:4fb69345a580ee3c53f008c9a605c8c2e9b38922476ee898e242708a94c43057
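
If you prefer a reproducible build over the interactive commit above, the same image can be described with a Dockerfile. This is only a sketch of an equivalent, assuming an otherwise empty build directory:

FROM alpine:latest
RUN apk add --no-cache ansible openssh
ENV ANSIBLE_HOST_KEY_CHECKING=False

Building it with docker build -t alpine_ansible:latest . should give you roughly the same image.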

After creating the image, we need some additional configuration for the container to work properly. The first step is to create an SSH key pair:

purple@docker:~ # mkdir ansible;cd ansible
purple@docker:~/ansible # ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/home/purple/.ssh/id_rsa): ansible_rsa 
[...]
c8:a2:04:e0:a3:95:65:49:db:a0:fa:f3:33:d7:4b:0c purple@docker
purple@docker:~/ansible # ls
ansible_rsa ansible_rsa.pub
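
The private key will later be bind-mounted into the Ansible container as /root/.ssh/id_rsa, so it is worth making sure its permissions are strict enough for the OpenSSH client to accept it (ssh-keygen normally sets this already, so the step is purely defensive):

purple@docker:~/ansible # chmod 600 ansible_rsa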

Next, we will set up two test containers for our experiment and create a hosts file containing their IP addresses:

purple@docker:~ # docker pull macropin/sshd
Using default tag: latest
latest: Pulling from macropin/sshd
[...]
Status: Downloaded newer image for macropin/sshd:latest
purple@docker:~ # docker run --name ssh1 \
-v /home/purple/ansible/ansible_rsa.pub:/root/.ssh/authorized_keys \
-d macropin/sshd
2ef60cc86c39086960e7bd7b7ce00107702f10fa321a337cc24a031b6e3c1dea
purple@docker:~ # docker run --name ssh2 \
-v /home/purple/ansible/ansible_rsa.pub:/root/.ssh/authorized_keys \
-d macropin/sshd
ed68d437c92fdd4048011d5d8d1a4c60cec324b53eaeed163d6ca0d0cbbe9167
purple@docker:~ # docker exec -ti ssh1 ip addr | grep 172.17
 inet 172.17.0.4/16 scope global eth0
purple@docker:~ # docker exec -ti ssh2 ip addr | grep 172.17
 inet 172.17.0.5/16 scope global eth0
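
If you would rather not grep the output of ip addr, the same addresses can be read from Docker's metadata with a Go template; on the default bridge network the following should print each container's IP:

purple@docker:~ # docker inspect -f '{{ .NetworkSettings.IPAddress }}' ssh1 ssh2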


After the containers are up and running, we will create the Ansible hosts file:

[servers]
172.17.0.4
172.17.0.5
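
The inventory is just a plain text file on the Docker host; here we assume it is saved as /home/purple/ansible/hosts, since that is the path we will bind-mount in the next step. One quick way to create it:

purple@docker:~ # printf '[servers]\n172.17.0.4\n172.17.0.5\n' > /home/purple/ansible/hosts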

Now that the image is built and the local setup is complete, we can create our administration server, making sure we bind-mount the hosts file and the private key from the host:

purple@docker:~ # docker run --name ansible --net=host \
-dt -v /home/purple/ansible/hosts:/etc/ansible/hosts \
-v /home/purple/ansible/ansible_rsa:/root/.ssh/id_rsa \
alpine_ansible
38895f8ac61e514ef51a40bdc200b123de0fd6fa2c0158712548716c5b444833
purple@docker:~ # docker exec -ti ansible ansible --version
ansible 2.1.0.0
 config file = 
 configured module search path = Default w/o overrides

Note the trick we used to start the container: the -dt flags keep it running even though no service is running inside it. The -d flag runs the container in the background, but the most important one is -t, which allocates a pseudo-TTY so the container's shell does not exit immediately. Another important option is --net=host, which gives Ansible access to the host's network in case we need it.
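
Before handing things over to Ansible, it can be useful to confirm that the container can actually reach a managed node over SSH with the mounted key; a quick manual check could look like this (using one of the test container IPs from earlier):

purple@docker:~ # docker exec -ti ansible \
ssh -i /root/.ssh/id_rsa -o StrictHostKeyChecking=no root@172.17.0.4 hostname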

Finally, to check if everything is working, we will execute some commands inside the containers we are managing:

purple@docker:~ # docker exec -ti ansible ansible -m raw -a "apk update" all
172.17.0.4 | SUCCESS | rc=0 >>
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/main/x86_64/APKINDEX.tar.gz
[...]
172.17.0.5 | SUCCESS | rc=0 >>
[...]
OK: 5958 distinct packages available
purple@docker:~ # docker exec -ti ansible ansible -m raw -a "apk add python" all
172.17.0.4 | SUCCESS | rc=0 >>
(1/5) Installing libbz2 (1.0.6-r4)
[...]
172.17.0.5 | SUCCESS | rc=0 >>
(1/5) Installing libbz2 (1.0.6-r4)
[...]
Executing busybox-1.24.2-r9.trigger
OK: 73 MiB in 34 packages
purple@docker:~ # docker exec -ti ansible ansible -m ping all
172.17.0.5 | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}
172.17.0.4 | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}
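
The ping module above already proves that regular modules work now that Python is installed on the targets. Another quick check is gathering a subset of facts with the setup module (the filter pattern is just an illustration):

purple@docker:~ # docker exec -ti ansible ansible -m setup \
-a "filter=ansible_distribution*" all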

If you want to run an Ansible playbook, you can use the docker cp command to copy it into the container. The following playbook installs nano on all of the managed containers:

---
- hosts: servers
  tasks:
    - name: Install nano
      apk: name=nano

Just create a file called playbook.yml in your home directory, paste in the content above, copy it into the container, and execute it with the ansible-playbook command:

purple@docker:~ # docker cp playbook.yml ansible:/tmp/playbook.yml
purple@docker:~ # docker exec -ti ansible ansible-playbook /tmp/playbook.yml

PLAY [servers] *****************************************************************

TASK [setup] *******************************************************************
ok: [172.17.0.4]
ok: [172.17.0.5]
TASK [Install nano] ************************************************************
changed: [172.17.0.5]
changed: [172.17.0.4]

PLAY RECAP *********************************************************************
172.17.0.4 : ok=2 changed=1 unreachable=0 failed=0 
172.17.0.5 : ok=2 changed=1 unreachable=0 failed=0
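
As a final check, you can run the playbook a second time; the apk module is idempotent, so the Install nano task should now report ok instead of changed. You can also confirm that the binary is present with a raw command:

purple@docker:~ # docker exec -ti ansible ansible -m raw -a "which nano" all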


by George Lucian Tabacar
