Creating and using LXC containers with Ansible in-memory inventory

 ansible  lxc  Mon 10 October 2016

What we want to do

Ansible provides an lxc_container module to manage LXC containers on a remote host. It is very handy, but once you've deployed a container you need to manage the applications deployed inside it. Also with Ansible, obviously!

A simple approach could be to write a playbook to deploy the LXC containers, then generate a static inventory, and finally use this inventory with another playbook to deploy your final application.

Another approach is to use a single playbook. The first play deploys the LXC containers and registers each one in an in-memory inventory using the add_host module. The lxc_container module returns the IP addresses of each container (once it's started).

If your containers are connected to an internal bridge on the remote host, you also need to configure your SSH client so that Ansible can reach them.

An example of how it can be done

The example uses the following setup:

  • the LXC hosts are listed in the [lxc_hosts] group in the inventory

  • for each host a list of containers to manage is defined in the containers variable in a host_vars/{{inventory_hostname}} file, with content similar to this:

    containers:
      - name: memcached-1
        service: memcache
      - name: mysql-1
        service: mysql
    
  • containers are connected to an lxcbr0 bridge, on a 10.0.100.0/24 network

  • containers are deployed using a custom ubuntu-ansible template, based on the original ubuntu template. The template performs some extra configuration steps to ease Ansible integration:

    • installation of python2.7
    • passwordless sudo configuration
    • injection of an SSH public key

    You can use the container_command argument of the lxc_container module instead of using a custom template.
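
    For instance, a sketch of the container_command approach might look like the following. The exact bootstrap commands and the SSH key are assumptions standing in for what the custom template does, not taken from it:

    ```yaml
    # Sketch: bootstrap a stock ubuntu container for Ansible without a
    # custom template. The commands below (python install, sudoers entry,
    # authorized_keys injection) are assumed equivalents of the
    # ubuntu-ansible template's extra steps.
    - name: Create and bootstrap the containers
      lxc_container:
        template: ubuntu
        name: "{{ item.name }}"
        state: started
        container_command: |
          apt-get update
          apt-get install -y python2.7
          echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/ubuntu
          mkdir -p /home/ubuntu/.ssh
          echo 'ssh-rsa AAAA... admin@example' >> /home/ubuntu/.ssh/authorized_keys
          chown -R ubuntu:ubuntu /home/ubuntu/.ssh
      with_items: "{{ containers }}"
    ```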

Sample playbook

The first play creates the containers (if needed), and retrieves the dynamically assigned IP addresses of all managed containers:

- hosts: lxc_hosts
  become: true
  tasks:
  - name: Create the containers
    lxc_container:
      template: ubuntu-ansible
      name: "{{ item.name }}"
      state: started
    with_items: "{{ containers }}"
    register: containers_info

  - name: Wait for the network to be setup in the containers
    when: containers_info|changed
    pause: seconds=10

  - name: Get containers info now that IPs are available
    lxc_container:
      name: "{{ item.name }}"
    with_items: "{{ containers }}"
    register: containers_info

  - name: Register the hosts in the inventory
    add_host:
      name: "{{ item.lxc_container.ips.0 }}"
      group: "{{ item.item.service }}"
    with_items: "{{ containers_info.results }}"
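
A fixed 10-second pause works, but it is fragile on slow hosts. As an alternative, the second info task could poll until each container reports an IP address; a possible sketch (the retry and delay values are arbitrary assumptions):

```yaml
# Sketch: replace the fixed pause + second info task with a polling loop.
# With a loop, `until` is evaluated against the per-item result, which
# exposes the container facts under `lxc_container`.
- name: Wait for each container to get an IP address
  lxc_container:
    name: "{{ item.name }}"
  with_items: "{{ containers }}"
  register: containers_info
  until: containers_info.lxc_container.ips | length > 0
  retries: 12
  delay: 5
```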

The following plays use the newly added groups and hosts:

- hosts: memcache
  become: true
  tasks:
  - debug: msg="memcached deployment"

- hosts: mysql
  become: true
  tasks:
  - debug: msg="mysql deployment"

SSH client configuration

In the example setup, Ansible can't reach the created containers because they are connected to an isolated network. This can be dealt with through an SSH configuration in ~/.ssh/config:

Host lxc1
    Hostname lxc1.domain.com
    User localadmin

Host 10.0.100.*
    User ubuntu
    ForwardAgent yes
    ProxyCommand ssh -q lxc1 nc %h %p
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
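
With a recent OpenSSH client (7.3 or later), the ProxyCommand/nc pair can be replaced by the simpler ProxyJump directive:

```
Host 10.0.100.*
    User ubuntu
    ForwardAgent yes
    # ProxyJump (OpenSSH 7.3+) tunnels through lxc1 without needing nc
    ProxyJump lxc1
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
```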

Final word

Although this example deploys LXC containers, the same process can be used for any type of VM or container deployment: EC2, OpenStack, GCE, Azure, or any other platform.