What we want to do

Ansible provides an lxc_container module to manage LXC containers on a remote host. It is very handy, but once you've deployed a container you need to manage the applications deployed inside it. Also with Ansible, obviously!

A simple approach could be to write a playbook to deploy the LXC containers, then generate a static inventory, and finally use this inventory with another playbook to deploy your final application.

Another approach is to use a single playbook. The first play deploys the LXC containers and generates an in-memory inventory using the add_host module for each container. The lxc_container module returns the container's IP addresses (once it's started).

If your containers are connected to an internal bridge on the remote host, you also need to configure your SSH client to help Ansible access them.

An example of how it can be done

The example uses the following setup:

  • the LXC hosts are listed in the [lxc_hosts] group in the inventory

  • for each host a list of containers to manage is defined in the containers variable in a host_vars/{{inventory_hostname}} file, with content similar to this:

    containers:
      - name: memcached-1
        service: memcache
      - name: mysql-1
        service: mysql
    
  • containers are connected to an lxcbr0 bridge, on a 10.0.100.0/24 network

  • containers are deployed using a custom ubuntu-ansible template, based on the original ubuntu template. The template provides some extra configuration steps to ease Ansible integration:

    • installation of python2.7
    • password-less sudo configuration
    • injection of an SSH public key

    You can use the container_command argument of the lxc_container module instead of using a custom template.
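If you prefer to stay with the stock ubuntu template, the same preparation could be done at creation time with container_command. This is a sketch, not tested against every Ansible version, and the exact package list is an assumption:

    - name: Create and prepare the containers
      lxc_container:
        template: ubuntu
        name: "{{ item.name }}"
        state: started
        # Commands run inside the container after it starts
        container_command: |
          apt-get update
          apt-get install -y python2.7
          ln -sf /usr/bin/python2.7 /usr/bin/python
      with_items: "{{ containers }}"

You would still need to handle sudo configuration and SSH key injection the same way.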

Sample playbook

The first play creates the containers (if needed), and retrieves the dynamically assigned IP addresses of all managed containers:

- hosts: lxc_hosts
  become: true
  tasks:
  - name: Create the containers
    lxc_container:
      template: ubuntu-ansible
      name: "{{ item.name }}"
      state: started
    with_items: "{{ containers }}"
    register: containers_info

  - name: Wait for the network to be setup in the containers
    when: containers_info|changed
    pause: seconds=10

  - name: Get containers info now that IPs are available
    lxc_container:
      name: "{{ item.name }}"
    with_items: "{{ containers }}"
    register: containers_info

  - name: Register the hosts in the inventory
    add_host:
      name: "{{ item.lxc_container.ips.0 }}"
      group: "{{ item.item.service }}"
    with_items: "{{ containers_info.results }}"

The following plays use the newly added groups and hosts:

- hosts: memcache
  become: true
  tasks:
  - debug: msg="memcached deployment"

- hosts: mysql
  become: true
  tasks:
  - debug: msg="mysql deployment"

SSH client configuration

In the example setup Ansible can't reach the created containers because they are connected to an isolated network. This can be dealt with using an SSH configuration in ~/.ssh/config:

Host lxc1
    Hostname lxc1.domain.com
    User localadmin

Host 10.0.100.*
    User ubuntu
    ForwardAgent yes
    ProxyCommand ssh -q lxc1 nc %h %p
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
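With a reasonably recent OpenSSH client, the nc-based ProxyCommand can be replaced with the built-in -W option, which removes the need for netcat on the gateway host:

    Host 10.0.100.*
        User ubuntu
        ProxyCommand ssh -q -W %h:%p lxc1

The rest of the configuration (agent forwarding, host key handling) stays the same.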

Final word

Although this example deploys LXC containers, the same process can be used for any type of VM/container deployment: EC2, OpenStack, GCE, Azure or any other platform.


The problem

Ansible playbooks that can deploy applications in multiple contexts (for example a 1-node setup for tests, and a multi-node setup with HA for production) might have to deal with rather complex variable definitions. The templating system provided by Ansible is a great help, but it can sometimes be difficult and very verbose to use.

I recently had to solve a simple problem: depending on which node the HAProxy load balancer was installed on, the 15 balanced services had to listen on different ports to avoid conflicts.

One solution

Instead of computing the selected ports with the template system, I chose to develop a module that sets the chosen ports as a fact on every target. The module is called once at the beginning of the playbook, and the ports are then available as a variable in all the tasks.

The module

The module takes one mandatory boolean argument, with_haproxy.

The implementation looks like this (library/get_ports.py):

INTERNAL = {
    'service1': 11001,
    'service2': 11002
}
PUBLIC = {
    'service1': 1001,
    'service2': 1002
}

def main():
    module = AnsibleModule(
        argument_spec = dict(
            with_haproxy = dict(type='bool', required=True)
        )
    )

    with_haproxy = module.params['with_haproxy']
    ports = {'public': PUBLIC}
    ports['internal'] = INTERNAL if with_haproxy else PUBLIC

    module.exit_json(changed=False, result="success",
                     ansible_facts={'ports': ports})

from ansible.module_utils.basic import *
main()

The ansible_facts argument name is important: it tells Ansible to register this variable as a fact, so you don't need to use the register attribute in your task.

The ports dict holds the port information for public and internal access. The ports.internal dict is used to configure the services' ports. If HAProxy is used, they get custom (INTERNAL) ports to avoid conflicting with HAProxy. Otherwise they use the official (PUBLIC) ports.
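The selection logic can be exercised on its own, outside of Ansible. This standalone sketch reuses the same INTERNAL/PUBLIC maps as library/get_ports.py:

```python
# Port maps copied from the module: INTERNAL ports step aside for HAProxy,
# PUBLIC ports are the official service ports.
INTERNAL = {
    'service1': 11001,
    'service2': 11002
}
PUBLIC = {
    'service1': 1001,
    'service2': 1002
}

def get_ports(with_haproxy):
    """Return the ports fact structure built by the module.

    Public ports are always the official ones; internal ports move to the
    custom range only when HAProxy sits in front of the services.
    """
    return {
        'public': PUBLIC,
        'internal': INTERNAL if with_haproxy else PUBLIC,
    }
```

With with_haproxy=True, ports.internal.service1 resolves to 11001; without HAProxy it falls back to the public port 1001.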

The playbook

To register the facts, the first task of the playbook looks like this:

- name: Register ports
  local_action:
    module: get_ports
    with_haproxy: true|false

- debug: var=ports.internal.service1
- debug: var=ports.internal.service2

The local_action module avoids a useless connection to the targets.
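Once the fact is registered, it can be consumed like any other variable in later tasks. A sketch (the service name and template file are hypothetical, only the ports variable comes from the module):

    - name: Configure service1
      template:
        src: service1.conf.j2    # hypothetical template
        dest: /etc/service1.conf
      vars:
        listen_port: "{{ ports.internal.service1 }}"

Inside service1.conf.j2 you would then simply reference {{ listen_port }}.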

Note

If you have not written modules for Ansible yet, have a look at the tutorial. They can be written in any language, although using Python makes things a lot easier.


After several years of dealing with Drupal 6 I made the switch to a static site generator for this web space. Several reasons:

  • Updating Drupal is a pain
  • Switching to Drupal 7 didn't work out of the box, and I didn't want to spend days on this
  • The content editor is web/HTML/wiki-style based; I prefer a simple text-based editor

There are a lot of static site generators out there; I had a few criteria to help me decide which one to use:

  • Open Source
  • Possibility to write articles in RST rather than MD
  • Written in Python so I can easily modify the behavior if needed
  • Theming support, and multiple themes available - HTML/CSS is not my strong suit

The tool that seemed to best fit my needs was Pelican, so I gave it a go, and it worked very well for what I wanted to do.

I use a couple of plugins to handle the documentation pages and the sitemap.xml generation, and the theme is based on new-bootstrap2 with some simple modifications.

I now edit this web space with vim and publish with ansible, I feel at home!


Medibuntu is going down.

The project is not really needed nowadays, except for one package: libdvdcss. This package is now maintained by Jonathan Riddell at Blue Systems. It is available in a repository hosted by VideoLAN.

To disable the Medibuntu repository and enable the libdvdcss one, use these commands:

sudo rm /etc/apt/sources.list.d/medibuntu.list
curl ftp://ftp.videolan.org/pub/debian/videolan-apt.asc | sudo apt-key add -
echo "deb ftp://ftp.videolan.org/pub/debian/stable ./" | sudo tee /etc/apt/sources.list.d/libdvdcss.list
sudo apt-get update

If you are using Ubuntu saucy, you can also install libdvdcss using an alternative method (make sure to install/upgrade the libdvdread4 package first).
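If I remember correctly, the alternative method on recent Ubuntu releases relies on the helper script shipped with libdvdread4 (check that the script exists on your release before running it):

    sudo apt-get install libdvdread4
    sudo /usr/share/doc/libdvdread4/install-css.sh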

I'll keep the repository online for now, at least until the Ubuntu 13.10 release. Expect the repository to be down after that. An ISO image of the current state of the repository is available at <http://archive.pocentek.net/medibuntu/>.

I recommend disabling the repository if you are currently using it.

Thanks to everyone who contributed to the project (package maintainers, server admins, ...).