This post assumes that you have already set up a Ceph RadosGW with Keystone authentication.

The Keystone admin token is the old, insecure and deprecated method to authenticate against an OpenStack Identity server. It's been used to bootstrap the creation of OpenStack users and projects, and a good practice is to disable this feature completely to avoid bad security surprises.

But the Ceph RadosGW documentation for the stable releases - jewel as of this writing - clearly states that you need to use this admin token, and that there's no other way to connect with Keystone:

Well that's not true.

Authentication using a service account has been supported for quite a while, but was never documented. Keystone v3 is also supported since the jewel release. The master docs have nice updates:

For keystone v3 you can use something like this in your ceph.conf:

rgw keystone url =
rgw keystone admin user = ceph
rgw keystone admin password = S3Cr3t
rgw keystone admin project = admin
rgw keystone admin domain = default
rgw keystone api version = 3
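
If your deployment still uses the Keystone v2 API, the equivalent configuration relies on a tenant instead of a project and domain. This is only a sketch based on the RadosGW Keystone options, so check it against the documentation for your release:

rgw keystone url =
rgw keystone admin user = ceph
rgw keystone admin password = S3Cr3t
rgw keystone admin tenant = admin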

You need to create a ceph service account and give it the admin role:

$ openstack user create ceph --password-prompt
$ openstack role add --user ceph --project admin admin
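
If you want to double check the assignment, something like this should list the admin role for the ceph user (the openstack client output format may vary slightly between releases):

$ openstack role assignment list --user ceph --project admin --names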

Don't forget to remove the admin_token_auth filter from your paste-deploy pipelines in /etc/keystone/keystone-paste.ini.
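
The exact filter list depends on your Keystone release, but after dropping admin_token_auth each pipeline that referenced it ends up looking something like this (the surrounding filters below are just an example, only the absence of admin_token_auth matters):

[pipeline:public_api]
pipeline = cors sizelimit url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service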

What we want to do

Ansible provides an lxc_container module to manage LXC containers on a remote host. It is very handy, but once you've deployed a container you also need to manage the applications deployed inside this container. With Ansible too, obviously!

A simple approach could be to write a playbook to deploy the LXC containers, then generate a static inventory, and finally use this inventory with another playbook to deploy your final application.

Another approach is to have a single playbook. The first play deploys the LXC containers and generates an in-memory inventory using the add_host module for each container. The lxc_container module returns the IP addresses of the container (once it's started).

If your containers are connected to an internal bridge on the remote host, you also need to configure your SSH client to help ansible access them.

An example of how it can be done

The example uses the following setup:

  • the LXC hosts are listed in the [lxc_hosts] group in the inventory

  • for each host a list of containers to manage is defined in the containers variable in a host_vars/{{inventory_hostname}} file, with content similar to this:

      - name: memcached-1
        service: memcache
      - name: mysql-1
        service: mysql
  • containers are connected to an lxcbr0 bridge, on the 10.0.100.0/24 network

  • containers are deployed using a custom ubuntu-ansible template, based on the original ubuntu template. The template provides some extra configuration steps to ease ansible integration:

    • installation of python2.7
    • passwordless sudo configuration
    • injection of an SSH public key

    You can use the container_command argument of the lxc_container module instead of using a custom template (see the sketch after this list).
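
If you prefer not to maintain a custom template, a rough equivalent is to run the preparation steps through container_command when creating the container. This is only a sketch: the package list, the ubuntu user and the injected public key are assumptions, not part of the original setup.

- name: Create and prepare the container
  lxc_container:
    name: "{{ item.name }}"
    template: ubuntu
    state: started
    # Commands below run inside the container once it is started (assumed steps).
    container_command: |
      apt-get update
      apt-get install -y python2.7 sudo
      echo "ubuntu ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/ubuntu
      install -d -m 0700 -o ubuntu -g ubuntu /home/ubuntu/.ssh
      echo "ssh-rsa AAAA... ansible" >> /home/ubuntu/.ssh/authorized_keys
  with_items: "{{ containers }}"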

Sample playbook

The first play creates the containers (if needed), and retrieves the dynamically assigned IP addresses of all managed containers:

- hosts: lxc_hosts
  become: true
  tasks:
  - name: Create the containers
    lxc_container:
      template: ubuntu-ansible
      name: "{{ item.name }}"
      state: started
    with_items: "{{ containers }}"
    register: containers_info

  - name: Wait for the network to be set up in the containers
    when: containers_info|changed
    pause: seconds=10

  - name: Get containers info now that IPs are available
    lxc_container:
      name: "{{ item.name }}"
    with_items: "{{ containers }}"
    register: containers_info

  - name: Register the hosts in the inventory
    add_host:
      name: "{{ item.lxc_container.ips.0 }}"
      group: "{{ item.item.service }}"
    with_items: "{{ containers_info.results }}"

The following plays use the newly added groups and hosts:

- hosts: memcache
  become: true
  tasks:
  - debug: msg="memcached deployment"

- hosts: mysql
  become: true
  tasks:
  - debug: msg="mysql deployment"

SSH client configuration

In the example setup Ansible can't reach the created containers because they are connected to an isolated network. This can be dealt with using an SSH client configuration in ~/.ssh/config:

Host lxc1
    User localadmin

Host 10.0.100.*
    User ubuntu
    ForwardAgent yes
    ProxyCommand ssh -q lxc1 nc %h %p
    StrictHostKeyChecking no

Final word

Although this example deploys LXC containers, the same process can be used for any type of VM/container deployment: EC2, OpenStack, GCE, Azure or any other platform.

I use LXC containers on my laptop for testing purposes quite a lot. I create, I destroy, I recreate. LXC is easy to use for this, but one thing was missing from my setup: the automatic creation of a DNS record for each container.

The lxc-net script used on Ubuntu to create the default lxcbr0 bridge provides almost everything to make this possible without too much effort.

The steps to set this up are:

  1. Update /etc/default/lxc-net to define a domain. This domain will be managed by the same dnsmasq process that already serves as DHCP server for the LXC containers.

    Sample configuration:

    # This is the domain name definition
    LXC_DOMAIN="lxc"
  2. Restart the service:

    $ sudo service lxc-net restart
  3. Validate that the dnsmasq process (the one listening on the lxcbr0 bridge address) can resolve a running container IP:

    $ dig @ container_name.lxc
    container_name.lxc.       0       IN      A

A nice bonus is that the dns configuration inside a newly started container allows short name resolution:

$ sudo lxc-start -n other_container
$ sleep 10
$ sudo lxc-attach -n other_container -- ping -c 2 container_name
PING container_name ( 56(84) bytes of data.
64 bytes from container_name.lxc ( icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from container_name.lxc ( icmp_seq=2 ttl=64 time=0.046 ms

To make this setup really usable, the host must be configured to redirect DNS queries for the lxc domain to the LXC-related dnsmasq process. By default Ubuntu configures /etc/resolv.conf to use 127.0.1.1 as DNS resolver; a local dnsmasq process listens on this address and takes care of forwarding the requests to the proper authoritative DNS.

To set up the forwarding, add a server directive for the lxc domain to /etc/dnsmasq.d/lxc.
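
Assuming the default lxcbr0 address of 10.0.3.1 (adjust it to match your LXC_ADDR setting), the directive looks like this:

    server=/lxc/10.0.3.1
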
If you're running a desktop version of Ubuntu, you probably use Network Manager. Symlink this configuration file to /etc/NetworkManager/dnsmasq.d/lxc and restart Network Manager:

$ sudo ln -s /etc/dnsmasq.d/lxc /etc/NetworkManager/dnsmasq.d/
$ sudo service network-manager restart

DNS resolution should now work on your host:

$ dig container_name.lxc
container_name.lxc.   0   IN  A

It's been a while since the 0.13 release of python-gitlab. For the 0.14 release I spent some time writing code examples to make the first steps with the API easier. Not all the objects are documented yet, but since there have been a lot of new features and some bug fixes, I wanted to get things out there.

python-gitlab is a python package and a gitlab CLI to interact with the Gitlab API.

To install the 0.14 version using pip:

pip install --upgrade python-gitlab

Download the tarballs from PyPI:

Documentation is available on read the docs:

Report bugs and send pull requests on GitHub; contributions are very welcome:

The problem

Ansible playbooks that can deploy applications in multiple contexts (for example a 1-node setup for tests, and a multi-node setup with HA for production) might have to deal with rather complex variable definitions. The templating system provided by Ansible is a great help, but it can sometimes be difficult and very verbose to use.

I recently had to solve a simple problem: depending on whether the HAProxy load balancer is installed on the node, the 15 balanced services have to listen on different ports to avoid conflicts.

One solution

Instead of computing the selected ports using the template system, I chose to develop a module that sets the chosen ports as a fact on every target. The module is called once at the beginning of the playbook, and the ports are then available as a variable in all the tasks.

The module

The module takes one mandatory boolean argument, with_haproxy.

The implementation looks like this (library/get_ports):

# Custom ports used by the services when HAProxy binds the official ones
INTERNAL = {
    'service1': 11001,
    'service2': 11002
}

# Official (public) service ports
PUBLIC = {
    'service1': 1001,
    'service2': 1002
}


def main():
    module = AnsibleModule(
        argument_spec = dict(
            with_haproxy = dict(type='bool', required=True)
        )
    )

    with_haproxy = module.params['with_haproxy']
    ports = {'public': PUBLIC}
    ports['internal'] = INTERNAL if with_haproxy else PUBLIC

    module.exit_json(changed=False, result="success",
                     ansible_facts={'ports': ports})


# import module snippets
from ansible.module_utils.basic import *
main()

The ansible_facts argument name is important: it tells Ansible to register this variable as a fact, so you don't need to use the register attribute in your task.

The ports dict holds the port information for public and internal access. The ports.internal dict is used to configure the services' listening ports: if HAProxy is used, they get custom (INTERNAL) ports to avoid conflicting with HAProxy; otherwise they use the official (PUBLIC) ports.
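
As an illustration of how the two dicts end up being consumed, here is a hypothetical haproxy.cfg.j2 fragment (the service name, the bind address fact and the local backend address are assumptions): HAProxy listens on the official public port and forwards to the internal port the service actually binds.

listen service1
    bind {{ ansible_default_ipv4.address }}:{{ ports.public.service1 }}
    server service1-local 127.0.0.1:{{ ports.internal.service1 }} check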

The playbook

To register the facts, the first task of the playbook looks like this:

- name: Register ports
  local_action:
    module: get_ports
    with_haproxy: true|false

- debug: var=ports.internal.service1
- debug: var=ports.internal.service2

Using local_action avoids a useless connection to the targets.


If you have not written modules for Ansible yet, have a look at the tutorial. They can be written in any language, although using Python makes things a lot easier.