I use LXC on my Ubuntu workstation quite often. LXD has been out for a while, and I tested it to see if I could use it as a direct replacement for LXC. The answer is yes! LXD provides nice management tools that didn't exist in LXC, but the mechanics are the same.

This blog post is a recap of what I did to set up a local installation. It assumes you already know what LXC is and how to use it.

Some differences with LXC

  • No more template scripts, LXD uses pre-built images. This has become quite common (think Docker/EC2/OpenStack Glance).
  • LXD runs as a daemon and can be managed remotely. If it runs locally, any user in the lxd group can talk to the daemon. APIs are great (a quick example follows this list).
  • Network management is way simpler, and doesn't require tweaking configuration files.
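
The remote management is worth a quick illustration. This is a sketch, assuming a second machine (lxd.example.com is a made-up name) runs LXD with its API exposed on the network:

# lxd.example.com is a placeholder for a remote LXD host with its API exposed
lxc remote add myserver lxd.example.com
lxc launch ubuntu:16.04 myserver:c1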

Install and configure LXD

Ubuntu 16.04 seems to come with LXD installed, but in case it isn't there:

sudo apt install lxd

You can then use the lxd init tool to set up the initial configuration:

sudo lxd init

You will have to answer questions about:

  • The storage back-end: directory or zfs. The zfs back-end is nice. It uses clones and snapshots to optimize performance when creating containers, and consumes less disk space.
  • The initial network.
  • The LXD API access: local only or exposed on a network.
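
These settings are not set in stone: they can be changed later with lxc config. For example, to expose the API on the network after the fact (8443 is the default port):

# the trust password lets remote clients register with the daemon
lxc config set core.https_address "[::]:8443"
lxc config set core.trust_password S3Cr3t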

The lxd command manages the daemon; use the lxc command to manage your containers.

Create and access containers

Container creation is straightforward:

lxc launch ubuntu:16.04 c1

ubuntu:16.04 is the reference to an existing container image. If LXD cannot find it locally, it will download it from a repository (Canonical's by default). The image will then be stored locally.

The container will be started after creation. Use the list or info subcommands to get information about the new container.
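
For example, the list output will look something like this (a sketch; your address will differ):

$ lxc list
+------+---------+-------------------+------+------------+-----------+
| NAME |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
| c1   | RUNNING | 10.0.4.242 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+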

You will not be able to access the container using SSH by default:

$ ssh ubuntu@10.0.4.242
Permission denied (publickey).

Just like for Ubuntu cloud instances, the default user doesn't have a password set, and you need to use an SSH key to authenticate. Some initial setup is required. Not handy, but it only has to be done once.

To configure your SSH key inside the container use the exec subcommand:

$ lxc exec c1 /bin/bash
root@c1:~# echo "YOUR PUBLIC KEY" > /home/ubuntu/.ssh/authorized_keys
root@c1:~# exit
exit

Validate that you can access the container:

$ ssh ubuntu@10.0.4.242
...
ubuntu@c1:~$

Congrats!

Now you can build a new image that contains your SSH key:

$ lxc stop c1
$ lxc publish c1 --alias ubuntu-ssh
$ lxc image list | grep ubuntu-ssh
$ lxc launch ubuntu-ssh c2

What's next

Stéphane Graber's blog contains a lot of very interesting articles about LXC/LXD.

You can set up DNS resolution in the same way you might have done for LXC.

The next step for me will be testing LXD as an OpenStack Nova plugin.


Note

This blog assumes that you have already set up a Ceph RadosGW with Keystone authentication.

The Keystone admin token is the old, insecure and deprecated method to authenticate against an OpenStack Identity server. It's been used to bootstrap the creation of OpenStack users and projects, and a good practice was to disable this feature completely to avoid bad security surprises.

But the Ceph RadosGW documentation for the stable releases - jewel as of this writing - clearly states that you need to use this admin token, and that there's no other way to connect with Keystone.

Well, that's not true.

Authentication using a service account has actually been supported for quite a while, but it was never documented. Keystone v3 is also supported since the jewel release. The master docs have nice updates.

For Keystone v3 you can use something like this in your ceph.conf:

[client.rgw.HOSTNAME]
rgw keystone url = http://keystone.host:35357
rgw keystone admin user = ceph
rgw keystone admin password = S3Cr3t
rgw keystone admin project = admin
rgw keystone admin domain = default
rgw keystone api version = 3
...

You need to create a ceph service account and give it the admin role:

$ openstack user create ceph --password-prompt
$ openstack role add --user ceph --project admin admin

Don't forget to disable the admin_token_auth filter from your paste-deploy pipeline in /etc/keystone/keystone-paste.ini.
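
The filter shows up in the pipeline definitions of keystone-paste.ini. A sketch of the change, where the "..." stand for the other filters in the pipeline (they vary between releases):

[pipeline:public_api]
# before
pipeline = ... token_auth admin_token_auth json_body ...
# after
pipeline = ... token_auth json_body ...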


What we want to do

Ansible provides an lxc_container module to manage LXC containers on a remote host. It is very handy, but once you've deployed a container you need to manage the applications deployed inside that container. Also with Ansible, obviously!

A simple approach could be to write a playbook to deploy the LXC containers, then generate a static inventory, and finally use this inventory with another playbook to deploy your final application.

Another approach is to have a single playbook. The first play deploys the LXC containers and generates an in-memory inventory using the add_host module for each container. The lxc_container module returns the IP addresses of the container (once it's started).

If your containers are connected to an internal bridge on the remote host, you also need to configure your SSH client so that Ansible can access them.

An example of how it can be done

The example uses the following setup:

  • the LXC hosts are listed in the [lxc_hosts] group in the inventory

  • for each host a list of containers to manage is defined in the containers variable in a host_vars/{{inventory_hostname}} file, with content similar to this:

    containers:
      - name: memcached-1
        service: memcache
      - name: mysql-1
        service: mysql
    
  • containers are connected to an lxcbr0 bridge, on a 10.0.100.0/24 network

  • containers are deployed using a custom ubuntu-ansible template, based on the original ubuntu template. The template provides some extra configuration steps to ease Ansible integration:

    • installation of python2.7
    • passwordless sudo configuration
    • injection of an SSH public key

    You can use the container_command argument of the lxc_container module instead of using a custom template (a sketch follows this list).
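
For reference, here is a rough sketch of that container_command alternative. The exact commands are assumptions about what the containers need, not a tested recipe:

- name: Create the containers
  lxc_container:
    template: ubuntu
    name: "{{ item.name }}"
    state: started
    container_command: |
      # prepare the container for Ansible: python, passwordless sudo, SSH key
      apt-get install -y python2.7
      echo "ubuntu ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/ubuntu
      mkdir -p /home/ubuntu/.ssh
      echo "YOUR PUBLIC KEY" >> /home/ubuntu/.ssh/authorized_keys
      chown -R ubuntu:ubuntu /home/ubuntu/.ssh
  with_items: "{{ containers }}"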

Sample playbook

The first play creates the containers (if needed), and retrieves the dynamically assigned IP addresses of all managed containers:

- hosts: lxc_hosts
  become: true
  tasks:
  - name: Create the containers
    lxc_container:
      template: ubuntu-ansible
      name: "{{ item.name }}"
      state: started
    with_items: "{{ containers }}"
    register: containers_info

  - name: Wait for the network to be setup in the containers
    when: containers_info|changed
    pause: seconds=10

  - name: Get containers info now that IPs are available
    lxc_container:
      name: "{{ item.name }}"
    with_items: "{{ containers }}"
    register: containers_info

  - name: Register the hosts in the inventory
    add_host:
      name: "{{ item.lxc_container.ips.0 }}"
      group: "{{ item.item.service }}"
    with_items: "{{ containers_info.results }}"

The following plays use the newly added groups and hosts:

- hosts: memcache
  become: true
  tasks:
  - debug: msg="memcached deployment"

- hosts: mysql
  become: true
  tasks:
  - debug: msg="mysql deployment"
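
Everything then runs in one shot. Assuming the playbook is saved as site.yml and the inventory file is named hosts (both names are placeholders):

ansible-playbook -i hosts site.yml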

SSH client configuration

In the example setup Ansible can't reach the created containers because they are connected to an isolated network. This can be dealt with using an SSH configuration in ~/.ssh/config:

Host lxc1
    Hostname lxc1.domain.com
    User localadmin

Host 10.0.100.*
    User ubuntu
    ForwardAgent yes
    ProxyCommand ssh -q lxc1 nc %h %p
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

Final word

Although this example deploys LXC containers, the same process can be used for any type of VM/container deployment: EC2, OpenStack, GCE, Azure or any other platform.


I use LXC containers on my laptop for testing purposes quite a lot. I create, I destroy, I recreate. LXC is easy to use for this, but one thing was missing from my setup: the automatic creation of a DNS record.

The lxc-net script used on Ubuntu to create the default lxcbr0 bridge provides almost everything to make this possible without too much effort.

The steps to set this up are:

  1. Update /etc/default/lxc-net to define a domain. This domain will be managed by the same dnsmasq process that already serves as DHCP server for the LXC containers.

    Sample configuration:

    USE_LXC_BRIDGE="true"
    LXC_BRIDGE="lxcbr0"
    LXC_ADDR="10.0.3.1"
    LXC_NETMASK="255.255.255.0"
    LXC_NETWORK="10.0.3.0/24"
    LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
    LXC_DHCP_MAX="253"
    # This is the domain name definition
    LXC_DOMAIN="lxc"
    
  2. Restart the service:

    $ sudo service lxc-net restart
    
  3. Validate that the dnsmasq process can resolve a running container IP:

    $ dig @10.0.3.1 container_name.lxc
    ...
    ;; ANSWER SECTION:
    container_name.lxc.       0       IN      A       10.0.3.156
    ...
    

A nice bonus is that the DNS configuration inside a newly started container allows short-name resolution:

$ sudo lxc-start -n other_container
$ sleep 10
$ sudo lxc-attach -n other_container -- ping -c 2 container_name
PING container_name (10.0.3.220) 56(84) bytes of data.
64 bytes from container_name.lxc (10.0.3.220): icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from container_name.lxc (10.0.3.220): icmp_seq=2 ttl=64 time=0.046 ms

To make this setup really usable, the host must be configured to redirect DNS queries to the LXC-related dnsmasq process. By default Ubuntu configures /etc/resolv.conf to use 127.0.1.1 as the DNS resolver; a local dnsmasq process takes care of forwarding the requests to the proper authoritative DNS server.

To setup the forwarding, add the following line to /etc/dnsmasq.d/lxc:

server=/lxc/10.0.3.1

If you're running a desktop version of Ubuntu, you probably use Network Manager. Symlink this configuration file to /etc/NetworkManager/dnsmasq.d/lxc and restart Network Manager:

$ sudo ln -s /etc/dnsmasq.d/lxc /etc/NetworkManager/dnsmasq.d/
$ sudo service network-manager restart

DNS resolution should now work on your host:

$ dig container_name.lxc
...
;; ANSWER SECTION:
container_name.lxc.   0   IN  A   10.0.3.156
...

It's been a while since the 0.13 release of python-gitlab. For the 0.14 release I spent some time writing code examples to make the first steps with the API easier. Not all the objects are documented yet, but since there have been a lot of new features and some bug fixes, I wanted to get things out there.

python-gitlab is a Python package and a gitlab CLI to interact with the GitLab API.
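
A minimal sketch of what the Python API looks like (the server URL and token are placeholders):

import gitlab

# connect and authenticate using a private token
gl = gitlab.Gitlab('https://gitlab.example.com', 'your_private_token')
gl.auth()

# list the projects you have access to
for project in gl.projects.list():
    print(project.name)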

To install the 0.14 version using pip:

pip install --upgrade python-gitlab

Download the tarballs from PyPI: https://pypi.python.org/pypi/python-gitlab

Documentation is available on Read the Docs: http://python-gitlab.readthedocs.io/en/stable/

Report bugs and send pull requests on GitHub; contributions are very welcome: http://github.com/gpocentek/python-gitlab