OpenShift Origin on a single node

 openshift  ansible  Sat 09 December 2017

I needed to deploy an OpenShift Origin instance for testing purposes. This article describes how I used openshift-ansible to deploy the software.

Existing tools

There are several existing solutions to do this, minishift and the oc cluster up command being the most common ones.

These solutions work fine but provide a limited set of features by default.

Environment

I used an x86 physical server for the deployment:

  • 8 cores
  • 32 GB of RAM
  • 2 × 1 TB disks

OpenShift and the ansible playbook only support Red Hat-like distributions. I used a minimal CentOS 7.4 installation, without SELinux and without firewalld.

The machine DNS name is op1.pocentek.net.
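Before going further, a couple of quick checks confirm these prerequisites (they are not part of the playbook, just a sanity check):

# getenforce  # should report Disabled or Permissive
# systemctl is-enabled firewalld  # disabled, or an error if the package is absent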

Docker setup

The OpenShift playbook requires a working docker-engine installation on the target host. For better performance OpenShift recommends using the overlay2 storage driver, which requires an XFS backing filesystem (created with ftype=1, the mkfs.xfs default on CentOS 7.4).

Docker installation steps:

# mkfs.xfs /dev/sdb1  # dedicated disk for docker in this setup
# mkdir /var/lib/docker
# echo '/dev/sdb1 /var/lib/docker xfs defaults 0 0' >> /etc/fstab
# mount -a
# yum install -y docker
# echo '{"storage-driver": "overlay2"}' > /etc/docker/daemon.json
# systemctl enable docker.service
# systemctl start docker.service
# docker ps  # make sure you can talk to the docker daemon
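A quick way to confirm that docker actually picked up the overlay2 driver (the grep is just a convenience):

# docker info | grep -i 'storage driver'
Storage Driver: overlay2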

DNS setup

To benefit from the OpenShift routing feature I defined a wildcard A record in the pocentek.net DNS zone:

*.oc.pocentek.net. IN A 12.34.56.78

This allows dynamic resolution for all the applications deployed on OpenShift, as long as they are routed using a matching domain name.
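Any name under the wildcard should now resolve to the server address (myapp is an arbitrary example):

$ dig +short myapp.oc.pocentek.net
12.34.56.78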

Playbook configuration

The OpenShift playbook requires only a few variables to be set to perform the installation, but a single-node setup requires a few extra tweaks.

You first need to retrieve the code. I used the 3.6 version of OpenShift in this example:

$ git clone https://github.com/openshift/openshift-ansible.git
$ cd openshift-ansible
$ git checkout --track origin/release-3.6

All the settings are defined in an inventory file. I used the following inventory/hosts file:

[OSEv3:children]
masters
nodes
etcd

[masters]
op1.pocentek.net openshift_public_hostname="{{ inventory_hostname }}" openshift_hostname="{{ ansible_default_ipv4.address }}"

[etcd]
op1.pocentek.net

[nodes]
op1.pocentek.net openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_schedulable=true

[OSEv3:vars]
ansible_ssh_user=root
ansible_become=no

openshift_deployment_type=origin
openshift_release=v3.6

openshift_master_default_subdomain=oc.pocentek.net

openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'gpocentek': 'some_htpasswd_encrypted_passwd'}

openshift_hosted_router_replicas=1
openshift_hosted_registry_replicas=1

openshift_router_selector='region=primary'
openshift_registry_selector='region=primary'
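Before launching the deployment you can check that Ansible reaches the node declared in this inventory (a plain ad-hoc ping, nothing OpenShift-specific):

$ ansible -i inventory/hosts OSEv3 -m ping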

Some variables require a bit of explanation:

openshift_schedulable=true
By default a master node is configured to be ignored by the OpenShift scheduler, so application containers are not created on masters. Since we only have one node, the master must be configured to host application containers.
openshift_router_selector and openshift_registry_selector
Routers (which expose services to the outside world) and the docker registry both run as containers on one or several nodes of the OpenShift cluster. By default they run on infrastructure nodes: dedicated nodes hosting internal services. To make sure that these services are properly scheduled and started on the single-node deployment, we explicitly label the node (region: primary) and configure the router and registry selectors to match this node.

We also make sure that only one replica is scheduled for each service (openshift_hosted_{router,registry}_replicas).

openshift_master_htpasswd_users
In this setup htpasswd authentication is used, and a gpocentek user is created by the playbook. You can generate the encrypted password using the htpasswd tool.
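For example (the username and password below are placeholders; htpasswd is provided by the httpd-tools package on CentOS):

$ htpasswd -nb gpocentek s3cret
gpocentek:$apr1$...

The part after the colon is the value to use in openshift_master_htpasswd_users.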

The node can be deployed using:

$ ansible-playbook -i inventory/hosts playbooks/byo/config.yml

Note

You can find sample inventories in inventory/byo/.
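Once the playbook completes (it takes a while), a couple of quick checks on the node should show it as Ready and the router and registry pods running in the default project:

# oc get nodes
# oc get pods -n default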

Storage

One feature I couldn't manage to deploy with the playbook is persistent storage support. Since the deployment isn't meant for production, I used an NFS server running on the OpenShift machine to provide PVs:

for i in {1..9}; do
    mkdir -p /exports/volumes/vol0$i
    chown nfsnobody:nfsnobody /exports/volumes/vol0$i
    chmod 775 /exports/volumes/vol0$i
    echo "/exports/volumes/vol0$i *(rw,root_squash,no_wdelay)" >> /etc/exports

    # feed the PV definition directly to oc; the heredoc delimiter
    # must start at the beginning of the line
    oc create -f - << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0$i
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /exports/volumes/vol0$i
    server: 172.17.0.1
  persistentVolumeReclaimPolicy: Recycle
EOF
done

# make the new exports effective
exportfs -ra
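Once the loop has run, the nine volumes should be listed as Available (output omitted here):

# oc get pv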

Containers using PVCs created from these PVs must define a custom securityContext:

securityContext:
  supplementalGroups: [65534]
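To illustrate where this goes, here is a minimal pod definition using such a PVC (the pod name, image and claim name are made up for the example):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  securityContext:
    supplementalGroups: [65534]
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: some-claim

Note that the securityContext is set at the pod level, not on the individual container.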

Reference: https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_nfs.html#nfs-supplemental-groups
