Ansible control centre

This post describes one way to configure a computer to centrally manage an IT training lab using Ansible. The control centre computer could be physical or virtual, headless or not, a laptop, a desktop or anything else: the only real requirements are that the device can bring up a CLI (command line interface) in a terminal, that the vi text editor is installed, and that normal SSH infrastructure is in place.

One of Ansible’s greatest strengths is that it leverages the core Unix service SSH to secure all network communications. SSH keys allow scriptable, secure, non-interactive logins to remote machines without passwords. Compared to an attacker trying to guess a human-memorable password, brute-forcing a 2048-bit SSH key is practically impossible. Systems that go the extra step and only allow SSH access with keys are considerably more secure than password-authenticated systems.
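Enforcing key-only access happens on each remote machine’s SSH daemon. A minimal sketch of the relevant /etc/ssh/sshd_config directives (assuming stock OpenSSH; make sure a working key is already in place before disabling passwords, then restart sshd, or you risk locking yourself out):

PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no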

Software

Assuming that we are starting with a clean “Minimal” install of CentOS 7.3, we begin by updating the OS and installing the required software. Note that Ansible is packaged in the EPEL repository, so epel-release must be installed in its own transaction before yum can find the ansible package:

# yum update
# yum install epel-release
# yum install ansible
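A quick version check confirms the installation and shows which Ansible release EPEL delivered (the exact version will vary):

# ansible --version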

If we’re on VMware, also install Open VM Tools:

# yum install open-vm-tools

Next, we install python2-pip, which is in turn used to install several Python packages providing support for management of ESX (pysphere), Windows (pywinrm) and Digital Ocean cloud (dopy) machines. Note that the version specifiers must be quoted, otherwise the shell interprets >= as output redirection:

# yum install python2-pip
# pip install dopy
# pip install 'pysphere>=1.7'
# pip install 'pywinrm>=0.2.2'
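As a quick sanity check, the modules can be imported in one line (assuming the usual import names; note that the pywinrm package is imported as winrm):

$ python -c 'import dopy, pysphere, winrm; print("ok")'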

Sysadmin User

Next, create a sysadmin user account. This user will own and run Ansible scripts, though often with elevated privileges via sudo or su on remote machines:

# useradd sysadmin
# passwd sysadmin

It is assumed that all Unix machines managed by Ansible will have their own corresponding sysadmin user account. This simplifies Ansible operations, avoids routine use of the root user account, and should allow root access via SSH to be disabled altogether in the interests of better security.
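For Ansible to escalate privileges non-interactively on managed machines, one common approach is a sudoers drop-in for the sysadmin user. A sketch, assuming passwordless sudo is acceptable in your environment (a stricter policy can require a password and rely on Ansible’s --ask-become-pass instead):

# echo 'sysadmin ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/sysadmin
# chmod 440 /etc/sudoers.d/sysadmin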

SSH Keys

Next, become the sysadmin user (use su or log in as that user) and generate an SSH public/private key pair:

$ ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/home/sysadmin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/sysadmin/.ssh/id_rsa.
Your public key has been saved in /home/sysadmin/.ssh/id_rsa.pub.
The key fingerprint is:
5f:f6:11:c2:78:b9:8f:4e:f6:7e:29:a6:55:4c:f8:f8 sysadmin@example.com
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|           o ..  |
|          . =... |
|           . o=. |
|        S   +..+ |
|         . o +o. |
|          . +.oE.|
|           +.+ ..|
|           .+.+. |
+-----------------+

While it is possible to leave the passphrase blank, this is not recommended. A compromised key with no passphrase would allow an attacker easy access to any machine which has authorised that key without even needing a password.

With a passphrase on the SSH key, it is convenient to set up ssh-agent and add the key at the beginning of each login session in order to avoid repeatedly typing the passphrase. Most SSH-related programs like ssh-add assume a default key path of ~/.ssh/id_rsa unless provided with an explicit alternative:

$ ssh-agent bash
$ ssh-add
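These two commands can be wrapped up in the sysadmin user’s ~/.bash_profile so that an agent is available from the start of each session. This is only a sketch (tools such as keychain handle agent reuse across logins more robustly):

# Start an agent for this session if one is not already running
if [ -z "$SSH_AUTH_SOCK" ]; then
    eval "$(ssh-agent -s)" > /dev/null
    ssh-add ~/.ssh/id_rsa
fi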

The public SSH key in ~/.ssh/id_rsa.pub must be copied to /home/sysadmin/.ssh/authorized_keys (note the US spelling) on each remote Unix machine to which access is required. This can be done manually using scp, but it is far easier to use the ssh-copy-id tool:

$ ssh-copy-id sysadmin@remote.machine
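To confirm that key authentication works end to end, attempt a login with password authentication explicitly disabled; it should succeed without any password prompt:

$ ssh -o PasswordAuthentication=no sysadmin@remote.machine true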

SSH Keys on ESXi

For any VMware ESXi hypervisors that are part of the IT lab ecosystem, setting up SSH keys is slightly more complicated. Because ESXi stores SSH keys in an in-memory-only filesystem, any keys added to a running machine do not survive a reboot. The procedure described here is Nick Charlton’s and is almost certainly not approved of by VMware, but it works because the ESXi hypervisor is basically another form of Unix.

Firstly, you need a sysadmin user on the host with Administrator privileges, or a custom role with sufficient permissions to perform the necessary tasks.

Next, SSH access needs to be enabled on the host, either from the DCUI’s Troubleshooting Options or from the host’s management UI.
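If console access to the host is already available, the SSH service can also be enabled from the ESXi shell (these vim-cmd calls are the usual mechanism, though it is worth verifying them against your ESXi version):

# vim-cmd hostsvc/enable_ssh
# vim-cmd hostsvc/start_ssh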

Next, we get around VMware’s in-memory filesystem limitation by adding a short script to /etc/rc.local.d/local.sh on the host, which rewrites the necessary SSH public key file after each reboot. Note the location and format of keys on ESXi:

#!/bin/sh

# Recreate the authorized_keys file, which ESXi keeps on an
# in-memory filesystem that is lost at every reboot
mkdir -p /etc/ssh/keys-<username>
echo "ssh-rsa AAAAB3..." > /etc/ssh/keys-<username>/authorized_keys
chmod -R 700 /etc/ssh/keys-<username> && chmod 600 /etc/ssh/keys-<username>/authorized_keys
chown -R <username> /etc/ssh/keys-<username>

exit 0
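Because local.sh only runs at boot, run the script once by hand on the host so that the key takes effect immediately; a key-based login from the control centre should then succeed:

# sh /etc/rc.local.d/local.sh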

SSH and Digital Ocean

If using Digital Ocean as a cloud provider, SSH keys can be set up to enable creation and management of entire fleets of virtual servers via the Digital Ocean API.

First, upload a copy of the public key ~/.ssh/id_rsa.pub to your Digital Ocean account’s control panel. This can also be done using Ansible, as described in the DigitalOcean API guide listed in the references.

Next, generate an API token in your Digital Ocean account’s control panel. The best way to store this token on the control centre computer is to export it as the environment variable DO_API_TOKEN in the sysadmin user’s ~/.bash_profile file:

export DO_API_TOKEN="abc123............."
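After reloading the profile (source ~/.bash_profile or log in again), the token can be sanity-checked with a direct call to the DigitalOcean v2 API; a valid token returns a JSON account summary rather than an authentication error:

$ curl -s -H "Authorization: Bearer $DO_API_TOKEN" https://api.digitalocean.com/v2/account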

Of course, most if not all of this setup work can be automated by Ansible as new managed hosts are added to the network.

At this point we are ready to begin configuration of Ansible itself, which is the subject for another post.

References

Setting Up Passphrase-protected SSH Keys Without Repetitive Typing

Persistent SSH keys with ESX6

How To Use the DigitalOcean API v2 with Ansible 2.0 on Ubuntu 14.04