Provision Tor Relays With Ansible

You’ve already seen how to Run A Tor Relay On Ubuntu Trusty. While straightforward, it’s a task that still requires you to follow a long list of steps to get everything up and running correctly.

If you only have one relay to set up, executing things by hand is fine. The moment you have more than one, it becomes too tedious. I’ve looked at configuration management tools like Chef and Puppet in the past, but their learning curve was pretty steep and I just didn’t have the patience.

I had heard good things about another tool called Ansible, so I decided to try automating the creation of relays with it. I was pretty impressed!

(If you’d rather not read all this text, and want to check out the example directly, head to unindented/provision-tor-relays-with-ansible.)

Setup

I think the easiest way to test our playbooks is to run them against a virtual machine. Download and install VirtualBox, Vagrant and Ansible if you don’t have them already on your system.

Before we start, let’s check which versions we are running:

$ VBoxHeadless --version
Oracle VM VirtualBox Headless Interface 4.3.16
$ vagrant --version
Vagrant 1.6.5
$ ansible --version
ansible 1.8.2

Cool. Now we’ll create a folder for our project. It will contain, at the root level, a Vagrantfile with instructions to create our boxes, and a provisioning folder with everything needed to provision them:

.
|-- provisioning
|   `-- playbook.yml
`-- Vagrantfile

Our Vagrantfile will look like this:

VAGRANTFILE_API_VERSION = '2'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = 'ubuntu/trusty64'

  config.vm.define 'relay1'
  config.vm.define 'relay2'

  config.vm.provision 'ansible' do |ansible|
    ansible.playbook = 'provisioning/playbook.yml'
    ansible.groups = {
      'relays' => ['relay1', 'relay2']
    }
  end
end

We are defining two boxes, relay1 and relay2, which will be running Ubuntu Trusty. They’ll be provisioned through our playbook provisioning/playbook.yml.
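As a side note, Vagrant runs the Ansible provisioner once per machine, which is why you’ll see two separate Ansible runs in the output below. If you’d rather have a single Ansible run cover both relays, a hypothetical tweak (not what we’ll use in the rest of the article) is to limit the provisioner to all hosts, and trigger it with vagrant provision once both boxes are already up:

```ruby
# Hypothetical variant: one Ansible run over both boxes.
# Note: during the initial `vagrant up`, relay2 doesn't exist yet when
# relay1 provisions, so this pattern only works cleanly if you run
# `vagrant up --no-provision` first, followed by `vagrant provision`.
config.vm.provision 'ansible' do |ansible|
  ansible.playbook = 'provisioning/playbook.yml'
  ansible.limit = 'all'
  ansible.groups = {
    'relays' => ['relay1', 'relay2']
  }
end
```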

For now, the file playbook.yml will contain a single task that executes a ping on the boxes:

- hosts: all
  sudo: yes

  tasks:
    - name: check that the box is alive
      action: ping

We can check that this basic setup is working by running vagrant up:

$ vagrant up
Bringing machine 'relay1' up with 'virtualbox' provider...
Bringing machine 'relay2' up with 'virtualbox' provider...
==> relay1: Importing base box 'ubuntu/trusty64'...
==> ...
==> relay1: Running provisioner: ansible...

PLAY [all] ******************************************************************** 

GATHERING FACTS *************************************************************** 
ok: [relay1]

TASK: [check that the box is alive] ******************************************* 
ok: [relay1]

PLAY RECAP ******************************************************************** 
relay1                     : ok=2    changed=0    unreachable=0    failed=0   

==> relay2: Importing base box 'ubuntu/trusty64'...
==> ...
==> relay2: Running provisioner: ansible...

PLAY [all] ******************************************************************** 

GATHERING FACTS *************************************************************** 
ok: [relay2]

TASK: [check that the box is alive] ******************************************* 
ok: [relay2]

PLAY RECAP ******************************************************************** 
relay2                     : ok=2    changed=0    unreachable=0    failed=0   

Looking good.

Structure

We’ll set up our folder structure to follow best practices:

.
|-- provisioning
|   |-- group_vars
|   |   `-- relays   # variables used by relays
|   |-- roles
|   |   |-- common   # role shared by all hosts
|   |   `-- relay    # role for relay hosts
|   |-- playbook.yml
|   `-- relays.yml
`-- Vagrantfile

The file playbook.yml will function as the top-level playbook. It is usually composed of many other playbooks, one for each type of host we’ll be dealing with. In our case, we’ll only have relays, so it will look like this:

- include: relays.yml

The file relays.yml declares which roles will be applied to our relay hosts:

- hosts: relays
  sudo: yes

  roles:
    - common
    - relay

Let’s create those roles.

Common role

Our common role applies to all our hosts, and will just take care of installing OpenNTPD.

Installing OpenNTPD

In the original article, we installed it by running:

$ sudo apt-get install openntpd

Ansible has an apt module that takes care of managing packages. We just need to specify openntpd as the value for the name parameter, and either present or latest for the state parameter. We can then check that the service is started, and that it is set to run on boot, with the state and enabled parameters of the service module.

So, in order to tell Ansible that all hosts should install OpenNTPD, we’ll create the following main.yml file inside the provisioning/roles/common/tasks folder:

- name: register lsb release
  command: lsb_release -cs
  register: lsb_release

- name: ensure openntpd is at the latest version
  apt: name=openntpd state=latest

- name: ensure openntpd is started and enabled
  service: name=openntpd state=started enabled=yes

The first task stores the output of lsb_release -cs in the lsb_release variable (we’ll need it later, when adding the Tor repository). The other two make sure the package is installed, and that the service is started and enabled on boot.

We’ll also create a handler for OpenNTPD under provisioning/roles/common/handlers/main.yml, in case some other task wants to restart the service:

- name: restart openntpd
  service: name=openntpd state=restarted

Relay role

Our relay role applies to hosts that will function as relays. It will install all the necessary packages to get Tor running, and will also provide the corresponding configuration.

Installing Tor

As we saw in the original article, before installing Tor we need to add the GPG key used to sign its packages. Ansible’s apt module doesn’t deal with keys (at the time of writing), so we’ll export it manually to a file named torproject.gpg, and store it in the provisioning/roles/relay/files folder:

$ gpg --keyserver keys.gnupg.net --recv 886DDD89
$ gpg --armor --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 > torproject.gpg

Next, we’ll create the following main.yml file inside the provisioning/roles/relay/tasks folder:

- name: install torproject apt signing gpg key
  apt_key: >
    data="{{ lookup('file', 'torproject.gpg') }}"
    state=present

- name: add torproject to sources list
  apt_repository: >
    repo='deb http://deb.torproject.org/torproject.org {{ lsb_release.stdout }} main'
    state=present
    update_cache=yes

- name: install tor, arm and keyring packages
  apt: name="{{ item }}" state=latest
  with_items:
    - tor
    - tor-arm
    - deb.torproject.org-keyring

These tasks will install the GPG key, add the necessary repo to our sources list (using the lsb_release variable we had stored in the previous step), and install the tor, tor-arm and deb.torproject.org-keyring packages.
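As an aside, registering lsb_release isn’t strictly necessary: Ansible’s fact gathering already exposes the distribution codename. If you don’t mind relying on facts, the repository task could hypothetically be written like this instead (an alternative sketch, not what we use above):

```yaml
# Alternative sketch: use the gathered ansible_distribution_release fact
# ("trusty" on Ubuntu Trusty) instead of the registered lsb_release variable.
- name: add torproject to sources list
  apt_repository: >
    repo='deb http://deb.torproject.org/torproject.org {{ ansible_distribution_release }} main'
    state=present
    update_cache=yes
```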

As we did with OpenNTPD, we’ll also create a handler for Tor in provisioning/roles/relay/handlers/main.yml, in case some other task wants to reload or restart it:

- name: reload tor
  service: name=tor state=reloaded

- name: restart tor
  service: name=tor state=restarted

Configuring Tor

In order to configure Tor, instead of providing a file with hardcoded values, we’ll create the following Jinja2 template torrc.j2 under provisioning/roles/relay/templates:

ORPort {{ tor_or_port }}
DirPort {{ tor_dir_port }}
ExitPolicy {{ tor_exit_policy }}

Nickname {{ inventory_hostname_short }}
RelayBandwidthRate {{ tor_relay_bandwidth_rate }}
RelayBandwidthBurst {{ tor_relay_bandwidth_burst }}

AccountingStart {{ tor_accounting_start }}
AccountingMax {{ tor_accounting_max }}

DisableDebuggerAttachment 0

I’ve defined the default values for these variables in provisioning/group_vars/relays:

tor_or_port: 9001
tor_dir_port: 9030
tor_exit_policy: reject *:*
tor_relay_bandwidth_rate: 1 MB
tor_relay_bandwidth_burst: 2 MB
tor_accounting_start: month 1 00:00
tor_accounting_max: 100 GB
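Since these are plain group variables, they can also be overridden per host. As a hypothetical example (not part of the original setup), a provisioning/host_vars/relay2 file could give the second relay different ports:

```yaml
# Hypothetical per-host overrides for relay2; values are illustrative.
tor_or_port: 443
tor_dir_port: 80
```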

Now we’ll need to add a new task to our provisioning/roles/relay/tasks/main.yml file, so that the configuration ends up in the correct location:

- name: configure tor
  template: >
    src=torrc.j2 dest=/etc/tor/torrc
    owner=root group=root mode=0644
  notify: restart tor

Notice we notify the restart tor handler, so that the service is restarted after the configuration changes. Handlers only run when notified, and they run once at the end of the play, so Tor won’t be restarted on provisioning runs where the configuration hasn’t changed.

Reload

If we now re-provision our boxes, we’ll run all our newly created tasks:

$ vagrant reload --provision
...

TASK: [relay | install torproject apt signing gpg key] ************************
changed: [relay1]

TASK: [relay | add torproject to sources list] ********************************
changed: [relay1]

TASK: [relay | install tor, arm and keyring packages] *************************
changed: [relay1] => (item=tor,tor-arm,deb.torproject.org-keyring)

TASK: [relay | configure tor] *************************************************
changed: [relay1]

NOTIFIED: [relay | restart tor] ***********************************************
changed: [relay1]

...

We can ssh into one of the boxes to check that everything is running as expected:

$ vagrant ssh relay1
vagrant@vagrant-ubuntu-trusty-64:~$ sudo tail -1 /var/log/tor/log
Dec 08 08:48:44.000 [notice] Now checking whether ORPort 5.80.255.23:9001 and DirPort 5.80.255.23:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)

Awesome!

Bonus round: Unattended role

I want these hosts to always stay up to date, so we’ll create a new unattended role that installs and configures unattended-upgrades, and add it to the list of roles in relays.yml:

- hosts: relays
  sudo: yes

  roles:
    - common
    - unattended
    - relay

Installing unattended upgrades

To install the unattended-upgrades package, we’ll create the following main.yml file inside the provisioning/roles/unattended/tasks folder:

- name: ensure unattended-upgrades is at the latest version
  apt: name=unattended-upgrades state=latest

Configuring unattended upgrades

To configure unattended-upgrades, we need to create two files: /etc/apt/apt.conf.d/10periodic and /etc/apt/apt.conf.d/50unattended-upgrades.

For the first one, we’ll create a new task in provisioning/roles/unattended/tasks/main.yml:

- name: create periodic configuration
  template: >
    src=periodic.j2 dest=/etc/apt/apt.conf.d/10periodic
    owner=root group=root mode=0644

Its corresponding template provisioning/roles/unattended/templates/periodic.j2 will look like this (update the package lists, download upgradeable packages and run unattended upgrades daily, and autoclean every seven days):

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

For the second file, we’ll add a task to provisioning/roles/unattended/tasks/main.yml:

- name: create unattended-upgrades configuration
  template: >
    src=upgrades.j2 dest=/etc/apt/apt.conf.d/50unattended-upgrades
    owner=root group=root mode=0644

And its corresponding template provisioning/roles/unattended/templates/upgrades.j2 will look like this:

Unattended-Upgrade::Allowed-Origins {
{% for origin in unattended_allowed_origins %}
  "${distro_id}:${distro_codename}-{{ origin }}";
{% endfor %}
};

Unattended-Upgrade::Package-Blacklist {
{% for package in unattended_package_blacklist %}
  "{{package}}";
{% endfor %}
};

Unattended-Upgrade::Automatic-Reboot "true";

We’ll add two more variables to provisioning/group_vars/relays:

unattended_allowed_origins: [security, updates]
unattended_package_blacklist: []
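With those defaults, the rendered /etc/apt/apt.conf.d/50unattended-upgrades should end up looking roughly like this:

```
Unattended-Upgrade::Allowed-Origins {
  "${distro_id}:${distro_codename}-security";
  "${distro_id}:${distro_codename}-updates";
};

Unattended-Upgrade::Package-Blacklist {
};

Unattended-Upgrade::Automatic-Reboot "true";
```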

And, with that, we’ll be done. Let’s re-provision:

$ vagrant reload --provision
...

TASK: [unattended | ensure unattended-upgrades is at the latest version] ******
ok: [relay1]

TASK: [unattended | create periodic configuration] ****************************
changed: [relay1]

TASK: [unattended | create unattended-upgrades configuration] *****************
changed: [relay1]

...

Once you learn Ansible’s conventions, things are pretty straightforward.

If you run into trouble, compare your solution with mine at unindented/provision-tor-relays-with-ansible.