How to deploy a Docker host with Tuono

This article describes how to build a Docker host with a single Tuono Blueprint. The same Blueprint can easily be extended to build out a whole Docker Swarm.

During a recent discussion with a customer at a software development company, they explained that they wanted to fully automate a Docker deployment and the required infrastructure in the public cloud. They had several Docker images containing all the dependencies they needed to work with a range of languages and frameworks. What they wanted was to automate deploying the Docker infrastructure and pulling down the right container image for whichever language they happened to be working with.

This got me thinking…

Starting with the default Tuono “tutorial” Blueprint, we could easily modify this to become a Docker Blueprint using some standard Tuono functionality. We could automate the deployment of the correct Docker image from the container library based on the current requirement.

For this example, I created a Docker image from scratch, to demonstrate the principle. If you already have a few Docker images, you know how all this works and can make the appropriate changes in the “userdata” section, but if you don’t, you’ll have your own custom image at the end of this… the batteries are included.
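
To make that concrete: if you do already have images published to a registry, the only part of the Blueprint you’d change is the tail end of the “userdata” shown below, swapping the local build for a pull. A rough sketch of what that swap might look like (the registry path and tag here are placeholders, not anything defined in the Blueprint):

    ## Pull and run a pre-built image instead of building one on the host
    ## (the registry path and tag below are example placeholders)
    docker pull registry.example.com/devimages/python-toolchain:latest
    docker run -d -p 8080:22 -t -i registry.example.com/devimages/python-toolchain:latest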


Let’s dive straight into the Blueprint, and then we can talk about what it does.

# This is an example blueprint which demonstrates the
# creation of a Docker host with a public ip address.
# The machine can be accessed via:
# # ssh <admin_username>@<ip>
# And the Docker container can be accessed via:
# # ssh <admin_username>@<ip> -p 8080
    description: The username for the administrative user.
    type: string
    default: adminuser
    description: The password for the Docker container
    type: string
    description: The OpenSSH Public Key to use for administrative access.
    type: string
    type: integer
    preset: true
    type: integer
    preset: true

      number_of_cores: 1
      memory_in_gb: 2
      number_of_cores: 2
      memory_in_gb: 1
      aws: eu-west-1
      azure: northeurope
      region: datacenter

      public: true
      network: testing
      firewall: only-secure-access
      public: true
        - port: 22
          proto: tcp
        - port: 443
          proto: tcp
        - port: 8080
          proto: tcp
        - protocols: secure
          to: self
      publisher: Canonical
      product: UbuntuServer
      sku: 18.04-LTS
          image_id: ami-06868ad5a3642e4d7
      cores: ((number_of_cores))
      memory: ((memory_in_gb)) GB
      image: bionic
          size: 64 GB
            tag: base_disk
            - private:
                type: dynamic
                type: static
          firewall: only-secure-access
          subnet: public
        wicked: cool

          username: ((admin_username))
          public_key: ((admin_public_key))

          type: shell
          content: |

            ## Configure admin_username on the host machine
            userid=$(id -u ((admin_username)))
            if [ -z "$userid" ]; then
                set -e
                adduser --gecos "" --disabled-password ((admin_username))
                cd ~((admin_username))
                mkdir .ssh
                chmod 700 .ssh
                echo "((admin_public_key))" > .ssh/authorized_keys
                chmod 600 .ssh/authorized_keys
                chown -R ((admin_username)).((admin_username)) .ssh
                usermod -aG sudo ((admin_username))
                echo "((admin_username))   ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
                set +e
            fi

            ## Update the repositories and upgrade
            sudo apt update
            sudo apt upgrade -y

            ## Install and configure Docker
            sudo apt install -y docker.io
            sudo usermod -aG docker ((admin_username))
            sudo systemctl enable --now docker

            ## Configure the Docker instance
            cd /home/((admin_username))
            mkdir dockerbuild

            ## Create Dockerfile and do some bootstrapping on the container machine
            echo "FROM ubuntu:20.04
            # Install the required dependencies
            RUN apt-get update && apt-get install -y openssh-server
            RUN mkdir /var/run/sshd
            # Add the admin_username
            RUN useradd ((admin_username))
            RUN echo '((admin_username)):((container_password))' | chpasswd
            # Fix some ENV issues
            ENV NOTVISIBLE='in users profile'
            RUN echo 'export VISIBLE=now' >> /etc/profile
            # Open up port 22
            EXPOSE 22
            # This is a workaround to start (and keep up) SSHd
            CMD service ssh start && while true; do sleep 3000; done" > dockerbuild/Dockerfile

            ## Build and run the Docker image
            docker build -t dockerfile dockerbuild
            docker run -d -p 8080:22 -t -i dockerfile

As you can see, this is a fairly standard VM Blueprint. In this case, I decided to create a separate preset for Azure and for AWS. The only reason for this is to make it possible to deploy in the AWS free tier (2 CPUs and 1 GB of RAM, i.e. a t3.micro), a size that has no equivalent in Azure. In real life, you’d remove the presets altogether, set a sensible amount of CPU and memory, and it would deploy to both Azure and AWS just fine. The other non-standard part of the base template is the use of port 8080: this is the host port that will be mapped to the Docker container, to container port 22 in this case. With the explanation complete, let’s talk about the interesting bit.


For this example, it’s the “userdata” that does most of the heavy lifting. Userdata is interesting in that it allows you to use any shell-supported scripting language to bootstrap your infrastructure. It’s extremely useful for initial infrastructure configuration (see the webservice example) or, as in this case, for initial setup where state is not a concern. I should also add that cloud-init is fully supported too, but I specifically used bash in this example on the assumption that it may be more familiar to people who want to extend it.

So, what’s going on in the userdata? The first section simply configures the admin_username that was defined as a variable. When we use userdata, we’re expected to do all of the initial configuration here, so we add the user, enable SSH key authentication, configure sudo, and add them to the sudoers file. You can consider this section boilerplate; it should be present in some form in almost every case where you leverage userdata. From here, we:

  • Update the Ubuntu instance
  • Install Docker
  • Add our admin_username to the docker group. This ensures that the containers are accessible to this user (a quick verification sketch follows this list).
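
Because this user setup runs unattended at first boot, it’s worth a quick sanity check once you can log in. Something along these lines (plain shell, nothing Tuono-specific) should confirm the boilerplate did its job:

    # Run on the Docker host, logged in as the new admin user
    id                              # should list the sudo and docker groups
    sudo -n true && echo "passwordless sudo OK"
    cat ~/.ssh/authorized_keys      # should contain the public key from the Blueprint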

The next section is where we write out our Dockerfile. In this example we do only the very minimum to make it useful:

  • Update the Ubuntu container
  • Install OpenSSH
  • Add the admin_username
  • Update /etc/profile
  • Finally, start SSH at runtime, using a workaround to keep it up and accessible (see the build-and-test sketch after this list).
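
The Blueprint builds and starts this image automatically at boot, but while iterating on the Dockerfile it’s handy to rebuild and test it by hand on the host. A minimal sketch, assuming you’re logged in as the admin user and the container started at boot has been stopped (otherwise port 8080 will already be taken):

    cd ~/dockerbuild
    docker build -t dockerfile .               # rebuild the image from the Dockerfile
    docker run -d -p 8080:22 -t -i dockerfile
    ssh -p 8080 <admin_username>@localhost     # log in with the container_password value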

If you want to take it for a test-drive:

To access the Docker host:

# ssh <admin_username>@<ip>

And to access the container:

# ssh <admin_username>@<ip> -p 8080

All the IP details can be viewed by clicking the “Inventory” button on the Environment Screen.
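
If the connection on port 8080 is refused, it’s usually quickest to check on the host that the container came up and that the port mapping took effect, for example:

    # On the Docker host
    docker ps --format '{{.Names}}  {{.Ports}}'   # should show 0.0.0.0:8080->22/tcp
    sudo ss -tlnp | grep 8080                     # confirms the host is listening on 8080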

If you want to go ahead and try this out, feel free to take Community Edition for a spin. It’s entirely free, with no credit card required, so it’s a good way to have some fun with this example. Stay tuned for more fun examples.
