In Tuono, we got bored of constantly writing the words “cloud provider”, so we stopped. Instead, the venue refers to the endpoint (cloud, erm… provider) to which you deploy your infrastructure. Currently supported venues are Microsoft Azure and Amazon Web Services. You can think of these as the “where”.
NOTE: Venue refers strictly to the platform. The exact datacenter within the platform (e.g. northeurope (Azure) or us-west-2 (AWS)) is a property of the Blueprint under the “location” object definition.
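As a sketch of that distinction, a Blueprint pins the datacenter while the venue itself is chosen elsewhere. The snippet below is illustrative only – the field names are not verbatim Tuono Blueprint syntax; only the “location” object and the region names come from the note above:

```yaml
# Illustrative only – not exact Tuono Blueprint syntax.
# The venue (Azure vs AWS) is chosen outside the Blueprint;
# the datacenter within that venue lives in the Blueprint.
location:
  region: northeurope   # Azure example; an AWS Blueprint might use us-west-2
```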
In Tuono, Credentials refer to the specific Service Account used to consume cloud resources – more specifically, the service account, the subscription, and the secret key from the chosen venue. These are securely stored as a property of the organization, and any user can leverage them to deploy infrastructure to the chosen venue.
We support three credential schemas, each with its own benefits and use cases.
- Static: This schema can be thought of as a global permission schema. An explicit Service Account is created and this is used for all operations. This is similar to the traditional Service Account construct used in most ecosystems.
- Dynamic: This schema uses your vault to create a one-time credential, used on a per-task basis. It is time bound – it expires rapidly – and can be considered a JIT-generated credential, created to complete a single task. This method is considerably slower than using Static credentials, as the JIT generation takes time, and in Microsoft Azure, for example, less restrictive permissions are required to leverage this schema.
- Short Term: This schema leverages short-term (would you believe it?) credentials with defined expiry dates. Functionally, the implementation in AWS and Azure is similar. In terms of configuration, AWS leverages a persistent token to provide access, whereas with Azure you will be prompted for your SSO credentials whenever you perform an operation, generating a session token as required.

Whichever schema you choose, Credentials are the ‘who’.
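To make the three schemas above more concrete, here is a hypothetical sketch of what each might capture. The field names are purely illustrative and are not Tuono’s actual credential format:

```yaml
# Illustrative only – not Tuono's actual credential format.
static:
  service_account: deploy-sa     # explicit, long-lived account used for all operations
  subscription: <subscription-id>
  secret_key: <secret>

dynamic:
  vault: my-vault                # vault mints a one-time, JIT credential per task;
                                 # it expires rapidly

short_term:
  aws: persistent-token          # exchanged for session credentials
  azure: sso-prompt              # SSO prompt per operation generates a session token
```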
Setting up credentials can be one of the trickier things for a new user to configure, as the service account needs a very specific set of permissions to allow programmatic (API) access. Suffice it to say, it’s not hard to do and only needs to be done once; it’s just that there are a few steps that need to be completed.
If you would like to understand how to do this, you can see our KBs below:
In Tuono, the Environment is the object that comprises a specific combination of Blueprint(s), venue, and credentials. Unlike the more abstracted concepts such as Blueprints, the Environment is a representation of exactly what should be deployed. For example, an Environment may contain:
- A webserver Blueprint, that comprises:
- A single NGINX front-end
- Two back-end VMs
- A public IP
- A PROD network
- An instruction to deploy the application to the Azure datacenter “Europe North”
- An instruction to use specific venue credentials
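Pulling the bullets above together, an Environment definition might look something like the sketch below. The field names are purely illustrative, not exact Tuono syntax; the contents mirror the example above:

```yaml
# Illustrative sketch of the Environment described above – not exact Tuono syntax.
environment:
  blueprints:
    - webserver          # NGINX front-end, two back-end VMs,
                         # public IP, PROD network
  venue: azure
  location:
    region: northeurope  # the "Europe North" datacenter
  credentials: super_secret
```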
If Blueprints are abstract infrastructure definitions, the Environment is the object that is instantiated. While it is possible to have an Environment that is correctly configured, but not instantiated (applied), you can still consider the Environment to be broadly similar to the inventory in other platforms. You can think of this as the ‘why’, i.e. this is a webserver application that forms the core front-end for my new Super Whizz Bang Tool. It needs to be deployed to the Azure Europe North datacenter, using super_secret creds.
As already mentioned, the Environment represents the complete definition of the infrastructure. It is here that multiple Blueprints can be combined to create a valid piece of infrastructure. It may be the case that everything is contained within a single Blueprint, or resources may be referenced in one Blueprint but declared in another. We expect all the correct parts to be present, but we are not prescriptive in how this is done. A good example of this may be a VM that references a network, which is itself declared in a separate network Blueprint.
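For instance, the VM-and-network example above might be split across two Blueprints like this. Again, the syntax is illustrative only; what matters is that the reference in one Blueprint is satisfied by the declaration in the other:

```yaml
# network Blueprint – declares the network (illustrative syntax)
network:
  prod_net:
    cidr: 10.0.0.0/16

# vm Blueprint – references a network it does not declare;
# because the Environment contains both Blueprints, the
# compiled master file resolves the reference.
vm:
  web_vm:
    network: prod_net
```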
When the infrastructure is “applied”, all the Blueprints within the Environment are compiled into a single master file, so there is no difference in the end result, regardless of how the infrastructure objects are defined. While any issues with the Blueprint(s) will be picked up when applied, it is also possible to quickly test the validity using the “compile” operation.
When your Environment includes a Blueprint with defined variables, these are presented in the Environment screen. From here, these values can be input (or defaults taken where they exist), and these values are compiled into the master Blueprint.
If you want to see this in action, you can use the “Compile” button, and you will see the compiled Blueprint, including all of the variable arguments substituted into the Blueprint.
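As a sketch of that substitution, a variable and its compiled result might look like this. Both the variable syntax and the field names here are illustrative, not Tuono’s actual Blueprint grammar:

```yaml
# Before compile – Blueprint with a variable and a default (illustrative syntax)
variables:
  vm_count:
    default: 2
vm:
  backend:
    count: "{vm_count}"

# After compile – the value entered on the Environment screen
# (or the default, where one exists) is substituted into the
# master Blueprint
vm:
  backend:
    count: 2
```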
In the Environment view, the details for each job attempted within the Environment are recorded. This will show both successful and failed jobs, and you can click on the job itself to see a detailed breakdown of the status of each task required to instantiate the Environment.
When you click the job itself, you are presented with the overview screen. This gives you a quick traffic light view of the status of each task.
If required, you can click on the Details tab, which gives you a detailed description of each sub-task and, significantly, displays any terminal errors at the end.
Are you interested in deploying some real infrastructure? Sign up for our free Community Edition by hitting the “Get Started” button and give it a go for yourself.