In this article we are going to walk through how to configure an AWS Application Load Balancer using the AWS console and compare it to deploying an Application Load Balancer (ALB) with Infrastructure as Code in a Tuono Blueprint.
We are going to use concepts from our AWS automation quickstart series on configuring an NGINX server as the base environment for building our AWS Application Load Balancer. The primary difference from that stand-alone guide is that we will now configure NGINX to listen on port 8080 instead of port 80. We can do that with a few small modifications to the runcmd section of the cloud-init script used when deploying the web server Instance.
#cloud-config
package_upgrade: false
packages:
  - nginx
users:
  - name: adminuser
    groups:
      - sudo
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDummyDu= firstname.lastname@example.org
runcmd:
  - echo 'Congratulations on configuring an AWS web server!' > /var/www/html/index.nginx-debian.html
  - sed -i 's/listen 80 default_server;/listen 8080 default_server;/' /etc/nginx/sites-enabled/default
  - sed -i 's/listen \[\:\:\]\:80 default_server;/listen \[\:\:\]\:8080 default_server;/' /etc/nginx/sites-enabled/default
  - systemctl restart nginx
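To make the intent of those two sed commands concrete, here is a small Python sketch of the same substitution applied to a sample of the stock NGINX default site configuration (the sample config text here is illustrative, not the full file):

```python
# Demonstrates what the two sed commands in the cloud-init runcmd achieve:
# rewriting the IPv4 and IPv6 listen directives from port 80 to 8080.
default_conf = """\
server {
    listen 80 default_server;
    listen [::]:80 default_server;
}
"""

patched = default_conf.replace("listen 80 default_server;",
                               "listen 8080 default_server;")
patched = patched.replace("listen [::]:80 default_server;",
                          "listen [::]:8080 default_server;")
print(patched)
```

After the substitution NGINX serves the default site on 8080 for both address families, which is why a simple `systemctl restart nginx` is all that is needed afterwards.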
The existing subnet NACL and security group are also modified to allow HTTP (8080) inbound.
Our starting point is a resource group containing Virtual Private Cloud (VPC) components and our manually configured NGINX server Instance listening on port 8080, with a custom NACL and security group already in place.
Webserver Subnet NACL allowing inbound 8080
Webserver Security group allowing inbound 8080
How do I configure subnets for an AWS load balancer?
An AWS Application Load Balancer requires at least two subnets, each in a different Availability Zone. We will start by creating two additional subnets in the VPC we previously created: navigate to the VPC Subnets page and click “Create subnet”.
Let’s ensure we select the VPC we previously created, give our first subnet a name, and select our first Availability Zone. Tag each subnet with a Key of walkthrough and a Value of webserver so we can keep track of our infrastructure in our Resource Group.
We will give each subnet a /28 CIDR range to work with, one per zone within our VPC. Provide an IPv4 CIDR block of 10.0.1.0/28 for the first subnet.
Repeat those steps to create a second subnet in a different Availability Zone with an IPv4 CIDR block of 10.0.1.16/28.
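Before creating the subnets in the console it can help to sanity-check the CIDR plan. A quick sketch with Python's standard ipaddress module, using the ranges from this walkthrough, confirms that both /28 blocks fit inside the 10.0.0.0/16 VPC, do not overlap, and each contain 16 addresses (AWS reserves 5 per subnet, leaving 11 usable):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnet_a = ipaddress.ip_network("10.0.1.0/28")   # load balancer subnet, AZ 1
subnet_b = ipaddress.ip_network("10.0.1.16/28")  # load balancer subnet, AZ 2

# Both subnets must fall inside the VPC range...
assert subnet_a.subnet_of(vpc) and subnet_b.subnet_of(vpc)
# ...and must not overlap each other.
assert not subnet_a.overlaps(subnet_b)

# A /28 holds 16 addresses; AWS reserves 5 in every subnet.
for subnet in (subnet_a, subnet_b):
    print(subnet, subnet.num_addresses, subnet.num_addresses - 5)
```

Running a check like this before clicking through the console avoids the frustrating "CIDR block overlaps" error after the fact.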
How do I associate subnets with an Internet Gateway?
Subnets used by a public, internet-facing load balancer need a route to an Internet Gateway to reach the internet. We connect each subnet by associating it with a Route Table that targets the VPC Internet Gateway.
On the ‘Route Table’ tab for each load balancer subnet we created, click “Edit route table association”. You can also manage associations from the subnet “Actions” dropdown.
Select the Route Table that contains the VPC Internet Gateway as a Target and click “Save”.
How do I create secure Network Access Control Lists for an Application Load Balancer?
The load balancer subnets we created use the default Network Access Control List (NACL), which allows all traffic in. We can secure the load balancer subnets by creating a custom NACL, as we previously did for our web server Instance in part 2.
From the VPC ‘Network ACLs’ dashboard click “Create network ACL”. Provide the load balancer ACL with a name, select our existing VPC, then click “Create”.
Now edit the inbound rules on the NACL to allow inbound traffic only on port 80, the ephemeral ports, and from our VPC network, denying all other traffic.
| Rule # | Type            | Protocol | Port Range | Source    | Allow/Deny |
|--------|-----------------|----------|------------|-----------|------------|
| 1      | HTTP (80)       | TCP (6)  | 80         | 0.0.0.0/0 | ALLOW      |
| 2      | Custom TCP Rule | TCP (6)  | 1024-65535 | 0.0.0.0/0 | ALLOW      |
An audit of the load balancer subnet NACL displays the following inbound ports.
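NACL rules are stateless and are evaluated in ascending rule-number order; the first match decides the verdict, and anything that matches no rule hits the implicit deny-all. This hypothetical helper (not an AWS API, just an illustration of the semantics) shows why the ephemeral-port rule is needed for return traffic:

```python
import ipaddress

# Inbound rules from the table above: (rule #, port range, source CIDR, verdict).
RULES = [
    (1, (80, 80), "0.0.0.0/0", "ALLOW"),
    (2, (1024, 65535), "0.0.0.0/0", "ALLOW"),
]

def nacl_verdict(port, source_ip, rules=RULES):
    """Evaluate rules in ascending rule-number order; first match wins.
    Traffic matching no rule falls through to the implicit deny (* rule)."""
    for _, (lo, hi), cidr, verdict in sorted(rules):
        if lo <= port <= hi and ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr):
            return verdict
    return "DENY"  # the implicit * rule

print(nacl_verdict(80, "203.0.113.9"))     # client HTTP request
print(nacl_verdict(50000, "203.0.113.9"))  # return traffic on an ephemeral port
print(nacl_verdict(22, "203.0.113.9"))     # SSH from the internet
```

Because NACLs do not track connection state, the responses our load balancer sends back arrive on high-numbered client ports; without rule 2 that return traffic would be silently dropped.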
Next, navigate to the Outbound Rules tab and click “Edit outbound rules”. We are not going to worry about locking down outbound traffic from our load balancer in this subnet, so let’s create rule 100 allowing all outbound traffic.
Now associate this custom load balancer NACL with our two subnets. From the NACL’s ‘Subnet associations’ tab, click “Edit subnet associations”.
Finally, select the two load balancer subnets and click “Edit”.
How do I deploy an Application Load Balancer?
From the EC2 dashboard navigate to ‘Load Balancers’ and click “Create Load Balancer”.
For our purposes of passing our HTTP web server through a load balancer, we are going to create an “Application Load Balancer”.
Give the load balancer a name and keep the default port 80 listener. The load balancer will listen for traffic on port 80 and forward it to a target we are going to configure.
In the ‘Availability Zones’ section select our manually created VPC. This will provide you with a selection of availability zones and subnets associated with the VPC.
Select two availability zones and each subnet we created specifically for the load balancer.
Tag the load balancer with a Key of walkthrough and a Value of webserver, then click “Next: Configure Security Settings”.
The ‘Configure Security Settings’ page warns us that we should really be using HTTPS for secure connections. For this tutorial we are sticking with HTTP and will leave HTTPS and AWS certificates for another day. Let’s move on and click “Next: Configure Security Groups”.
We will create a new load balancer Security Group that only allows port 80. Under ‘Assign a security group’ select “Create a new security group”, change the Type to HTTP, and ensure the Port Range is 80, then click “Next: Configure Routing”.
The ‘Configure Routing’ page lets us configure where traffic coming in on our listener port (80) will be sent. Give the target group a name and keep the Target type set to Instance. Change the port to 8080, which our target web server is already configured to listen on, then select “Next: Register Targets”.
The Register Targets page allows us to select a list of instances that we can register as targets for our load balancer. Select the Instance running our web server listening on port 8080 and click “Add to registered”.
Now that the target is added to the Registered targets list on port 8080, be sure to select it and click “Next: Review”. Registering the target is an easy step to miss in this two-part process!
Review the configuration we walked through and click “Create”.
What does the Application Load Balancer wizard create in AWS?
The load balancer wizard creates a variety of objects that are not deleted when you delete the load balancer. To avoid surprise costs, we will continue our best practice of tagging created objects so we can track them in our resource group.
A load balancer Security Group is created with an inbound allow rule for HTTP port 80 that firewalls traffic into our load balancer. You can tag the security group with a Key of walkthrough and a Value of webserver.
A load balancing Target Group is created with a route to port 8080 and our web server Instance registered. We know the target is good when its status is “healthy”. You can tag the target group with a Key of walkthrough and a Value of webserver.
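That “healthy” status comes from the target group's health checks: the load balancer periodically requests a path on the target port and marks the target healthy after enough consecutive successful responses. A rough sketch of that logic (the threshold and probe helper are illustrative, not the exact ALB defaults):

```python
import urllib.request
import urllib.error

def check_once(url, timeout=5):
    """One health-check probe: True if the target answers with HTTP 2xx."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

def target_state(probe_results, healthy_threshold=3):
    """A target is 'healthy' once the last N probes in a row all succeeded."""
    if len(probe_results) >= healthy_threshold and all(probe_results[-healthy_threshold:]):
        return "healthy"
    return "unhealthy"
```

For our setup the probes go to the target port (8080), not the listener port (80), which is why the web server must actually be listening on 8080 before the target will ever report healthy.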
Public IPv4 interfaces are created for the load balancer in each zone. You can find them in the EC2 ‘Network & Security’ section under ‘Network Interfaces’.
Finally, the load balancer itself! From the load balancer, you can get the DNS name which we will use to see the welcome page of our NGINX server. You can also get additional details and view the Listener that we configured in the wizard.
Open up your favorite web browser and enter the load balancer DNS name to get our custom message. We are connecting to the load balancer listening on port 80 which is forwarding the traffic to our web server listening on port 8080.
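Conceptually, the listener is doing nothing more exotic than accepting connections on one port and relaying bytes to a target on another. This toy single-connection forwarder (local ports only, and emphatically not how the ALB is implemented) illustrates the port 80 to port 8080 hop:

```python
import socket
import threading

def forward_once(listen_port, target_port, host="127.0.0.1"):
    """Accept one connection and relay a single request/response round trip
    to the target port -- a toy stand-in for the listener -> target hop.
    Pass listen_port=0 to let the OS pick a free port; the bound port is
    returned (binding port 80 for real would require elevated privileges)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, listen_port))
    srv.listen(1)

    def relay():
        client, _ = srv.accept()
        with socket.create_connection((host, target_port)) as upstream:
            upstream.sendall(client.recv(65536))   # pass the request through
            client.sendall(upstream.recv(65536))   # relay the response back
        client.close()
        srv.close()

    threading.Thread(target=relay, daemon=True).start()
    return srv.getsockname()[1]
```

The real ALB additionally terminates HTTP, applies routing rules, and balances across all healthy registered targets; this sketch only shows the port mapping that our listener and target group configuration expressed.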
Our resource group contains the following objects that we have manually tagged in AWS.
Tuono eliminates the manual steps with concepts you already understand.
Writing infrastructure as code does not have to be complicated. Managing security and inventory deployed into AWS also does not have to be a long manual process. Tuono can help remove all of this complexity.
When we previously walked through how to create an AWS web server we introduced networking, security and how to deploy and customize a VM in a Tuono Blueprint. To deploy an application load balancer we will continue to grow those concepts with the introduction of a few new objects.
A service is defined to declare traffic flow between our resources. In this example, we are creating an external service for HTTP over port 80 and an internal service for HTTP over port 8080. Our defined services can now be used in our firewall, VM and load balancer objects.
service:
  external-http:
    port: 80
    protocol: http
  internal-http:  # traffic for the web service internally
    port: 8080
    protocol: http
We modify our firewall to use the internal service that opens port 8080.
firewall:
  fw-internal-access:
    rules:
      - services: internal-http
        to: self
      - protocols: ssh
        to: self
Then we provide the internal service to our VM object which allows traffic to flow at the Instance security group level.
vm:
  webserver-var:
    nics:
      internal:
        provides: internal-http
A Tuono balancer object defines a name for our application load balancer and the VPC it connects to, and uses our defined services to create a listener on external port 80 that forwards to internal target port 8080, where our NGINX server is listening.
This simple code block creates our application load balancer while configuring the listener and registering the target along with the HTTP settings, tagging, and security rules!
balancer:
  balancer-walkthrough:
    network: vnet-walkthrough
    scope: public
    routes:
      - from: external-http
        to: internal-http
Your Resource group now has the following objects, complete with an application load balancer serving NGINX content. As you can tell, this is another instance where Tuono IaC makes a multi-step AWS process simple and secure while allowing for proper resource group tracking!
The Blueprint to deploy this entire infrastructure is outlined below:
#
# This is an example blueprint that demonstrates the creation of a
# webservice through an Application Load Balancer
#
---
variables:
  admin_username:
    description: The username for the administrative user.
    type: string
    default: adminuser
  admin_public_key:
    description: The OpenSSH Public Key to use for administrative access.
    type: string
  your_caption:
    description: Web server message
    type: string
    default: "Congratulations on configuring an AWS web server!"

location:
  region:
    my-region:
      country: USA
      area: northwest
  folder:
    aws-walkthrough:
      region: my-region

networking:
  network:
    vnet-walkthrough:
      range: 10.0.0.0/16
      scope: public
  subnet:
    subnet-walkthrough:
      range: 10.0.0.0/24
      network: vnet-walkthrough
      firewall: fw-internal-access
      scope: public
  firewall:
    fw-internal-access:
      rules:
        - services: internal-http
          to: self
        - protocols: ssh
          to: self
  protocol:
    ssh:
      ports:
        - port: 22
          proto: tcp
  ## Part 5 adds a load balancer that will use the created services to
  ## direct port 80 traffic to port 8080 on the webserver VM
  balancer:
    balancer-walkthrough:
      network: vnet-walkthrough
      scope: public
      routes:
        - from: external-http
          to: internal-http
  service:
    external-http:
      port: 80
      protocol: http
    internal-http:  # traffic for the web service internally
      port: 8080
      protocol: http

compute:
  image:
    bionic:
      publisher: Canonical
      product: UbuntuServer
      sku: 18.04-LTS
      venue:
        aws:
          # if provisioning fails due to image not found, go to:
          # https://cloud-images.ubuntu.com/locator/ec2/
          # and search for "bionic amd64 ebs" and also add your AWS zone
          # name like "us-west-2"
          image_id: ami-04bb0cc469b2b81cc
  vm:
    webserver-var:
      cores: 1
      memory: 1 GB
      image: bionic
      nics:
        internal:
          ips:
            - private:
                type: dynamic
              public:
                type: static
          firewall: fw-internal-access
          subnet: subnet-walkthrough
          provides: internal-http  # Attaches the internal service we created for port 8080
      configure:
        admin:
          username: (( admin_username ))
          public_key: (( admin_public_key ))
        userdata:
          type: cloud-init
          content: |
            #cloud-config
            package_upgrade: false
            packages:
              - nginx
            users:
              - name: (( admin_username ))
                groups:
                  - sudo
                sudo: ALL=(ALL) NOPASSWD:ALL
                ssh_authorized_keys:
                  - (( admin_public_key ))
            runcmd:
              - echo '(( your_caption ))' > /var/www/html/index.nginx-debian.html
              - sed -i 's/listen 80 default_server;/listen 8080 default_server;/' /etc/nginx/sites-enabled/default
              - sed -i 's/listen \[\:\:\]\:80 default_server;/listen \[\:\:\]\:8080 default_server;/' /etc/nginx/sites-enabled/default
              - systemctl restart nginx