AWS Elastic Beanstalk or Azure App Service Part 1


Let's have a look at two competing services: Amazon Elastic Beanstalk and Azure App Service. Both serve the same purpose, but which one is the better pick? Normally we recommend sticking to one cloud platform, as a multi-cloud strategy can make your architecture overly complicated. But sometimes we have to branch across more than one.

A High-Level Overview

Let's start at the top and work our way down. What are some important points to consider?

Framework Support

Not much to comment on here, as both services cover the following frameworks: .NET, Node, Java, PHP, Python, Ruby and Docker. Azure App Service adds .NET Core support on top of that, though this only provides extra benefit if your websites or apps are built on the newer framework.

CI/CD and DevOps Capability

Both are very good in this area. Azure App Service has Azure DevOps, and AWS has DevOps developer services through the AWS console. Azure DevOps can help standardise your pipelines through a common interface and build definitions. AWS also provides a standard interface for setting up build, test and deploy, but there is more work required when customising a pipeline. Whether you're using on-premises Jenkins, Puppet deployments, on-premises-to-cloud workflows or mobile DevOps, Azure DevOps can use a build definition to manage all kinds of pipelines, and it offers great reusable templates and build steps for different frameworks.

Visual Studio and Visual Studio Code provide great integration with Azure DevOps.

Note

Now that Microsoft has acquired GitHub, we are going to see some sweet new integration between Azure DevOps and GitHub.

What does AWS offer with Elastic Beanstalk?

CodeCommit + CodeBuild + CodeDeploy + CodePipeline = AWS DevOps

AWS provides a nice set of DevOps tools through the console.

Amazon has a very powerful set of developer services for building the same pipelines. Through the AWS Console we have developer services to manage build, test and deploy for any pipeline. Take a look at the following screenshot of CodeCommit, AWS's source control service. Here we create repositories and push source code just as we would to any GitHub, Bitbucket or VSTS repository. On the left-hand side we also have access to the other AWS DevOps services:

In CodeCommit we create repositories that can be linked with IAM users.
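As a quick illustration (the repository name and region below are hypothetical), cloning a CodeCommit repository over HTTPS looks much like any other Git remote, provided your IAM user has Git credentials or the AWS credential helper configured:

# Clone a CodeCommit repository over HTTPS (placeholder region and repo name)
git clone https://git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/elastic-beanstalk-docker-python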

Elastic Beanstalk can utilise these services through multiple DevOps solutions. Some might argue the AWS DevOps UI is simpler than Azure DevOps, but CodeBuild doesn't offer the templated pipelines that Azure DevOps does.

Let's look at template pipelines in Azure DevOps. Through the use of build steps, Azure DevOps makes it easy to combine multiple frameworks into a single pipeline.

Get up and running fast with preconfigured pipeline templates with Azure DevOps.

Security

Azure App Service can integrate with Active Directory (AD), and Elastic Beanstalk can integrate with Identity and Access Management (IAM). We can tie particular permissions to certain users for access control over each of these services. It's important to set up the necessary user permissions when running workloads in production, i.e. public-facing websites, SSH access, port access, etc.
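As a rough sketch of what this looks like on each side (the user name, email and resource group below are made up for illustration), granting a user deployment rights can be done from the command line:

# AWS: attach the managed Elastic Beanstalk policy to an IAM user (hypothetical user name)
aws iam attach-user-policy --user-name deploy-user --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkFullAccess

# Azure: assign the built-in Website Contributor role over a resource group (hypothetical names)
az role assignment create --assignee deploy-user@example.com --role "Website Contributor" --resource-group my-web-apps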

Networking

We can also bind these services to private networks. With Azure App Service we can isolate the service inside a virtual network using an App Service Environment (ASE). With AWS we can likewise isolate the Elastic Beanstalk service inside a VPC. Networking matters here because both ASE and VPC let developers layer firewall rules over the traffic their public-facing apps receive. These measures can also help mitigate DDoS attacks.

The architecture above demonstrates an App Service hosting a web application secured in a virtual network with firewall rules to open a port for a public endpoint.

AWS architecture demonstrating the ability to isolate an Elastic Beanstalk application inside a VPC.
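For reference, and assuming you have the EB CLI and a recent Azure CLI installed (all IDs and names below are placeholders), each service can be pushed into a private network from the command line:

# Elastic Beanstalk: create an environment inside an existing VPC (placeholder IDs)
eb create python-env --vpc.id vpc-0abc1234 --vpc.ec2subnets subnet-0aaa1111,subnet-0bbb2222 --vpc.elbsubnets subnet-0ccc3333

# Azure: create an App Service Environment inside an existing virtual network (placeholder names)
az appservice ase create --name my-ase --resource-group my-web-apps --vnet-name my-vnet --subnet ase-subnet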

Let’s get Technical

Deploying a Docker Container into Elastic Beanstalk

We've covered some high-level points about these services; now let's get technical with Elastic Beanstalk and deploy a containerised Python application.

Now we have two choices here.

  1. You can retrieve the source code for the project from GitHub, create a repository on Docker Hub and containerise the application yourself.
  2. Or, simply grab the container from our repository and deploy it into Elastic Beanstalk.

Let’s start with method 1.

Method 1

Our first step is to create a folder for our dev project. Open up your Mac terminal, or PowerShell on Windows.

Note

For this walkthrough we are going to be using macOS.

In the terminal, move to a location suitable for holding the files for this project. We've simply created a new folder called elastic-beanstalk-docker-python.
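If you want to follow along with the same folder name, it can be created and entered like so:

mkdir elastic-beanstalk-docker-python
cd elastic-beanstalk-docker-python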

From inside the new folder, grab the repository from GitHub using the following commands.



git init

git remote add origin git@github.com:flusharcade/elastic-beanstalk-docker-python.git

git fetch --all

git checkout master

Now that we have the source code, let's containerise it.



docker build .

docker tag {container-id} {repository-url}

docker push {repository-url}

The container id will be printed out in the terminal after we build the container using



docker build .

It should look like this:



Successfully built {container-id}

The final command will push to your repository on Docker Hub. If you haven't got an account, you can create one here. It's free, and it's easy to set up a repository. Follow this walkthrough for setting up a public repository.
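As a concrete (purely hypothetical) example, assuming your Docker Hub username is yourdockerid and the build printed container id abc123def456, the tag and push steps would look like this – you'll need to log in to Docker Hub first:

# Log in to Docker Hub, then tag and push the freshly built image (hypothetical id and username)
docker login
docker tag abc123def456 yourdockerid/elastic-beanstalk-docker-python:latest
docker push yourdockerid/elastic-beanstalk-docker-python:latest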

Our last step is to add the correct Docker Hub repository to the AWS JSON config file for the deployment. Open up the file in the source code called Dockerrun.aws.json. This is the deployment config file which Elastic Beanstalk uses to pull and run the container from Docker Hub.

Replace the {repository-url} text with your Docker Hub repository URL and save the config file.


{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "{repository-url}",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}

That's all for method one; now we are ready to deploy to Elastic Beanstalk.

Method 2

We don't have to do any pre-configuration for the containerised app; we can jump straight into the AWS console and deploy it. But let's look at how we can run the container locally before we deploy.

We must first retrieve the Python app container from Docker Hub. To do this, visit this link and open up your Mac terminal or PowerShell on Windows.

As in method 1, let's move to a folder for holding this project's files – elastic-beanstalk-docker-python. Then simply grab the docker pull command from the repository URL:


docker pull flusharcade/elastic-beanstalk-docker-python

Once we punch this command into the terminal, it will start downloading all the layers locally, as seen below:

When the process is complete, we can run the container locally with the following:


docker run -d -p 5000:5000 flusharcade/elastic-beanstalk-docker-python:latest

We can actually skip the pull and just use this second command to retrieve and run the container. The container is an API running an HTTP server with a simple GET endpoint that returns a string. We must expose port 5000, as this is the port the app's HTTP server listens on. To test the locally running version, simply visit localhost:5000 and you should see the following in the browser:

Very little effort required to run a containerised python app locally.
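If you prefer the terminal to the browser, you can confirm the container is up and hit the endpoint like this (the exact response string depends on the app):

# Confirm the container is running and the port mapping is in place
docker ps

# Call the GET endpoint exposed on port 5000
curl http://localhost:5000/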


Elastic Beanstalk

Now to the fun part: Elastic Beanstalk. Open up the AWS console and visit Elastic Beanstalk under the Compute section. Once we select it, we're taken to the Elastic Beanstalk portal for deployments.

Look for the Create New Application button at the top right of the page.

On the next screen we simply add the name and click Create.

The structure of Elastic Beanstalk starts with an application, under which we create environments. In Elastic Beanstalk we use environments; in Azure App Service we have deployment slots. There are slight differences between the two, but they serve the same purpose.
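For those who prefer the command line, the EB CLI mirrors the same application/environment structure we're about to click through in the console – a rough sketch, assuming the EB CLI is installed and the application, environment and region names are yours to choose:

# Register an application that uses the Docker platform (placeholder name and region)
eb init -p docker elastic-beanstalk-docker-python --region ap-southeast-2

# Create an environment inside that application
eb create python-env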

Now to the Environments screen, which is where we will deploy our container. Simply select the Actions button, then Create environment.

This is where Elastic Beanstalk is awesome: all we have to do is pass it the JSON config file and AWS does the rest. First we want to select the Web Server Environment tier.

Then on the next page, we set the environment name.

Versioning

All very basic stuff. Then we simply set the preconfigured platform to Docker and upload the JSON config file in the Upload your code section. We also have to be careful with version labels: every time we redeploy to Elastic Beanstalk, a unique version label must be specified. Even for environments we have previously deleted, the same version label can't be reused.
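If you later script redeployments with the EB CLI, a simple way to guarantee a unique version label is to derive it from a timestamp – a small sketch, assuming the EB CLI is set up against your environment:

# Deploy with a version label built from the current timestamp, e.g. v-20190101120000
eb deploy --label "v-$(date +%Y%m%d%H%M%S)"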

Note

We also have the option to upload source code directly to an Elastic Beanstalk application, but it can be very picky about folder structure. We once spent hours just getting the file structure correct, so make sure you are diligent when following this tutorial.

Now deploy and watch:

When the deployment completes you will be given a public url for the live python application.

Elastic Beanstalk environments can fall over quite easily if not configured correctly. To configure auto scaling, networking and monitoring, visit the Configuration section.

We will go through this in detail in Part 2.

Note

With our recent project running day-trading algorithms on AWS, we've seen some limits of Elastic Beanstalk, but that's because we hadn't configured scaling correctly. ALWAYS REMEMBER – UNDERSTAND YOUR SERVICE! You don't want to misconfigure the service or pay for and consume resources that aren't needed.

Deploying to Azure App Service

There are a few methods for deploying to Azure App Service and we are going to show two of them:

  1. Deploy via an Azure Container Registry
  2. Deploy via Docker Hub

Using Azure CLI to Deploy an Azure Container Registry

First we must create a Container Registry on Azure. We can do this either through the CLI or via the portal. To create a Container Registry using the Azure CLI, first create a resource group:

az group create --name elastic-beanstalk-docker-python --location australiaeast

You can swap the name and location for something of your choice. Then we deploy the Container Registry using the following:

az acr create --resource-group elastic-beanstalk-docker-python --name elasticbeanstalkdockerpython --sku Basic 

Done.

Deploy an Azure Container Registry via the Azure Portal

We also have the option to deploy via the Azure portal. Let's jump into the portal, select Create a Resource from the left panel, type Container Registry and fill out the following details:

That's all. It should deploy fairly quickly.

Well done, we now have an Azure Container Registry to push container images to. A Container Registry works just like a repository on Docker Hub. We also have the ability to create private and public registries.

Now we must push a container image to the registry. Jump into your Mac terminal and log in to your registry with the following command:

docker login myregistry.azurecr.io -u xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -p myPassword

A warning may appear when using the above command:

WARNING! Using --password via the CLI is insecure. Use --password-stdin.

This is because it's unsafe to pass a password visibly in a terminal command: anyone can scroll back through the command history and see it.

To avoid this we can do the following:

cat ~/my_password.txt | docker login myregistry.azurecr.io --username foo --password-stdin

Place the password inside the .txt file.

Note

Keep in mind this is still not very secure, but it's more secure than placing the password in the command itself.

To obtain your registry credentials and path, visit your Container Registry service through the Azure portal and from the left panel select Access Keys.

Here copy the login server, username and one of the passwords at the bottom.
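If you'd rather stay in the terminal, the same details can be pulled with the Azure CLI – assuming the registry name from earlier and that you're happy to enable the admin user on the registry:

# Enable the admin user so username/password credentials exist
az acr update --name elasticbeanstalkdockerpython --admin-enabled true

# Print the login server, e.g. elasticbeanstalkdockerpython.azurecr.io
az acr show --name elasticbeanstalkdockerpython --query loginServer --output tsv

# Print the admin username and passwords
az acr credential show --name elasticbeanstalkdockerpython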

Once we've run the docker login command above, we can push to this registry. But first we must tag the latest container using the following:

docker tag {container-image-id} elasticbeanstalkdockerpython.azurecr.io/elastic-beanstalk-docker-python:latest

Since we have changed the registry location, we must retag the image so it pushes to the right place – exactly the same idea as setting remotes on a Git repository.

Now let’s push to the registry:

docker push elasticbeanstalkdockerpython.azurecr.io/elastic-beanstalk-docker-python:latest

Deploying to App Service

Since we now have the container image sitting in two different repositories, we can deploy to App Service directly from either location. Let's spin up a new web app inside the same resource group as the Container Registry.

Make sure you select Web App for Containers:

The Configure Container section is where we specify the location of the container image to deploy into the web app. In the screenshot below we are pointing at the container image in the registry we deployed earlier. We don't need to set a startup file, as our Dockerfile specifies these details.
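If you'd rather do this from the CLI than the portal, a rough equivalent looks like the following – the plan and web app names are placeholders, and the WEBSITES_PORT setting tells App Service which container port to route traffic to:

# Create a Linux App Service plan (placeholder name)
az appservice plan create --name python-container-plan --resource-group elastic-beanstalk-docker-python --is-linux --sku B1

# Create the web app pointing at the image in our registry (placeholder app name, must be globally unique)
az webapp create --name eb-docker-python-web --resource-group elastic-beanstalk-docker-python --plan python-container-plan --deployment-container-image-name elasticbeanstalkdockerpython.azurecr.io/elastic-beanstalk-docker-python:latest

# Tell App Service the container listens on port 5000
az webapp config appsettings set --name eb-docker-python-web --resource-group elastic-beanstalk-docker-python --settings WEBSITES_PORT=5000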

Et voilà! We should now be able to visit the public URL where the container is running.

So we've seen the differences in deploying a containerised Python application to each service. Next time, we will explore the configuration of these services and how each handles autoscaling and stands up to DDoS attacks.
