Cloud-native Compute on Azure

Brendan Thompson • 12 November 2021 • 9 min read

Microsoft Azure as a Public Cloud offers many incredible services to companies of varying sizes and complexities. As organizations navigate their digital transformations, migrating off on-premises compute and into the cloud, many are looking to adopt Cloud-native approaches to their applications and services. Azure offers many options for Cloud-native; in this post, we will go through the options that fall into the category of Cloud-native Compute. Cloud-native Compute is a term describing the compute options on a given Public Cloud that allow for the hosting/deployment of Cloud-native applications and/or services.

The Cloud Native Computing Foundation (CNCF) defines Cloud-native as:

Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.

Thus Cloud-native Compute is any infrastructure/PaaS/SaaS service that facilitates the above. I split these into two subcategories: Containers and Serverless. If we were to distil each of those subcategories into the thing it best achieves for us: Containers offer us increased flexibility, and Serverless offers us increased velocity.

Containers#

Containers allow us to build, ship, and run any app, anywhere! Containers are agnostic of the underlying infrastructure, except for the OS family (Linux vs. Windows). Using containers, or container solutions such as the ones discussed below, allows engineers to run the same code in the same way from their local machine all the way into the production environment.

Containerization is the process of packaging the software code with all its required components, such as libraries, frameworks, and any other dependencies, into a discrete and isolated "container", which can then be deployed onto any of the below services.
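As a sketch of what such a package looks like, the Dockerfile below builds an image for a hypothetical Node.js service (the base image, file names, and port are illustrative, not from this post):

```dockerfile
# Start from a small official base image.
FROM node:16-alpine
WORKDIR /app

# Install only production dependencies, then copy the application code.
COPY package*.json ./
RUN npm ci --production
COPY . .

# Document the listening port and define the startup command.
EXPOSE 8080
CMD ["node", "server.js"]
```

The resulting image carries everything the service needs, so any of the services below can run it unchanged.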

Azure Kubernetes Service (AKS)

AKS offers us the ability to use the most popular and most widely consumed container orchestrator to date: Kubernetes. This enables us to deploy containers to the Public Cloud in a way that engineers are already familiar with, without requiring them to have specific knowledge of the underlying cloud. Azure exposes the Kubernetes API to the consumer via the Hosted Control Plane (HCP), which facilitates the normal Kubernetes experience. This HCP can be accessed via either a public endpoint or a private RFC 1918 endpoint, depending on how the cluster is deployed.

Kubernetes RBAC can manage access on AKS with or without integration with Azure Active Directory. This is incredibly powerful, especially when the cluster(s) are deployed within an organization that has strict access requirements. Due to the ability to deploy multiple node pools, it is possible to have an AKS cluster with both Windows and Linux node pools, which is fairly unique when looking through the other Cloud-native Compute options on Azure.
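To make this concrete, here is a hedged Azure CLI sketch of creating an AAD-integrated cluster and then adding a Windows node pool alongside the default Linux one (resource names are invented, and exact flags may vary with CLI version):

```shell
# Create a resource group and an AKS cluster with AAD-backed Kubernetes RBAC.
# The Azure CNI network plugin is required to support Windows node pools.
az group create --name rg-demo --location australiaeast
az aks create \
  --resource-group rg-demo \
  --name aks-demo \
  --node-count 2 \
  --network-plugin azure \
  --enable-aad \
  --enable-azure-rbac \
  --generate-ssh-keys

# Add a Windows node pool next to the default Linux pool
# (Windows node pool names are limited to six characters).
az aks nodepool add \
  --resource-group rg-demo \
  --cluster-name aks-demo \
  --name win1 \
  --os-type Windows \
  --node-count 1
```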

Azure Container Instances (ACI)

ACI is an extremely lightweight and flexible way to deploy your containers into an environment and have them accessible to your consumers, publicly or internally, in a matter of minutes. Using ACI lets you run containers without the overhead of managing any infrastructure or thinking about typical operations problems like patching servers. It is ideal for small organizations looking to kick-start their container journey, or larger organizations trying to flex up their container fleets without the overhead of AKS or OCP.

A potential downfall for ACI is that it does not come with any container orchestration; it is simply a service to get started running containers in an isolated manner. It is perfect for scenarios where the container can run completely isolated, like a build task, asynchronous processing, or task automation. I have personally used it to host extremely simple APIs in a low-cost way. Containers hosted on ACI can directly mount and access storage from Azure File Services, which is useful in many scenarios. As with most of these Azure services, there is a split between Windows-based and Linux-based containers.
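As a sketch, a single container instance with a public DNS label and an Azure Files share mounted can be created with one CLI call (the resource names are placeholders, and the image is one of Microsoft's public samples):

```shell
# Run a container with public ingress and an Azure Files share mounted.
az container create \
  --resource-group rg-demo \
  --name aci-demo \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --dns-name-label aci-demo-example \
  --ports 80 \
  --azure-file-volume-account-name stdemoaccount \
  --azure-file-volume-account-key "<storage-account-key>" \
  --azure-file-volume-share-name demoshare \
  --azure-file-volume-mount-path /mnt/demo
```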

OpenShift Container Platform (OCP)

OCP is Red Hat's variant of Kubernetes. What it offers over a service such as AKS is the ability to install OCP in your organization's data centre, which enables an easier migration pathway into the cloud. It also offers other excellent services, such as built-in CI/CD pipelines, CodeReady Workspaces enabling rapid development, and a user-friendly web interface for operators to consume.

Microsoft has created a strategic partnership with Red Hat to help supply a best-of-breed experience for OCP on Azure. This means extremely tight integration with Azure services like networking and load balancing, which are key in a container orchestration platform.
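For completeness, creating a managed OpenShift cluster on Azure is itself a single CLI call once the virtual network and subnets exist (a hedged sketch; the names are invented, and prerequisites such as provider registration and subnet creation are omitted):

```shell
# Create an Azure Red Hat OpenShift cluster in an existing virtual network.
az aro create \
  --resource-group rg-demo \
  --name aro-demo \
  --vnet aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet
```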

Azure Container Apps (ACA)

Public Preview

ACA was recently announced at Microsoft Ignite 2021, and you can read my take on that here. The truly fantastic thing about ACA is that it gives you the scalability and ease of use of ACI with some of the sheer power of AKS, all without having to manage a Kubernetes cluster. This means that, as consumers, we can construct complex applications and services in a straightforward and agile way, whilst still grouping things as we traditionally would with Kubernetes.

ACA also allows for easily implementing traffic direction strategies like A/B testing and blue/green deployments, facilitated through Revisions. This means that you're able to have multiple versions of your service active (and accessible) at any given time. Using the concept of Environments, ACA allows you to host multiple Container Apps that can communicate and interact with each other; this is truly where the power of ACA comes into play.
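As a rough sketch of how Revisions enable traffic splitting, the preview CLI extension exposes commands along these lines (the app, environment, and revision names are illustrative, and the syntax may change while the service is in Public Preview):

```shell
# Install the preview Container Apps CLI extension.
az extension add --name containerapp

# Deploy a Container App into an existing Environment with external ingress.
az containerapp create \
  --name aca-demo \
  --resource-group rg-demo \
  --environment aca-env \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --ingress external \
  --target-port 80

# Split traffic 50/50 between two active revisions, e.g. for A/B testing.
az containerapp ingress traffic set \
  --name aca-demo \
  --resource-group rg-demo \
  --revision-weight aca-demo--rev1=50 aca-demo--rev2=50
```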

Serverless#

Serverless typically refers to a development or execution model wherein the cloud provider allocates the resources required to operate the consumer's application(s) or service(s). On the primary Public Cloud providers, we often see this in services like those below.

Serverless allows engineers to get up and running with minimal to zero knowledge of either cloud or infrastructure, meaning that they can deliver features and resolve bugs faster than ever. This development style also lends itself to services that are completely decoupled from other services or applications and services that perform batch or event-driven processing.

The cloud provider usually manages the scaling of resources on the engineer's behalf, although there is still some configuration around when to scale.

Function Apps (FA)

Function Apps support most popular languages out of the box, and a custom handler can be used if the language you want is not currently supported. FAs can be used in myriad scenarios, with some of the common trigger types being:

  • HTTP and webhooks
  • Event Grid & Event Hub
  • Timer
  • Blob Storage

This means that your functions are capable of catering to many of your engineering needs. However, an important thing to note is that Functions have a limited execution duration, depending on which plan and SKU they are on.

In many cases, Function Apps are a one-stop shop: you don't just get resources to run your assets on; you also get built-in monitoring and logging tools, and flexible deployment options.
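The trigger types above are declared as bindings on each function. For example, an HTTP-triggered function's function.json looks roughly like this (a sketch of the binding schema; the methods and authLevel shown are just common choices):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"],
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```

Swapping the input binding for a timerTrigger or blobTrigger is all it takes to change how the same code is invoked.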

Logic Apps

Logic Apps are honestly my least favourite option for serverless, as I believe they open the door to some bad practices; I hope to delve further into that in another blog post. However, they allow the creation and execution of automated workflows that let you easily and visually integrate different apps, services, systems, and data sources. This is ideal if you want to trigger an event every time something or someone modifies a file on a storage account, or to have a Logic App notify an operator when an application scales out or up.

If the consumption of a Logic App is confined to system integration, I certainly think it has a place within your organization's arsenal, as long as you're mindful that once the app starts to become complex, it is likely better suited to one of the other services we've discussed today.
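For a feel of what sits underneath the visual designer, here is a heavily simplified workflow definition (the Recurrence trigger and Http action are real types from the workflow definition language, but the workflow itself is invented for illustration):

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "triggers": {
      "Hourly": {
        "type": "Recurrence",
        "recurrence": { "frequency": "Hour", "interval": 1 }
      }
    },
    "actions": {
      "NotifyOperator": {
        "type": "Http",
        "inputs": { "method": "POST", "uri": "https://example.com/notify" }
      }
    },
    "outputs": {}
  }
}
```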

App Services

App Services are one of the OG Azure services; they have been around in some form or another nearly since day zero, and they have continually evolved. App Services allow you to easily run APIs, mobile backends, and Web Apps in a fully-managed way. Like Function Apps, App Services give you the ability to write your service in many popular programming languages, as well as to run on either a Windows- or Linux-based environment.

App Services can scale your service out or up, both manually and automatically. They allow tight integration with many identity and authentication systems, such as Azure Active Directory (AAD). DevOps and automation can be set up seamlessly with GitHub, Azure DevOps, and most other popular services, allowing for easy deployment, testing, and environment progression of your service.
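As a sketch of how little ceremony is involved, az webapp up can create the plan and the app and deploy the code in the current directory in one go (the name, runtime, and SKU are illustrative, and the runtime string format varies between CLI versions):

```shell
# Create an App Service plan and Web App, then deploy the current directory.
az webapp up \
  --name app-demo \
  --resource-group rg-demo \
  --runtime "PYTHON:3.9" \
  --sku B1
```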

What service is right for you?#

Now that you have a better understanding of the options available to you on the Azure Cloud, you will want to select the compute option that is best for your scenario. I have put together the below decision flow diagram to help with the selection process!

Brendan Thompson

Principal Cloud Engineer

Azenix
