Developing a Cloud-Native Application on Microsoft Azure Using Open Source Technologies – InfoQ.com

Key Takeaways

Cloud native is a development approach that improves building, maintainability, scalability, and deployment of applications. My intention with this article is to explain, in a pragmatic way, how to build, deploy, run, and monitor a simple cloud-native application on Microsoft Azure using open-source technologies.

This article demonstrates how to build a cloud-native application replicating real-case scenarios through a demo application, guiding the reader step by step.

Without any doubt, one of the latest trends in software development is the term cloud native. But what exactly is a cloud-native application?

Cloud-native applications are applications built around various cloud technologies or services hosted in the (private/public) cloud. Cloud native is a development approach that improves building, maintainability, scalability, and deployment of applications. Often they are distributed systems (commonly designed as microservices), and they use DevOps methodologies to automate application building and deployment, which can be done at any time on demand. Usually these applications provide APIs through standard protocols such as REST or gRPC, and they are interoperable through standard tools, such as Swagger (OpenAPI).

The demo app is quite simple, but at the same time it involves a number of factors and technologies combined to replicate the fundamentals of a real-case scenario. This demo doesn't include anything about authentication and authorization because, in my opinion, it would add complexity that is not required to explain the topic of this article.

Simple Second-Hand Store application

Simple Second-Hand Store (SSHS) is the name of the demo app described in this article.

SSHS system design overview.

SSHS is a simple cloud-native application that sells second-hand products. Users can create, read, update, and delete products. When a product is added to the platform, the owner of that product receives an email confirming the operation's success.

When developing a microservice architecture, the first step is decomposing the business requirements into a set of services. The decomposition should follow a few principles:

These simple principles help to build consistent and robust applications, gaining all the advantages that a distributed system can provide. Keep in mind that designing and developing distributed applications is not an easy task, and ignoring a few rules could lead to suffering from both monolithic and microservice issues. The next section explains, by example, how to put them into practice.

It is easy to identify two contexts in the Simple Second-Hand Store (SSHS) application: the first one is in charge of handling products' creation and persistence; the second context is all about notifications and is actually stateless.

Coming down to the application design, there are two microservices: ProductCatalog (Pc), which exposes the CRUD APIs and persists the products, and Notifications (Nts), which sends the confirmation email when a product is added.

At a higher level, microservices can be considered a group of subsystems composing a single application. And, as in traditional applications, components need to communicate with other components. In a monolithic application you can do that by adding some abstraction between layers, but of course this is not possible in a microservice architecture, since the code base is not the same. So how can microservices communicate? The easiest way is through the HTTP protocol: each service exposes some REST APIs to the other and they can easily communicate. Although it may sound good at first, this approach introduces dependencies into the system. For example, if service A needs to call service B to reply to the client, what happens if service B is down or just slow? Why should service B's performance affect service A, spreading the outage to the entire application?

This is where asynchronous communication patterns come into play, to help keep components loosely coupled. Using asynchronous patterns, the caller doesn't need to wait for a response from the receiver; instead, it throws a fire-and-forget event, and then someone will catch this event to perform some action. I used the word someone because the caller has no idea who is going to read the event; maybe no one will catch it.

This pattern is generally called pub/sub, where a service publishes events and others may subscribe to them. Events are generally published on another component, called an event bus, which works like a FIFO (first in, first out) queue.

Of course, there are patterns more sophisticated than the FIFO queue, even though it is still used a lot in real environments. For example, an alternative scenario may have consumers subscribing to a topic rather than a queue, copying and consuming the messages belonging to that topic and ignoring the rest. A topic is, generally speaking, a property of the message, such as the subject in AMQP (Advanced Message Queuing Protocol) terms.

Using asynchronous patterns, service B can react to events occurring in service A, but service A doesn't know anything about who the consumers are and what they are doing. And obviously its performance is not affected by other services. They are completely independent of each other.

NOTE: Unfortunately, sometimes using an asynchronous pattern is not possible, and even though synchronous communication is an antipattern, there is no alternative. This should not become an excuse to build things quicker, but keep in mind that in some specific scenarios it may happen. Do not feel too guilty if you have no alternative.

In the SSHS application, microservices don't need direct communication: the Nts service only has to react to some events that occur on the Pc service. This can clearly be done asynchronously, through a message on a queue.

For the same reasons explained in the Communication between microservices paragraph, to keep services independent of each other, a different storage for each service is required. It doesn't matter whether a service has one or more stores using a multitude of technologies (often both SQL and NoSQL): each service must have exclusive access to its repository, not only for performance reasons but also for data integrity and normalization. The business domain can be very different between services, and each service needs its own database schema, which can differ a lot from one microservice to another. On the other hand, the application is usually decomposed following business bounded contexts, and it is quite normal to see schemas diverge over time, even if at the beginning they look the same. Summarizing, merging everything together leads back to the monolithic application issues, so why use a distributed system at all?

The Notifications service doesn't have any data to persist, while the ProductCatalog service offers some CRUD APIs to manage the uploaded products. These are persisted in a SQL database, since the schema is well defined and the flexibility given by NoSQL storage is not needed in this scenario.

Both services are ASP.NET applications running on .NET 6 that are built and deployed using Continuous Integration (CI) and Continuous Deployment (CD) techniques. GitHub hosts the repository, and the build and deployment pipelines are scripted on top of GitHub Actions. The cloud infrastructure is scripted using a declarative approach, to provide a full Infrastructure as Code (IaC) experience using Terraform. The Pc service stores data in a PostgreSQL database and communicates with the Nts service using a queue on an event bus. Sensitive data, such as connection strings, is stored in a secure place on Azure and not checked into the code repository.

Before starting: the following sections don't explain each step in detail (such as creating solutions and projects) and are aimed at developers who are familiar with Visual Studio or similar IDEs. The GitHub repository link is at the end of this post.

To get started with the SSHS development, first create the repository and define the folder structure. The SSHS repository is laid out as follows:

Just focus on a few things for now:

NOTE: Disable the nullable flag in the csproj file; it is usually enabled by default in .NET 6 project templates.

The ProductCatalog service needs to provide APIs to manage products; to better support this scenario, we use Swagger (OpenAPI) to give some documentation to consumers and make the development experience easier.

Then there are the dependencies: the database and the event bus. To access the database, the service uses Entity Framework.

Finally, a secure storage service, Azure KeyVault, is required to safely store connection strings.

The new ASP.NET Core 6 Visual Studio templates don't provide a Startup class anymore; instead, everything is in the Program class. However, as discussed in the ProductCatalog deployment paragraph, there is a bug related to this approach, so let's create a Startup class:
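A minimal sketch of such a Startup class might look like the following (namespace and class names are illustrative, not necessarily those used in the repository):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace ProductCatalog
{
    public class Startup
    {
        public Startup(IConfiguration configuration) => Configuration = configuration;

        public IConfiguration Configuration { get; }

        // Registers application services in the IoC container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();
        }

        // Configures the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.UseRouting();
            app.UseEndpoints(endpoints => endpoints.MapControllers());
        }
    }
}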

Then replace the Program.cs content with the following code:
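A sketch of the bootstrap code, wiring the generic host to the Startup class above:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

namespace ProductCatalog
{
    public class Program
    {
        public static void Main(string[] args) =>
            CreateHostBuilder(args).Build().Run();

        // Kept as a separate method so the host configuration can be extended later
        // (for example with the Azure KeyVault configuration provider).
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
    }
}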

The next step is writing some simple CRUD APIs to manage the products. Here's the controller definition:
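A CRUD controller along these lines fits the description (the route, action names, and the IProductService abstraction are assumptions, not necessarily the repository's exact code):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using ProductCatalog.Dtos;
using ProductCatalog.Services;

namespace ProductCatalog.Controllers
{
    [ApiController]
    [Route("api/products")]
    public class ProductsController : ControllerBase
    {
        private readonly IProductService _productService;

        public ProductsController(IProductService productService) =>
            _productService = productService;

        // GET api/products
        [HttpGet]
        public async Task<ActionResult<IEnumerable<ProductDto>>> Get() =>
            Ok(await _productService.GetAllAsync());

        // GET api/products/{id}
        [HttpGet("{id:guid}")]
        public async Task<ActionResult<ProductDto>> GetById(Guid id) =>
            Ok(await _productService.GetByIdAsync(id));

        // POST api/products
        [HttpPost]
        public async Task<ActionResult<ProductDto>> Post(CreateProductDto product) =>
            Ok(await _productService.CreateAsync(product));

        // PUT api/products/{id}
        [HttpPut("{id:guid}")]
        public async Task<IActionResult> Put(Guid id, UpdateProductDto product)
        {
            await _productService.UpdateAsync(id, product);
            return NoContent();
        }

        // DELETE api/products/{id}
        [HttpDelete("{id:guid}")]
        public async Task<IActionResult> Delete(Guid id)
        {
            await _productService.DeleteAsync(id);
            return NoContent();
        }
    }
}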

The ProductService definition is:
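A plausible shape is an IProductService contract that the controller depends on; the concrete ProductService implementation is completed later, in the Entity Framework section (the method names are assumptions):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using ProductCatalog.Dtos;

namespace ProductCatalog.Services
{
    // Contract consumed by ProductsController; the concrete class is shown later.
    public interface IProductService
    {
        Task<IEnumerable<ProductDto>> GetAllAsync();
        Task<ProductDto> GetByIdAsync(Guid id);
        Task<ProductDto> CreateAsync(CreateProductDto product);
        Task UpdateAsync(Guid id, UpdateProductDto product);
        Task DeleteAsync(Guid id);
    }
}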

And finally, define the (very simple) DTO classes:
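The property names below are assumptions based on the product description given earlier:

using System;

namespace ProductCatalog.Dtos
{
    public class ProductDto
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        // Email address to notify when the product is added.
        public string Owner { get; set; }
    }

    public class CreateProductDto
    {
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string Owner { get; set; }
    }

    public class UpdateProductDto
    {
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
    }
}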

The Owner property should contain the email address to notify when a product is added to the system. I haven't added any kind of validation, since it is a huge topic not covered in this post.

Then, register the ProductService in the IoC container in the Startup class, for example with services.AddScoped<IProductService, ProductService>();.

Often, cloud-native applications use OpenAPI to make API testing and documentation easier. The official definition is:

The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic.

Long story short: OpenAPI is a nice UI to quickly consume APIs and read their documentation, perfect for development and testing environments, NOT for production. However, since this is a demo app, I kept it enabled in all the environments. As an attention flag, I put some commented-out code to exclude it from the Production environment.

To add OpenAPI support, install the Swashbuckle.AspNetCore NuGet package in the Pc project and update the Startup class:
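A sketch of the Swashbuckle registration; reading the SwaggerApiInfo section (described in the note below) from configuration is an assumption:

// In Startup.ConfigureServices
services.AddSwaggerGen(options =>
{
    var apiInfo = Configuration.GetSection("SwaggerApiInfo");
    options.SwaggerDoc("v1", new Microsoft.OpenApi.Models.OpenApiInfo
    {
        Title = apiInfo["Title"],
        Version = "v1"
    });

    // Include the XML documentation file generated at build time (see the csproj change below).
    var xmlFile = $"{System.Reflection.Assembly.GetExecutingAssembly().GetName().Name}.xml";
    options.IncludeXmlComments(System.IO.Path.Combine(AppContext.BaseDirectory, xmlFile));
});

// In Startup.Configure
// Commented-out guard that would exclude Swagger from the Production environment:
// if (!env.IsProduction())
// {
app.UseSwagger();
app.UseSwaggerUI();
// }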

Enable the XML documentation file generation in the csproj. These documentation files are read by Swagger and shown in the UI:
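For instance (the NoWarn entry, which silences warnings for undocumented members, is an optional addition of mine):

<PropertyGroup>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
  <NoWarn>$(NoWarn);1591</NoWarn>
</PropertyGroup>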

NOTE: Add to the appsettings.json file a section named SwaggerApiInfo with two inner properties, Name and Title, with values of your choice.
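For example, with placeholder values:

"SwaggerApiInfo": {
  "Name": "ProductCatalog",
  "Title": "Simple Second-Hand Store - Product Catalog API"
}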

Add some documentation to APIs, just like in the following example:
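A documented action might look like this (the wording of the comments is illustrative):

/// <summary>
/// Creates a new product in the catalog.
/// </summary>
/// <param name="product">The product to create.</param>
/// <returns>The created product.</returns>
/// <response code="200">The product has been created.</response>
/// <response code="400">The request payload is invalid.</response>
[HttpPost]
[ProducesResponseType(typeof(ProductDto), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status400BadRequest)]
public async Task<ActionResult<ProductDto>> Post(CreateProductDto product) =>
    Ok(await _productService.CreateAsync(product));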

Now, run the application and navigate to localhost:<port>/index.html. Here, you can see how the Swagger UI shows all the details specified in the C# code documentation: API descriptions, schemas of the accepted types, status codes, supported media types, a sample request, and so on. This is extremely useful when working in a team.

Even though this is just an example, it is a good practice to add GZip compression to API responses in order to improve performance. Open the Startup class and add the following lines:
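Something along these lines; response compression ships with ASP.NET Core, and the compression level chosen here is arbitrary:

// In Startup.ConfigureServices
services.AddResponseCompression(options =>
{
    options.EnableForHttps = true;
    options.Providers.Add<Microsoft.AspNetCore.ResponseCompression.GzipCompressionProvider>();
});
services.Configure<Microsoft.AspNetCore.ResponseCompression.GzipCompressionProviderOptions>(options =>
    options.Level = System.IO.Compression.CompressionLevel.Fastest);

// In Startup.Configure, before UseRouting
app.UseResponseCompression();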

To handle errors, custom exceptions and a custom middleware are used:
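A possible shape for this (the NotFoundException type and the response format are illustrative):

using System;
using System.Net;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

namespace ProductCatalog.Middlewares
{
    // Example of a custom exception thrown by the service layer.
    public class NotFoundException : Exception
    {
        public NotFoundException(string message) : base(message) { }
    }

    // Catches exceptions thrown by the rest of the pipeline and maps them to HTTP status codes.
    public class ExceptionHandlingMiddleware
    {
        private readonly RequestDelegate _next;

        public ExceptionHandlingMiddleware(RequestDelegate next) => _next = next;

        public async Task InvokeAsync(HttpContext context)
        {
            try
            {
                await _next(context);
            }
            catch (NotFoundException ex)
            {
                await WriteErrorAsync(context, HttpStatusCode.NotFound, ex.Message);
            }
            catch (Exception)
            {
                await WriteErrorAsync(context, HttpStatusCode.InternalServerError, "An unexpected error occurred");
            }
        }

        private static Task WriteErrorAsync(HttpContext context, HttpStatusCode statusCode, string message)
        {
            context.Response.StatusCode = (int)statusCode;
            context.Response.ContentType = "application/json";
            return context.Response.WriteAsync(JsonSerializer.Serialize(new { error = message }));
        }
    }
}

// Registered in Startup.Configure, before UseRouting:
// app.UseMiddleware<ExceptionHandlingMiddleware>();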

The Pc application needs to persist data, the products, in storage. Since the Product entity has a specific schema, a SQL database suits this scenario. In particular, PostgreSQL is an open-source transactional database offered as a PaaS service on Azure.

Entity Framework is an ORM, a tool that makes translating objects between SQL and an OOP language easier. Even though SSHS performs very simple queries, the goal is to simulate a real scenario where ORMs (and possibly micro-ORMs, such as Dapper) are heavily used.

Before starting, run a local PostgreSQL instance for the development environment. My advice is to use Docker, especially for Windows users. Install Docker if you don't have it yet, and run docker run -p 127.0.0.1:5432:5432/tcp --name postgres -e POSTGRES_DB=product_catalog -e POSTGRES_USER=sqladmin -e POSTGRES_PASSWORD=Password1! -d postgres.

For more information, you can refer to the official documentation.

Once the local database is running properly, it is time to get started with Entity Framework for PostgreSQL. Let's install these NuGet packages: Npgsql.EntityFrameworkCore.PostgreSQL, Microsoft.EntityFrameworkCore.Design, and Microsoft.EntityFrameworkCore.Proxies.

Define the entities; here that is just the Product class:
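The entity mirrors the DTOs shown earlier (property names are assumptions):

using System;

namespace ProductCatalog.Entities
{
    public class Product
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string Owner { get; set; }
    }
}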

Create a DbContext class, which will be the gateway to access the database, and define the mapping rules between the SQL objects and the CLR objects:
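A sketch of the context (the class name is an assumption, kept consistent with the rest of this article):

using Microsoft.EntityFrameworkCore;
using ProductCatalog.Entities;

namespace ProductCatalog.Data
{
    public class ProductCatalogDbContext : DbContext
    {
        public ProductCatalogDbContext(DbContextOptions<ProductCatalogDbContext> options)
            : base(options)
        {
        }

        // Represents the products persisted on storage as an in-memory collection.
        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // Scans the running assembly for IEntityTypeConfiguration implementations.
            modelBuilder.ApplyConfigurationsFromAssembly(typeof(ProductCatalogDbContext).Assembly);
        }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            // Enables the Entity Framework proxies to lazy-load relationships
            // (requires the Microsoft.EntityFrameworkCore.Proxies package).
            optionsBuilder.UseLazyLoadingProxies();
        }
    }
}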

The DbSet property represents the data persisted on storage as an in-memory collection; the OnModelCreating method override scans the running assembly looking for all the classes that implement the IEntityTypeConfiguration interface, to apply custom mappings. The OnConfiguring override, instead, enables the Entity Framework proxies to lazy-load relationships between tables. That isn't needed here, since we have a single table, but it is a nice tip to improve performance in a real scenario. The feature is provided by the Microsoft.EntityFrameworkCore.Proxies NuGet package.

Finally, the ProductEntityConfiguration class defines some mapping rules:
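For example (column constraints are illustrative; the key generation matches the note that follows):

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;
using ProductCatalog.Entities;

namespace ProductCatalog.Data
{
    public class ProductEntityConfiguration : IEntityTypeConfiguration<Product>
    {
        public void Configure(EntityTypeBuilder<Product> builder)
        {
            builder.ToTable("products");

            builder.HasKey(p => p.Id);
            // The Guid is generated when the row is inserted (see the note below).
            builder.Property(p => p.Id).ValueGeneratedOnAdd();

            builder.Property(p => p.Name).IsRequired().HasMaxLength(256);
            builder.Property(p => p.Owner).IsRequired().HasMaxLength(256);
            builder.Property(p => p.Description).HasMaxLength(2048);
            builder.Property(p => p.Price).IsRequired();
        }
    }
}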

It is important to note that the Guid is generated after the creation of the SQL object. If you need to generate the Guid before the SQL object is created, you can use HiLo; more info here.

Finally, update the Startup class with the latest changes:
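The registration in ConfigureServices might look like this (the connection string name matches the one used later in the docker run command):

// In Startup.ConfigureServices
// using Microsoft.EntityFrameworkCore;  (UseNpgsql comes from Npgsql.EntityFrameworkCore.PostgreSQL)
services.AddDbContext<ProductCatalogDbContext>(options =>
    options.UseNpgsql(Configuration.GetConnectionString("ProductCatalogDbPgSqlConnection")));

services.AddScoped<IProductService, ProductService>();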

The database connection string is sensitive information, so it shouldn't be stored in the appsettings.json file. For debugging purposes, user secrets can be used: a feature provided by the .NET framework to store sensitive information that shouldn't be checked into the code repository. If you are using Visual Studio, right-click on the project and select Manage user secrets; if you are using any other editor, open the terminal, navigate to the csproj file location, and type dotnet user-secrets init. The csproj file now contains a UserSecretsId node with a Guid that identifies the project secrets.

There are three different ways to set a secret now:

The secrets.json file should look as follows:
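Based on the local PostgreSQL container started earlier, something like:

{
  "ConnectionStrings": {
    "ProductCatalogDbPgSqlConnection": "Host=localhost;Port=5432;Username=sqladmin;Password=Password1!;Database=product_catalog;Include Error Detail=true"
  }
}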

Let's get on with the ProductService implementation:
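A simplified implementation on top of the DbContext might be the following (mapping and error handling are kept to a minimum, and publishing the product-created event to the bus is omitted at this stage):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using ProductCatalog.Data;
using ProductCatalog.Dtos;
using ProductCatalog.Entities;
using ProductCatalog.Middlewares;

namespace ProductCatalog.Services
{
    public class ProductService : IProductService
    {
        private readonly ProductCatalogDbContext _dbContext;

        public ProductService(ProductCatalogDbContext dbContext) => _dbContext = dbContext;

        public async Task<IEnumerable<ProductDto>> GetAllAsync() =>
            await _dbContext.Products.Select(p => ToDto(p)).ToListAsync();

        public async Task<ProductDto> GetByIdAsync(Guid id)
        {
            var product = await _dbContext.Products.FindAsync(id)
                ?? throw new NotFoundException($"Product {id} not found");
            return ToDto(product);
        }

        public async Task<ProductDto> CreateAsync(CreateProductDto product)
        {
            var entity = new Product
            {
                Name = product.Name,
                Description = product.Description,
                Price = product.Price,
                Owner = product.Owner
            };
            _dbContext.Products.Add(entity);
            await _dbContext.SaveChangesAsync();
            // Publishing the "product created" event to the queue is covered later.
            return ToDto(entity);
        }

        public async Task UpdateAsync(Guid id, UpdateProductDto product)
        {
            var entity = await _dbContext.Products.FindAsync(id)
                ?? throw new NotFoundException($"Product {id} not found");
            entity.Name = product.Name;
            entity.Description = product.Description;
            entity.Price = product.Price;
            await _dbContext.SaveChangesAsync();
        }

        public async Task DeleteAsync(Guid id)
        {
            var entity = await _dbContext.Products.FindAsync(id)
                ?? throw new NotFoundException($"Product {id} not found");
            _dbContext.Products.Remove(entity);
            await _dbContext.SaveChangesAsync();
        }

        private static ProductDto ToDto(Product p) => new ProductDto
        {
            Id = p.Id,
            Name = p.Name,
            Description = p.Description,
            Price = p.Price,
            Owner = p.Owner
        };
    }
}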

The next step is creating the database schema through migrations. The Migrations tool incrementally updates the database schema, through checked-in migration files, to keep it in sync with the application data model while preserving existing data. The details of the migrations applied to the database are stored in a table called "__EFMigrationsHistory". This information is then used to apply only the pending migrations to the database specified in the connection string.

To define the first migration, open the CLI in the csproj folder and run

dotnet-ef migrations add "InitialMigration"; it is stored in the Migrations folder. Then update the database with the migration just created: dotnet-ef database update.

NOTE: If this is the first time you are going to run migrations, install the CLI tool first using dotnet tool install --global dotnet-ef.

As I've said, user secrets only work in the Development environment, so Azure KeyVault support must be added. Install the Azure.Identity package and edit Program.cs:
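A sketch of the change; note that the AddAzureKeyVault configuration extension lives in the Azure.Extensions.AspNetCore.Configuration.Secrets package, and the KeyVaultName setting used here is an assumption:

using System;
using Azure.Identity;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

namespace ProductCatalog
{
    public class Program
    {
        public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration((context, config) =>
                {
                    // Only reach out to Azure KeyVault outside the Development environment,
                    // where user secrets are used instead.
                    if (!context.HostingEnvironment.IsDevelopment())
                    {
                        var builtConfig = config.Build();
                        var keyVaultName = builtConfig["KeyVaultName"];
                        config.AddAzureKeyVault(
                            new Uri($"https://{keyVaultName}.vault.azure.net/"),
                            new DefaultAzureCredential());
                    }
                })
                .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
    }
}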

where <keyvault-name> is the KeyVault name that will be declared in the Terraform scripts later.

The ASP.NET Core SDK offers libraries for reporting application health, through REST endpoints. Install the Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore NuGet package and configure the endpoints in the Startup class:
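A possible configuration (the endpoint paths are the ones described below; splitting them with tags is my choice):

// In Startup.ConfigureServices
services.AddHealthChecks()
    .AddDbContextCheck<ProductCatalogDbContext>("dbcontext", tags: new[] { "db" });

// In Startup.Configure
// using Microsoft.AspNetCore.Diagnostics.HealthChecks;
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();

    // Liveness: does not run any registered check, just reports that the app is up.
    endpoints.MapHealthChecks("/health/ping", new HealthCheckOptions
    {
        Predicate = _ => false
    });

    // Readiness: runs the Entity Framework DbContext check only.
    endpoints.MapHealthChecks("/health/dbcontext", new HealthCheckOptions
    {
        Predicate = registration => registration.Tags.Contains("db")
    });
});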

The code above adds two endpoints: at the /health/ping endpoint the application responds with the health status of the system. Default values are Healthy, Unhealthy, or Degraded, but they can be customized. The /health/dbcontext endpoint, instead, returns the current Entity Framework DbContext status, which basically tells whether the app can communicate with the database. Note that the NuGet package mentioned above is the Entity Framework-specific one, which internally references Microsoft.Extensions.Diagnostics.HealthChecks. If you don't use EF, you can use the latter only.

You can get more info in the official documentation.

The last step to complete the Pc project is to add a Dockerfile. Since Pc and Nts are independent projects, it is important to have a single Dockerfile per project. Create a Docker folder in the ProductCatalog project, and define a .dockerignore file and the Dockerfile:
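A typical multi-stage Dockerfile for a .NET 6 API looks like the following; project and file names are assumptions, and the repository's file may differ:

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["ProductCatalog.csproj", "./"]
RUN dotnet restore "ProductCatalog.csproj"
COPY . .
RUN dotnet publish "ProductCatalog.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "ProductCatalog.dll"]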

NOTE: Don't forget to add a .dockerignore file as well. On the internet there are plenty of examples based on specific technologies, .NET Core in this case.

NOTE: If your Docker build gets stuck on the dotnet restore command, you have hit a bug documented here. To fix it, add this node to the csproj:

and add /p:IsDockerBuild=true to both restore and publish commands in the Dockerfile as explained in this comment.

To try this Dockerfile locally, navigate with your CLI to the project folder and run:

docker build -t productcatalog -f Docker/Dockerfile ., where:

Then run the image using:

docker run --name productcatalogapp -p 8080:80 -e ConnectionStrings:ProductCatalogDbPgSqlConnection="Host=localhost;Port=5432;Username=sqladmin;Password=Password1!;Database=product_catalog;Include Error Detail=true" -it productcatalog.

NOTE: The docker run command starts your app, but it won't work correctly unless you create a Docker network between the ProductCatalog and the PostgreSQL containers. However, you can try to load the Swagger web page to see if the app has at least started. More info here.

Go to http://localhost:8080/index.html and, if everything is working locally, move forward to the next step: the infrastructure definition.

Now that the code is written and properly running in a local environment, it can be deployed in a cloud environment. As mentioned earlier, the public cloud we are going to use is Microsoft Azure.

Azure App Service is a PaaS service able to run Docker containers, which best suits this scenario. Azure Container Registry holds the Docker images, ready to be pulled by the App Service. Then, an Azure KeyVault instance stores application secrets such as connection strings.

Other important resources are the database server, Azure Database for PostgreSQL, and the Service Bus that allows asynchronous communication between the services.

To deploy the Azure resources, no operation needs to be executed manually. Everything is written, and versioned, as a Terraform script, using declarative configuration files. The language used is the HashiCorp Configuration Language (HCL), a cloud-agnostic language that allows you to work with different cloud providers using the same tool. No Azure Portal, CLI, ARM, or Bicep files are used.

Before working with Terraform, just a couple of notes:

Terraform needs to store the state of the deployed resources in order to understand whether a resource has been added, changed, or deleted. This state is saved in a file stored on the cloud (a storage account for Azure, S3 for AWS, etc.). Since this is part of the Terraform configuration itself, it cannot be created by the script; it is the only operation that must be done using other tools. The next sections explain how to set up the environment using the az CLI to create the storage account and the IAM identity that actually runs the code.

NOTE: You cannot use the same names I used, because some of the resources require a name that is unique across the entire Azure cloud.

Create a resource group for the Terraform state

Every Azure resource must be in a resource group, and it is a good practice to have a different resource group for each application/environment. In this case, I created a resource group to hold the Terraform state and another one to host all the production resources.

To create the resource group, open the CLI (Cloud Shell, PowerShell, it is up to you) and type:

az group create --location <location> --name sshsstates

Create a storage account for the Terraform state

A storage account is a resource that holds Azure Storage objects such as blobs, file shares, queues, tables, and so on. In this case, it will hold a blob container with the Terraform state file.

Create one by running:
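A command along these lines creates it (the SKU is my choice; sshsstg01 is the name referenced below):

az storage account create --name sshsstg01 --resource-group sshsstates --location <location> --sku Standard_LRS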

where <location> is the location of the resource group created in the previous step.

Then, create the blob container in this storage account:
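For example (the container name is a placeholder of my choice):

az storage container create --name tfstate --account-name sshsstg01 --auth-mode login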

where sshsstg01 is the storage account name created in the previous step.
