Migration to Azure

Part 01 of four

On-premises application transition to Azure cloud: Best Practices

Three best practices for transitioning an on-premises application to the Azure cloud are extensively planning the transition beforehand, incorporating security from the beginning, and practicing proper data and resource management. Two common mistakes often made when transitioning to the Azure cloud are not anticipating the costs of using Azure and not having a research framework in place.

Methodology

In order to identify the best practices and most common mistakes made when transitioning an on-premises application to the Microsoft Azure cloud, we searched technology media websites such as Sam Solutions and Computer World, Azure cloud training websites such as Cloud Academy and Cloud Best Practices, articles offering insights into information technology, and Microsoft's official Azure migration guide. We identified the practices and mistakes most frequently mentioned across the chosen articles and guides, and used these as our three best practices and two most common mistakes. These selections were also corroborated by Microsoft's official Azure documentation, lending further support to our findings.

Best Practice: Extensively Planning Beforehand

Extensive planning is essential when transitioning an on-premises application to Microsoft Azure's cloud because, according to Sam Solutions author Natalia Sakovich, any transition to a cloud-based service "will majorly affect a company's daily workings" and change daily routines for employees. Sakovich also notes that companies should ask 'why' they intend to switch to the Azure cloud before asking 'how', because Azure may not be the best solution for every company.
According to the Cloud Academy website, planning a cloud-based transition is vital because early preparation provides time for employees to be trained in how to use cloud-based systems. Organizations that have made use of cloud training for employees are "80% more likely to adopt a fully cloud-based platform" and "three times more likely to achieve their goals for innovation", according to Cloud Academy. Early preparation for a cloud-based transition also helps to improve communication with employees regarding the technology that they use every day.
The Cloud Best Practices website reports that a key determination during early preparation is whether the cloud-based migration "will be IaaS (Infrastructure as a Service) or PaaS (Platform as a Service)." Early preparation for a cloud-based transition should also include the following six hallmarks: Measurability, Guidance, Practicality, Continuity, Specificity, and Accountability, according to Cloud Academy.

Best Practice: Incorporating Security From The Beginning

Microsoft Azure has unique, built-in security features that are "important to understand fully from the start", according to the Cloud Academy website. One of these features is the ability to choose how data is accessed, and by whom. The 5Nine technology blog points out that although Azure offers unique security features, it is the user's responsibility to "understand how to use the virtual aspect of security effectively." The blog also recommends that Azure security options be discussed within an organization before decisions are made.
The Microsoft Azure website also recommends that security research be done from the beginning of the transition, to ensure that Azure is "compatible and accepted within the industry."

Best Practice: Proper Data And Resource Management

According to the 5Nine technology blog, Azure users must "understand how existing resources would be defined by Azure." Therefore, it is important for organizations to define existing resources within Azure's parameters. According to 5Nine, existing resources can also be recreated within Azure rather than transferred to the new system, which can save time and money. The Cloud Best Practices website explains that decisions about the transition of existing resources can "play a critical role in deciding between IaaS and PaaS."
According to Cloud Academy, Microsoft Azure offers tools that help users locate resources that may need to be defined in Azure parameters. The Microsoft Azure documentation also reveals that Azure contains templates that help users to "create an outline for your resource management and migration."

Common Mistake: Not Anticipating Costs Of Using Azure

Common Mistake: No Research Framework

  • A research framework must be in place when transitioning an on-premises application to Microsoft Azure's cloud because, according to Cloud Academy, a lack of training can leave existing data not "defined within Azure's parameters", which can cause errors within the Azure application.
  • The Soft Choice website reports that without a research framework, an organization may transfer all of its resources to the new system, when it should instead "look through resources and take note of those that are suitable for migration." Doing so saves time and money.
  • Having a research framework in place also helps an organization to define how long it will take to transfer its resources and how much bandwidth to allocate for the project, according to Sam Solutions. The Cloud Academy website notes that the Azure Solutions Architecture Suite "can assist in building your desired features within Azure."

In conclusion, best practices for transitioning an on-premises application to Microsoft Azure's cloud include extensively planning beforehand, incorporating security from the beginning, and practicing proper data and resource management. The most common mistakes made when transitioning to Azure include not anticipating the costs of using Azure and not having a research framework in place.
Part 02 of four

On-premises application transition to Azure cloud: Technical Process

The technical process for transitioning an on-premises application to Microsoft Azure's cloud requires users to go through four major steps: assess, migrate and modernize, optimize, and secure and manage.

STEPS IN TRANSITIONING FROM AN ON-PREMISES APPLICATION TO MICROSOFT AZURE'S CLOUD

1. ASSESS

The first step in transitioning from an on-premises application to Microsoft Azure's cloud is to assess the current environment and establish a plan of action for the switch. It is essential to explore the application environment and identify which apps will allow for a smooth and quick migration, based on having few dependencies. Making contact with the leading people in the IT department and the managers or owners is vital to ensure that everyone agrees and the goals are understood. Calculate and compare the total cost of ownership (TCO) of Azure versus the on-premises application, and create an inventory of all servers running in the organization. Map the information from these inventories to identify the servers that are most needed, use the assessment tools to suggest migration strategies, and then choose the tools that best suit the business and its future investments.
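To make the TCO comparison concrete, a back-of-the-envelope sketch is shown below. Every figure is a hypothetical placeholder rather than real pricing; Microsoft's Azure TCO Calculator remains the authoritative tool for this step.

```csharp
// A rough TCO comparison sketch; all cost figures are hypothetical placeholders.
using System;

class TcoComparison
{
    static void Main()
    {
        // Hypothetical monthly on-premises costs
        double hardware = 2000, power = 300, adminStaff = 4000, licenses = 700;
        double onPremMonthly = hardware + power + adminStaff + licenses;

        // Hypothetical monthly Azure costs for the equivalent workload
        double vms = 2500, storage = 400, bandwidth = 150, support = 300;
        double azureMonthly = vms + storage + bandwidth + support;

        Console.WriteLine($"On-premises: {onPremMonthly:C}/month");
        Console.WriteLine($"Azure:       {azureMonthly:C}/month");
        Console.WriteLine($"3-year difference: {(onPremMonthly - azureMonthly) * 36:C}");
    }
}
```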

2. MIGRATE & MODERNIZE

Choose the most suitable migration strategy, and the most suitable migration tool, from the options listed below:
  • Option 1 — Select Rehost, which transfers data as it was on-premises.
  • Option 2 — Select Refactor, which makes changes to the data or uses it for additional purposes.
  • Option 3 — Select Rearchitect, which adjusts the app design to facilitate business growth and investment for future benefits.
  • Option 4 — Select Rebuild, which recreates an existing app from scratch and then runs it in the cloud.
Make sure to use the recommended technology to assist in the process.

3. OPTIMIZE

When optimizing, it is best to monitor the new cloud application continuously for consistency. Monitor cloud costs with Azure Cost Management, and keep the workload on virtual machines in check for efficiency and savings using Azure Hybrid Benefit and Azure Reserved Virtual Machine Instances. Then re-invest the savings to add more to the cloud and reap more benefits.

4. SECURE AND MANAGE

Azure Security Center helps to control cloud security, detect threats, and reduce exposure. Back up app data with Azure Backup for increased data protection at a reasonable cost, and monitor the cloud's performance, infrastructure, and disk usage using Azure Monitor, Log Analytics, and Application Insights.
Part 03 of four

Azure Cosmos DB configuration: Best Practices

When configuring Azure Cosmos DB, it is best to evaluate request unit needs, utilize a data migration tool, and use unique keys when creating collections. It is also important to avoid common mistakes, such as not provisioning enough request units, attempting to configure collections as code, not creating a shard key, and not understanding how Azure Cosmos DB queries partitioned data.

Methodology

Before delving into the request, we first sought to understand Azure Cosmos DB. We found a wealth of information in reputable sources such as Microsoft's documentation, which details that Azure Cosmos DB is complex and that a single configuration mistake can lead to user requests being throttled. To identify the best practices for the service, we therefore looked for the worst practices and common mistakes first: we reviewed user complaints and compiled them into a list of the service's common issues. Once we had identified these common issues, we investigated the best ways to avoid them, using logic and deductive reasoning to arrive at effective solutions.

What is Azure Cosmos DB and How is it Useful?

Azure Cosmos DB is Microsoft's globally distributed, multi-model database service, which helps users access data worldwide in a secure and fast manner. Users do not have to worry about latency issues because the service replicates data across databases worldwide; users then interact with the replica of the data that is closest to them, minimizing latency. Furthermore, Azure Cosmos DB has elastic scalability, and it is consistent and highly responsive to user input. Moreover, the service is maintained and managed by Microsoft, so users do not have to worry about it and can instead focus on other business priorities.

There are three best practices when using Azure Cosmos DB: first, evaluate request unit needs; second, use a data migration tool; and third, utilize unique keys.

Evaluate Request Unit Needs

To avoid overspending on the service, it is essential to know how many request units will be utilized. Request units represent work capacity in Azure Cosmos DB: provisioning too few will lead to some requests timing out, and provisioning too many will lead to overspending. It is essential to use the capacity planner to find the right number of request units for a given amount of data. Moreover, users can utilize the query explorer to see the cost of queries.
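As a rough sketch of how request-unit consumption can be observed in practice, the following .NET SDK snippet (the account, key, database, and collection names are hypothetical placeholders) runs a query and sums the RequestCharge reported for each page of results. Under the model described in the sources, a collection provisioned at 400 RU/s can serve roughly ten queries per second if each costs 40 RU.

```csharp
// Sketch: measure the RU cost of a query by summing per-page request charges.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;

class RuProbe
{
    static async Task Main()
    {
        var client = new DocumentClient(
            new Uri("https://<your-account>.documents.azure.com:443/"), "<your primary key>");
        var collectionUri = UriFactory.CreateDocumentCollectionUri("MyDatabase", "UserItems");

        var query = client.CreateDocumentQuery<dynamic>(
                collectionUri,
                "SELECT * FROM c",
                new FeedOptions { EnableCrossPartitionQuery = true, MaxItemCount = 100 })
            .AsDocumentQuery();

        double totalCharge = 0;
        while (query.HasMoreResults)
        {
            var page = await query.ExecuteNextAsync<dynamic>();
            totalCharge += page.RequestCharge; // RUs consumed by this page
        }
        Console.WriteLine($"Query consumed {totalCharge} RUs.");
    }
}
```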

Using a Data Migration Tool

The Data Migration Tool helps with transferring data from a source into Azure Cosmos DB. With it, almost any kind of data can be transferred into the service. Moreover, users no longer have to configure data and applications from scratch, because the tool can quickly transfer and set everything up for them.

Utilize Unique Keys

Utilizing unique keys adds another layer of security and data integrity to the service. The keys can ensure that one or more values, such as user titles, are unique within a logical partition. However, it is important to remember that once unique keys are configured, they cannot be changed, so users must make sure that the unique keys they define are the ones they will need.
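A minimal sketch of a unique key policy with the .NET SDK follows, assuming hypothetical account, database, and collection names. It enforces that the combination of /name and /country is unique within each logical partition and, as noted above, must be supplied when the collection is created.

```csharp
// Sketch: a unique key policy set at collection-creation time (it cannot be
// changed afterwards). All names are hypothetical placeholders.
using System;
using System.Collections.ObjectModel;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class UniqueKeySetup
{
    static async Task Main()
    {
        var client = new DocumentClient(
            new Uri("https://<your-account>.documents.azure.com:443/"), "<your primary key>");
        var collection = new DocumentCollection
        {
            Id = "Users",
            UniqueKeyPolicy = new UniqueKeyPolicy
            {
                UniqueKeys = new Collection<UniqueKey>
                {
                    new UniqueKey { Paths = new Collection<string> { "/name", "/country" } }
                }
            }
        };
        await client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri("MyDatabase"), collection,
            new RequestOptions { OfferThroughput = 400 });
    }
}
```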

Common Mistake: Attempting to Configure Collections as Code

Users make the common mistake of trying to configure collections as code in Azure Cosmos DB. It is important to know that in Azure, infrastructure as code (IaC) usually takes the form of ARM templates, and ARM templates have no way to specify collections as code. IaC can still be achieved for collections, but users will have to come up with their own solution beyond the existing ARM templates.
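One way to approximate collections-as-code, in the spirit of the idempotent-script workaround described in the sources, is sketched below with the .NET SDK; the database and collection names are hypothetical.

```csharp
// Sketch: declare desired collections in code and create any that are missing
// on every deployment. The *IfNotExists* calls make the script safe to re-run.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class CollectionsAsCode
{
    static readonly string[] DesiredCollections = { "Users", "Orders", "AuditLog" };

    static async Task Main()
    {
        var client = new DocumentClient(
            new Uri("https://<your-account>.documents.azure.com:443/"), "<your primary key>");
        await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "MyDatabase" });

        foreach (var name in DesiredCollections)
        {
            await client.CreateDocumentCollectionIfNotExistsAsync(
                UriFactory.CreateDatabaseUri("MyDatabase"),
                new DocumentCollection { Id = name });
        }
    }
}
```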

Common Mistake: Not Creating a Shard Key and Partitioning Data

Another mistake users make is not creating shard keys. Shard keys help traffic to be distributed across partitions; without a well-chosen shard key, some partitions are overloaded while others receive minimal traffic. Additionally, partitioning data can limit queries, because it restricts the kinds of queries that can be used across partitioned collections.
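A minimal sketch of creating a partitioned collection with the .NET SDK follows, assuming hypothetical names; /userId stands in for a high-cardinality shard key that spreads traffic evenly across partitions.

```csharp
// Sketch: create a collection partitioned on /userId (a hypothetical key).
using System;
using System.Collections.ObjectModel;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class PartitionedCollectionSetup
{
    static async Task Main()
    {
        var client = new DocumentClient(
            new Uri("https://<your-account>.documents.azure.com:443/"), "<your primary key>");
        var collection = new DocumentCollection
        {
            Id = "UserItems",
            PartitionKey = new PartitionKeyDefinition
            {
                Paths = new Collection<string> { "/userId" } // high-cardinality shard key
            }
        };
        await client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri("MyDatabase"), collection,
            new RequestOptions { OfferThroughput = 400 });
    }
}
```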

Common Mistake: Not Provisioning Enough Request Units

Some users also forget to provision enough request units to handle their workloads. The service throttles requests when request units run short, slowing the process down; however, throttled users receive a timer indicating when the request can be tried again.
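The sketch below (a hypothetical helper; the SDK also performs some retries automatically) shows how a throttled request surfaces in the .NET SDK: a DocumentClientException with status 429 whose RetryAfter value is the timer mentioned above.

```csharp
// Sketch: honor the server-suggested back-off after a 429 (throttled) response.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

static class ThrottleAwareWriter
{
    public static async Task CreateWithRetryAsync(
        DocumentClient client, Uri collectionUri, object document)
    {
        while (true)
        {
            try
            {
                await client.CreateDocumentAsync(collectionUri, document);
                return;
            }
            catch (DocumentClientException ex) when ((int?)ex.StatusCode == 429)
            {
                // Wait for the interval the service suggests, then retry.
                await Task.Delay(ex.RetryAfter);
            }
        }
    }
}
```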
Part 04 of four

Azure Cosmos DB configuration: Technical Process

The Azure Cosmos DB configuration process begins with creating a Cosmos DB account, which can be set up and configured in a few minutes using only a web browser.

Azure Cosmos DB Configuration

To configure Azure Cosmos DB, an account is required. Once an account is made, the user can create a new collection named UserItems and, from there, specify a partition key. Additionally, it is essential to remember that Azure Cosmos DB is built on multiple data models, including document, graph, key-value, table, and columnar models. These are supported by software development kits (SDKs) available in multiple languages. Moreover, when developing an application with Azure Cosmos DB, the following APIs are available: SQL API, MongoDB API, Table API, Gremlin API, and Cassandra API.

Technical Process

Step 1- Prerequisites

It is vital to download and use the free Visual Studio 2017 Community edition for the configuration, making sure that Azure development is enabled during the Visual Studio 2017 setup. From there, the user must create a Cosmos DB trial account or use an existing Azure subscription. Alternatively, the Azure Cosmos DB Emulator can be used at the URL https://localhost:8081; the emulator provides a primary key for authenticating requests. The user can follow the Azure Cosmos DB Emulator guide to set up the emulator, after which the Visual Studio solution setup tutorial can be completed.
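As a minimal smoke test, the sketch below connects to the emulator from .NET; the endpoint is the URL above, and the key shown is the emulator's fixed, published primary key from the Emulator guide.

```csharp
// Sketch: verify connectivity to the local Azure Cosmos DB Emulator.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

class EmulatorSmokeTest
{
    const string EndpointUrl = "https://localhost:8081";
    // The emulator's well-known, published primary key (see the Emulator guide).
    const string PrimaryKey =
        "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==";

    static async Task Main()
    {
        using (var client = new DocumentClient(new Uri(EndpointUrl), PrimaryKey))
        {
            var account = await client.GetDatabaseAccountAsync();
            Console.WriteLine($"Connected to emulator account: {account.Id}");
        }
    }
}
```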

Step 1A- System Requirements

The user must have a 64-bit Windows Server 2012 R2, Windows Server 2016, or Windows 10 operating system. Hardware requirements are also essential: at least 2 GB of RAM and 10 GB of available hard disk space.

Step 2- Selecting an API

There are some important things to note when selecting an API. Core (SQL) and the MongoDB API are used primarily for document data, while Gremlin is used for graph data; other APIs include Azure Table and Cassandra. Specifically, the SQL API is mainly used for creating new non-relational document databases and is chosen by those who want to query using familiar SQL syntax. The Gremlin API is used to build a graph database model and traverse relationships among different entities. The MongoDB API is used to migrate data from a MongoDB database to Azure Cosmos DB's managed service, while the Table API is used to migrate data from Azure Table storage to Azure Cosmos DB's premium table offering. Lastly, the Cassandra API is used to migrate data from Cassandra to Azure Cosmos DB's storage.

Step 3- Creating an Azure Cosmos DB Account

The user must first sign into the Azure portal. From there, the user can select "Create a resource", then choose Databases, and then Azure Cosmos DB. On the Azure Cosmos DB account page, the user can select the basic settings for the account: subscription, resource group, account name, API, and location. After the basic settings have been entered, the user can review them and select Create. It usually takes a few minutes for the account to be created; once it is, a portal page will display showing that the deployment is complete.

Step 4- Get the Completed Solution

After the prerequisite steps are done, the user must open the downloaded GetStarted.sln solution file in Visual Studio. From there, the user must go to Solution Explorer, right-click the "GetStarted" project, and select "Manage NuGet Packages."

Once in the NuGet tab, the user must select Restore to restore the references to the Azure Cosmos DB .NET SDK. The user must then update the EndpointUrl and primary key values found in the App.config file; these values are described in the "Connect to the Azure Cosmos DB account" section. The user can then select Debug and "Start Without Debugging" (or press Ctrl+F5) to run the app.

Step 5- Connect to the Azure Cosmos DB Account

The user can then go back to Solution Explorer and select Program.cs, which opens a code editor window. The following references must then be added at the beginning of the file, as shown below: System.Net, Microsoft.Azure.Documents, Microsoft.Azure.Documents.Client, and Newtonsoft.Json.
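Those references, as added at the top of Program.cs, are:

```csharp
using System.Net;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Newtonsoft.Json;
```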

Two constants and the client variable must then be added to the public Program class: the EndpointUrl and primary key values that were updated in the App.config file in Step 4. These two values will be used to connect to Cosmos DB through the API. From there, the user must go back to the Azure Cosmos DB account's left navigation panel, select "Keys", and copy the keys from the Azure portal into the code: under "Read-write Keys", copy the URL value using the copy button at the right and paste it into "<your endpoint URL>" in Program.cs, then copy the primary key value and paste it into "<your primary key>" in Program.cs. The user must then add a new asynchronous task called "GetStartedDemo", which instantiates a new DocumentClient called "client". From there, the GetStartedDemo task can be started; it will catch exceptions and write them to the console. Pressing F5 will run the app, and once the message "End of demo" appears, the connection was successful and the user can press any key to exit the window.
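A condensed sketch of what Program.cs looks like at this point follows; the placeholder values are the ones copied from the portal, and the file assumes the references above plus System and System.Threading.Tasks.

```csharp
// Sketch of Program.cs after Step 5; placeholders come from the Azure portal.
public class Program
{
    private const string EndpointUrl = "<your endpoint URL>";
    private const string PrimaryKey = "<your primary key>";
    private DocumentClient client;

    static void Main(string[] args)
    {
        try
        {
            var program = new Program();
            program.GetStartedDemo().Wait();
        }
        catch (DocumentClientException de)
        {
            Console.WriteLine("{0} error occurred: {1}", de.StatusCode, de.Message);
        }
        finally
        {
            Console.WriteLine("End of demo, press any key to exit.");
            Console.ReadKey();
        }
    }

    private async Task GetStartedDemo()
    {
        this.client = new DocumentClient(new Uri(EndpointUrl), PrimaryKey);
        await this.client.OpenAsync(); // warm up the connection
        // Steps 5A-5C add database, document, and query calls here.
    }
}
```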

Step 5A- Creating A Database

An Azure Cosmos DB database should be created because it is the container for the JSON document storage that is partitioned across different collections. The database can be created with the "CreateDatabaseIfNotExistsAsync" method of the DocumentClient class. Furthermore, a collection is a container of JSON documents and the associated JavaScript application logic, and it can be created through the analogous "CreateDocumentCollectionIfNotExistsAsync" method.

It is important to note that Azure Cosmos DB supports deleting databases; doing so removes the database along with its child resources, including collections and documents.
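A minimal sketch of Step 5A inside the GetStartedDemo task follows, using the hypothetical names FamilyDB and FamilyCollection; the delete call is shown commented out, since it removes the database and all of its child resources.

```csharp
// Inside GetStartedDemo: create the database and collection if missing.
await this.client.CreateDatabaseIfNotExistsAsync(new Database { Id = "FamilyDB" });
await this.client.CreateDocumentCollectionIfNotExistsAsync(
    UriFactory.CreateDatabaseUri("FamilyDB"),
    new DocumentCollection { Id = "FamilyCollection" });

// Deleting the database removes it together with all child resources:
// await this.client.DeleteDatabaseAsync(UriFactory.CreateDatabaseUri("FamilyDB"));
```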

Step 5B- Creating JSON Documents

JSON documents can be created using the "CreateDocumentAsync" method of the DocumentClient class. It is important to remember that JSON documents must have their ID serialized as the JSON "id" property.
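A minimal sketch of Step 5B follows; the Family type and its values are hypothetical, and the JsonProperty attribute is what serializes the Id property as the required JSON "id".

```csharp
// Sketch: a POCO whose Id serializes to the JSON "id" field.
public class Family
{
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    public string LastName { get; set; }
}

// Inside GetStartedDemo: insert one document into the collection.
await this.client.CreateDocumentAsync(
    UriFactory.CreateDocumentCollectionUri("FamilyDB", "FamilyCollection"),
    new Family { Id = "Andersen.1", LastName = "Andersen" });
```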

Step 5C- Query Azure Cosmos DB Resources

Additionally, it is important to remember that Azure Cosmos DB supports both LINQ and Azure Cosmos DB SQL syntax for running queries against the sample documents.
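A minimal sketch of Step 5C follows (it additionally requires using System.Linq), expressing the same filter in Cosmos DB SQL syntax and in LINQ against the document created above.

```csharp
// Inside GetStartedDemo: the same filter in SQL syntax and in LINQ.
var collectionUri = UriFactory.CreateDocumentCollectionUri("FamilyDB", "FamilyCollection");
var options = new FeedOptions { EnableCrossPartitionQuery = true };

// Cosmos DB SQL syntax
var sqlQuery = this.client.CreateDocumentQuery<Family>(
    collectionUri, "SELECT * FROM Family f WHERE f.LastName = 'Andersen'", options);

// LINQ syntax
var linqQuery = this.client.CreateDocumentQuery<Family>(collectionUri, options)
    .Where(f => f.LastName == "Andersen");

foreach (var family in sqlQuery)
{
    Console.WriteLine("Matched document: {0}", family.Id);
}
```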

Conclusion

After all the steps are followed, the system will be successfully connected to the user's Azure Cosmos DB account, which can then be accessed by internal reporting applications.
Sources

From Part 03
Quotes
  • "Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. With a click of a button, Cosmos DB enables you to elastically and independently scale throughput and storage across any number of Azure regions worldwide. You can elastically scale throughput and storage, and take advantage of fast, single-digit-millisecond data access using your favorite API including SQL, MongoDB, Cassandra, Tables, or Gremlin. Cosmos DB provides comprehensive service level agreements (SLAs) for throughput, latency, availability, and consistency guarantees, something no other database service offers."
  • "Cosmos DB enables you to build highly responsive and highly available applications worldwide. Cosmos DB transparently replicates your data wherever your users are, so your users can interact with a replica of the data that is closest to them."
Quotes
  • "In Azure, IaC often comes in the form of Azure Resource Manager (ARM) templates. While you can deploy the Cosmos DB account via an ARM template, there is no way to specify the collections as code. This can lead to interesting problems in deployment pipelines"
  • "Ultimately, we achieved IaC by coming up with our own JSON representation of collections and writing an idempotent script to establish them using the Azure CLI. This works, but it’s more complicated than a first class ARM solution would be. Despite much demand from the community, Microsoft seems content to leave collections out of the ARM templates, so this is something we’ll have to live with for the foreseeable future."
  • "If any single partition exceeded its share of 10k RUs, then that partition would get rate limited until we scaled up the entire collection’s throughput. This hurt because the other partitions were seeing less traffic and didn’t need to be scaled up. When considering whether to partition your collection, try to come up with a shard key that will spread traffic evenly across your partitions."
Quotes
  • "You define that level of performance by provisioning, for each container of your database, an amount of Request Units; more precisely, you set how many Request Units you expect the container to be able to serve per second. Provisioned Request Units can start low and scale to tens of thousands or even more."
  • "hey define what I would call a “work capacity”. Each request you issue against your container — any kind of request: reads, writes, queries, executions of stored procedures etc. — has a corresponding cost that will be deducted from your RU credits. So if you provision 400 RU/second and issue a query that costs 40 RU, you will be able to issue 10 such requests per second; any request beyond that will get throttled."
Quotes
  • "Use this calculator to determine the number of request units per second (RU/s) and the amount of data storage needed by your application. Read the Request Units in Azure Cosmos DB article for more information. "
Quotes
  • "Unique keys add a layer of data integrity to an Azure Cosmos container. You create a unique key policy when you create an Azure Cosmos container. With unique keys, you make sure that one or more values within a logical partition is unique. You also can guarantee uniqueness per partition key."
Quotes
  • "Lets say you want every document to have an unique combination of name and country. And secondly, you want to have unique list of user titles inside your users array. Its actually quite easy to do that, but the sad part is that you can only add unique keys at the creation of a collection. After that you’re not able to edit them anymore."
Quotes
  • "You can import from JSON files, CSV files, SQL, MongoDB, Azure Table storage, Amazon DynamoDB, and even Azure Cosmos DB SQL API collections. You migrate that data to collections and tables for use with Azure Cosmos DB. The Data Migration tool can also be used when migrating from a single partition collection to a multi-partition collection for the SQL API."