Azure SQL Database – How to Choose the Right Service Tier

So you've decided to use Azure SQL Database. You head for the portal and are suddenly faced with a number of questions. Sure, the choices you make have a number of repercussions, but what are they? You need a simple guide to get started.

The cloud is now a reality for working database consultants, who are involved in many projects that include cloud solutions, mostly on Azure or AWS. My aim in this article is to help you understand the issues that affect how you choose a service tier, and particularly how to make sure that the tier you choose is adequate for your database. Even if you already know what a “Service Tier” is, I hope that this article will have some information that helps you when working with SQL Databases.

Choosing a Service Tier is not as straightforward a task as it may seem. This is a very important decision because it affects more than the performance of the database.

On-Premise vs. Cloud

Azure SQL Database is the relational Database-as-a-Service product from Microsoft. This means that Microsoft takes care of all the instance-level administration tasks, so that, as the customer, you need only take care of your own “space” – the database model and the basic settings related to Business Continuity – in other words the high availability (HA) and disaster recovery (DR) strategy.

Azure offers more than just this product, but SQL Database is the choice that requires the least administration work from you, and far less than the opposite extreme, where you opt to have a ‘dedicated’ SQL Server installed on a physical server in your data center. There are also intermediate ‘hybrid’ options, where you keep your server on premise but supplement it with some cloud-based services, such as hosting an Availability Groups disaster recovery replica in Azure.


The right choice always depends on the available budget, any special requirements for your data, and the degree to which you need to control the server’s administration. However, your budget is going to sway your decisions: if you want dedicated professionals, a data center and your own servers, you pay much more than if you share a server environment with other customers. There are advantages and disadvantages to every choice.

Where is my instance?

Azure SQL Database has no concept of a “SQL Server instance” that is equivalent to an on premise instance. However, you can create a SQL Server ‘logical instance’ and associate your databases with it. This way, you can connect to this single logical instance and see all the related databases, giving the illusion that you are working on a regular on premise SQL Server instance.
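
For example, once you connect to the logical instance, a quick query against its master database lists every database associated with it. This is a minimal illustration, with no assumptions beyond having a logical server to connect to:

    -- Run against the master database of the logical server.
    -- sys.databases lists every database hosted on that logical server,
    -- which is as close as SQL Database gets to "browsing the instance".
    SELECT name, create_date
    FROM sys.databases
    ORDER BY name;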

Accessing my database

The Azure Portal is the place where all the configuration work can be done, but SQL Server Management Studio also supports Azure SQL Database, so you can connect by providing the address of your logical instance, which should be “<Logical Instance Name>.database.windows.net”, and indicating “SQL Authentication” as the authentication method. Make sure that your IP address is allowed to connect through SQL Database’s firewall.
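
If you prefer T-SQL to the portal for that last step, a server-level firewall rule can also be created from the master database. The sketch below is only an illustration: the rule name and IP address are placeholders to replace with your own values.

    -- Run against the master database of the logical server.
    -- sp_set_firewall_rule creates (or updates) a server-level firewall rule;
    -- the rule name and IP range below are placeholders.
    EXECUTE sp_set_firewall_rule
        @name             = N'AllowMyWorkstation',
        @start_ip_address = '203.0.113.10',
        @end_ip_address   = '203.0.113.10';

    -- Review the server-level rules currently in place.
    SELECT name, start_ip_address, end_ip_address
    FROM sys.firewall_rules;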

SQL Server Management Studio (SSMS) is now a separate product and, from SQL Server 2016 onwards, is no longer included in the installation media. In order to install SSMS, you will need to download it separately. This may seem like bad news, but it is a great advantage to have it separate, because it can now keep up with all the changes in Azure SQL Database and Azure SQL Data Warehouse as well as in all the supported on premise versions of SQL Server. It is now independent in terms of patches and releases.

Service Tiers on my Pillow

When you create a SQL Database for the first time, you will bump into a required setting: the service tier. The first impression is that a “Service Tier” is related to price, and so the natural urge will be to choose the cheapest one. Resist the urge!

A service tier is more than a “price”: In fact, the price merely reflects the features that the service tier offers. The more features you need for your applications, the more you pay. That’s the cloud law.

Keep in mind that when you select a service tier, you are also defining the range of business continuity options available to you, the maximum allowed database size, the backup retention period and, of course, the expected performance. Confusing? Yes, it is not that simple.

A service tier has two main variables: the SQL Database Edition and the Database Throughput Units (DTU).

There are three SQL Database editions: Basic, Standard and Premium. The DTUs can vary from 5 to 1750 and are grouped by edition.
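
If you already have a SQL Database and are not sure what it is currently running as, you can query both values directly. The following is a small sketch using DATABASEPROPERTYEX, which exposes the edition and the Service Objective (the performance level within an edition, covered later in this article) on Azure SQL Database:

    -- Run inside the user database you want to inspect.
    SELECT
        DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition,
        DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;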

What are the differences between editions?

The SQL Database edition on Azure is equivalent to the on-premise SQL Server edition, in that a cheaper edition supports fewer features than a more expensive one, and this is reflected, proportionally, in the price that you pay.

Each edition has a range of features and limitations, such as the database size, point-in-time restore window, business continuity options and the number of DTUs that you can choose.

Maximum database size

Azure SQL Database supports up to 1 TB of data, in the Premium tier only. At the opposite extreme, the Basic edition supports databases up to 2 GB. With the Standard tier, you can store databases up to 250 GB.

Putting it another way:

  • Database Size <= 2 GB – Supported by all three editions;
  • Database Size between 2 GB and 250 GB – Supported by Standard and Premium;
  • Database Size between 250 GB and 1 TB – Supported by Premium edition only;
  • Database Size > 1 TB – Not supported (so far);
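
As a rough sketch of how the size limit ties into the edition choice, the T-SQL below creates a database pinned to the Standard edition with a 250 GB cap; the database name and the S1 service objective are placeholders, and the statement is run against the master database of the logical server.

    -- Placeholder database name; EDITION, SERVICE_OBJECTIVE and MAXSIZE must
    -- stay within the limits of the chosen edition (Standard caps at 250 GB).
    CREATE DATABASE SalesDb
    (
        EDITION           = 'Standard',
        SERVICE_OBJECTIVE = 'S1',
        MAXSIZE           = 250 GB
    );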

Based on this information, it is already possible to rule out the editions on which your database cannot run.

But a database is not only “a size”; there are other factors that may influence the choice of edition. Knowing the Recovery Time Objective (RTO – the targeted time to recover a system after a failure) and the Recovery Point Objective (RPO – the maximum period over which data might be lost after a failure) is mandatory, because these will determine the Business Continuity options available to you and also how many days back in time you can go when using a point-in-time restore.

Business Continuity Options

Here is a summary of the available Business Continuity options:

  • Geo-Restore: Allows you to restore the most recent daily backup to a region different from the one where your database is located, making use of backup geo-replication. This option is available in all editions.
  • Standard Geo-Replication: We can compare this option to a kind of “log shipping”. A fixed disaster recovery pair needs to be defined and, if needed, you can initiate a failover from the primary database to the configured DR pair. The DR pair is non-readable. This option is available in the Standard and Premium editions.
  • Active Geo-Replication: In the same way that I compared Standard Geo-Replication with “log shipping”, I can compare Active Geo-Replication with “AlwaysOn Availability Groups”. Using this option, you can configure up to four readable secondaries spread across different regions (see the T-SQL sketch after this list). This is the most powerful Business Continuity option for SQL Database, but it is only available in the Premium edition.
  • Point-in-Time restore: With a Database-as-a-Service product, we don’t need to perform certain administrative tasks that would be necessary with an on premise installation, and backups are one of them. For this reason, we have the option to do point-in-time restores, which works pretty much as it does on an on premise instance. The point here is how far back the backup retention goes, which depends on the SQL Database edition. Here is how far back you can go, per edition, with a point-in-time restore:
    • Basic – 7 days back;
    • Standard – 14 days back;
    • Premium – 35 days back;
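
As a hedged illustration of the Active Geo-Replication option mentioned above, the T-SQL below adds a readable secondary on another logical server and then checks the replication link. The database and server names are placeholders, the partner server must already exist, and the ALTER DATABASE statement is run against the master database of the primary server.

    -- Placeholder names: SalesDb (database) and myserver-dr (secondary server).
    ALTER DATABASE SalesDb
        ADD SECONDARY ON SERVER [myserver-dr]
        WITH (ALLOW_CONNECTIONS = ALL);   -- readable secondary

    -- From the primary database itself, check the state of the link.
    SELECT partner_server, replication_state_desc, last_replication
    FROM sys.dm_geo_replication_link_status;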

Finally: Performance on SQL Database

Performance. This is perhaps the factor that attracts the most interest and, in fact, it is the one that makes the difference both to the “user feeling” when accessing the database and to the bill to be paid at the end of the billing period.

On SQL Database, the “horsepower” is measured in Database Throughput Units, or just “DTUs”. This unit is expressed as an integer and can vary from 5 to 1750. Every database edition offers one or more “Service Objectives”, which are directly related to the number of DTUs and the price to be paid.

In the following image, you can find the list of “Service Objectives” (Basic, S0, S3, P3, P11, etc.) per SQL Database edition and their respective prices. Notice that Microsoft is always updating its offer, so these prices and Service Objectives per edition may be outdated by the time you read this article:

[Image: Service Objectives and prices per SQL Database edition]

When a SQL Database is created, you need to pick one option from the list of Editions + Service Objectives, and a question that comes up most of the time is: “How many DTUs does my database need?”.

Before answering this question, another question needs to be answered: “What is a DTU?”.

Well, I have already mentioned that DTU stands for Database Throughput Units, but what does a DTU really measure? What does that number really mean? In a nutshell, a DTU is a unified measure of CPU, memory and read/write rates. The higher the number of DTUs, the better the performance of your database.

Having said that, we can then go back and answer the first question: In order to know the minimum number of DTUs that your database needs, there are two approaches:

  • Guess;
  • Use a more “scientific” way to get the right value;

You may think that the first option is a joke but, in fact, it is a valid one. Even with all the associated risks, you can choose the number of DTUs that your intuition tells you is right and adjust it based on your subsequent experience of what the database actually needs. Thanks to the scalable cloud model, if your database needs more horsepower, you can scale up; if the current number of DTUs is too high, you can scale down… You might even be very precise and hit the bull’s eye.
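
Scaling up or down is itself a simple operation. Here is a minimal sketch, assuming a database called SalesDb (a placeholder name): change the service objective with ALTER DATABASE and then check, from the master database, when the new objective is reported, since the change is applied asynchronously.

    -- Request a new edition/service objective for the database.
    ALTER DATABASE SalesDb
        MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');

    -- From the master database, poll until the new objective is reported.
    SELECT d.name, dso.edition, dso.service_objective
    FROM sys.databases AS d
    JOIN sys.database_service_objectives AS dso
        ON d.database_id = dso.database_id
    WHERE d.name = 'SalesDb';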

The second option is a more conservative, responsible and dignified way to choose the number of DTUs, and it is based on real data about your database activity. I am talking about the DTU Calculator (http://dtucalculator.azurewebsites.net/), an online service that helps us by recommending the most appropriate Service Objective for a database. You just need to download a PowerShell script, available on the DTU Calculator website, and run it on the server where your database is located. While the script runs, the following counters are measured and recorded in a CSV file:

  • Processor – % Processor Time
  • Logical Disk – Disk Reads/sec
  • Logical Disk – Disk Writes/sec
  • Database – Log Bytes Flushed/sec

Once the collection is done, you just need to upload the file generated by the script and interpret the results. Here is a sample of one of the charts generated by the DTU Calculator, indicating that 89.83% of the database load would run well under the S3 Service Objective of the “Standard” SQL Database edition.

[Image: DTU Calculator results chart showing Service Objective coverage for the collected workload]

It is very important to understand that this result reflects only the workload captured during the data collection. You also need to evaluate whether the proposed service objective meets your business continuity requirements. For example, if you have a requirement to configure Active Geo-Replication, you cannot select “Standard – S3”, because that option is only available in the Premium edition.
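
Once the database is actually running in Azure, you can keep validating the choice: sys.dm_db_resource_stats reports resource consumption as a percentage of the limits of the current service objective, in 15-second intervals for roughly the last hour. A small sketch of checking whether the chosen tier still has headroom:

    -- Run inside the user database. Values near 100% suggest the service
    -- objective is the bottleneck; consistently low values suggest room to
    -- scale down.
    SELECT
        end_time,
        avg_cpu_percent,
        avg_data_io_percent,
        avg_log_write_percent
    FROM sys.dm_db_resource_stats
    ORDER BY end_time DESC;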

Summary: Putting all the Dim Sum on the same plate

Now you know all you need to consider when choosing the right service tier for your SQL Database, but that’s a lot of information. Here is a decision tree that will help you to reach the optimal point for your database.

[Image: Decision tree for choosing a SQL Database service tier]

Choosing the right service tier is not easy, because it is not just performance that matters. In this article we covered the other factors you need to consider, as well as how to find the service tier that best fits your requirements for database performance and business continuity, at a price that fits your budget.

  • George

    Thank you!
    Finally, someone has explained this stuff in an understandable way.

  • Carl Rabeler [carlrab@MSFT]

    Good stuff – backup retention period for standard is now 35 days. 🙂

    • Carl Rabeler [carlrab@MSFT]

      also all tiers now support active geo-replication (standard geo-replication is deprecated)

      • Murilo Miranda

        Thank you for reading Carl!

        You are right. I already sent a request to update the article and will also add a link to the official documentation, as things related to Azure are always being updated…

        Cheers!

  • Murilo Miranda

    I am glad you liked it!

  • Vijayaraghavan TV

    Well written. I have read about this in various posts, but yours is well articulated and gives a step by step approach to easy understanding.

  • Pini Krisher

    Hi
    Great post.
    1 comment: Standard – secondary – can be readable.

  • Warren van Rooyen

    This was simply awesome. I’m new to the Azure and this gave me some great insight.

  • ALZDBA

    Nice work, Murilo !

  • Bill Gibson [billgib@MSFT]

    Nice article. A minor correction, the acronym DTU is now interpreted as ‘database transaction units’ which we have found is more understandable. Nothing else about DTUs has changed. You can find out more about DTUs here: https://azure.microsoft.com/en-us/documentation/articles/sql-database-what-is-a-dtu/

  • Murilo Miranda

    First of all, my “thank you” to everyone who is reading this article.

    As you may notice Azure is evolving a lot (and this is awesome) and it is very difficult to keep the article updated.

    Here is a quick summary of what changed since I wrote the article:
    – The retention period is now 35 days for Standard
    – On Business Continuity side, Active Geo-Replication is now the only option and it is supported for Basic, Standard and Premium.
    The Active Geo-Replication is now enforcing Read-Only replicas (the non-readable option is greyed out)
    – There’s a new Service Tier: P15, which offers 4000 DTUs
    – DTU now means: Database Transaction Units

    For more information, check this link: https://azure.microsoft.com/en-us/documentation/articles/sql-database-what-is-a-dtu/

    Again, thank you Bill, Warren, Johan, Pini, Vijayaraghavan, Carl and George.