John McCormack DBA

SQL Server Databases and Cloud


You don’t need to run CHECKDB on Azure SQL DB

12th February 2021 By John McCormack 2 Comments

Featured image by Tumisu from Pixabay

CHECKDB on Azure SQL DB?

This is a selfish post, because I’ll probably forget and go googling for info on this topic, so I thought I would link to an authoritative answer to the question: do you need to run DBCC CHECKDB on Azure SQL DB?

https://azure.microsoft.com/en-gb/blog/data-integrity-in-azure-sql-database/

Microsoft distinguished engineer Peter Carlin says you don’t need to; data integrity is monitored by Azure on your behalf. And if something were to go wrong, Microsoft would work with you to put things right.

Although Peter says you can run the checks if you want, his post gives me enough peace of mind to stop running them on my Azure SQL DB instances. Let me know in the comments if you agree.
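
If you do still want to run an occasional check yourself, here is a minimal sketch. It runs against the database you are connected to, and PHYSICAL_ONLY keeps the run shorter by skipping the logical checks:

-- Optional manual integrity check against the current database
-- PHYSICAL_ONLY checks page and allocation consistency only, which is faster
DBCC CHECKDB WITH NO_INFOMSGS, PHYSICAL_ONLY;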

For more Azure content, check out my Azure tag.

Filed Under: Azure SQL DB, front-page Tagged With: azure sql db, DBCC, integrity checks

How to find out Azure SQL DB Startup time

10th February 2021 By John McCormack 1 Comment

Wakey wakey, SQL DB

Azure SQL DB Startup time

The query

SELECT sqlserver_start_time FROM sys.dm_os_sys_info;

Why write a post about this?

The traditional methods used to find the startup time of SQL Server don’t work in Azure SQL DB.

I searched high and low to find this, and thought I had to share it and hopefully make it search-engine friendly. A traditional Google or Bing search wasn’t bringing up the best way to find this out. I saw a lot of complicated queries that pull data, convert it and estimate the startup time using functions and all kinds of magic. Actually, it turns out that the startup time is provided for you in sys.dm_os_sys_info (which works in Azure SQL DB). It also works in SQL Server, which is a bonus.
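
For example, the same DMV gives you a simple uptime calculation with no conversion magic at all:

-- Startup time and hours of uptime, straight from the DMV
SELECT sqlserver_start_time,
       DATEDIFF(HOUR, sqlserver_start_time, SYSDATETIME()) AS uptime_hours
FROM sys.dm_os_sys_info;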

Permissions to check Azure SQL DB Startup time

The permissions you need to run this query vary depending on the pricing tier of your Azure SQL DB. I ran this successfully on a P2 and a basic tier database.

On SQL Server, requires VIEW SERVER STATE permission.
On SQL Database Basic, S0, and S1 service objectives, and for databases in elastic pools, the Server admin or an Azure Active Directory admin account is required. On all other SQL Database service objectives, the VIEW DATABASE STATE permission is required in the database.

https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-os-sys-info-transact-sql?view=sql-server-ver15
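
If the query fails with a permissions error on one of the higher service objectives, granting VIEW DATABASE STATE in the database is enough. A sketch, with a hypothetical user name:

-- Run in the user database; [reporting_user] is a hypothetical database user
GRANT VIEW DATABASE STATE TO [reporting_user];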

I hope this one-line T-SQL script was useful. Please let me know in the comments if you know a better way to get the startup time of Azure SQL DB using T-SQL.

While you’re on my site, why not take a look to see if you can save money on your Azure SQL DBs and other Azure resources with my guide: 7 ways for data teams to save money in Azure.

Featured image by congerdesign from Pixabay

Filed Under: Azure SQL DB, front-page Tagged With: azure, azure sql db, azure sqldb managed instance

7 ways for data teams to save money in Azure

19th January 2021 By John McCormack Leave a Comment

Save money in Azure


In this series of posts, I list 7 ways to quickly save money in Azure by using cost optimisation principles. Get on top of your cloud costs and start saving money by putting these into action. Whilst this series is specific to Azure, most of the principles can be applied to other public cloud providers; the only thing that differs is the product. Think of RDS as the equivalent of SQL DB, or S3 as the equivalent of Storage Accounts.

It’s important to keep on top of your cloud costs. Operating in the cloud makes it easy to spin up new instances in minutes. Whilst that is great and allows companies to work in an agile manner, the switch to an Operational Expenditure (OPEX) model means that cost increases can be gradual and go unnoticed.

1. Review your backup retention policy

Azure IaaS SQL Backups – Stop burning money

2. Right size your Azure SQL DBs and managed instances

Your Azure SQL Database and Managed Instance is too big

3. Turn non production instances off out of hours

Turn the cloud off at bedtime to save 70%

4. Right Size your VMs to save money in Azure

Your Azure SQL Virtual Machine might be too big

5. Consider moving to Azure SQL DB serverless

Save money with Azure SQL DB serverless

6. Reserved instances

Save with reserved instances – even up to 73%

7. Delete unused instances

Delete unused instances to save money in Azure

Bonus steps:

These bonus steps are also important.

  1. Implement Elastic Pools to share resources
  2. Look to see if you are eligible for Azure Hybrid Benefit

Everybody wins

By optimising your costs in Azure and ensuring you are paying the right price for all of your services, everybody wins. Not only are you helping your company save money; it is good for your career, and it could even save jobs in your company. Not to mention that turning off what you don’t use is good for the environment too.

Need some help? – Book a call

Please get in touch if you would like to schedule a free introductory 15 minute call for some help in reducing your Azure bill.

Filed Under: Azure, cost-optimization Tagged With: azure, azure billing, azure iaas, azure sql db, cost optimisation, cost optimization

Your Azure SQL Database and Managed Instance is too big

15th January 2021 By John McCormack Leave a Comment

Don’t buy for the future

With the cloud, it’s important to understand the concepts of scaling and elasticity. Your cloud services should be scaled at the right size to handle current workloads, with a small buffer. With Azure SQL Database, there is no need to buy that 32 vCore service when all you need right now is 4; the rest will be available when you need them. You should be ready to scale up or out when the time is right, but not before. With the cloud, you should shift your thinking from a CAPEX operating model to an OPEX operating model.

CAPEX vs OPEX

Capital Expenditure (CAPEX) describes an up-front purchase made to provide value into the future, e.g. a server. Once it is purchased, you don’t need to buy it again and can use it until it outlives its usefulness. It’s a big upfront expense to provide long-term value. For something like a database server, you might forecast what size you will need in 5 years and buy accordingly.

Operational Expenditure (OPEX), on the other hand, describes ongoing costs and operating expenses. Items such as electrical power, rent and staff salaries come under OPEX. Pay As You Go cloud solutions also fall under OPEX as they are ongoing costs. With this operating model, you should only pay for what you need, and adjust your scale as driven by demand, not future projections.

Resource utilisation and purchase models

Example graph for Azure SQL Database

DTU Model

Small to medium single databases can use the DTU purchase model, which is a blend of CPU, memory and IO. DTU instance types come with fixed storage. When monitoring performance, you should be looking mainly at DTU percentage used and storage percentage used.

If you are averaging above 85% DTU usage and have regular peaks higher than that, you should scale up. You can adjust this figure based on how relaxed or nervous you are about the DTU getting pinned at 100%.

If you are averaging below 50% and have few or no peaks above this, you can scale down (provided the lower tier has enough storage). Again, adjust this to suit your own needs, but there is no need to pay double unless you absolutely need to bake in a lot of spare capacity.
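
One way to check your averages and peaks is sys.dm_db_resource_stats, which keeps roughly an hour of 15-second samples for the current database. The highest of the three percentages is what drives the DTU percentage:

-- Average and peak resource use for the current database (~1 hour of history)
SELECT AVG(avg_cpu_percent)       AS avg_cpu,
       MAX(avg_cpu_percent)       AS peak_cpu,
       AVG(avg_data_io_percent)   AS avg_data_io,
       AVG(avg_log_write_percent) AS avg_log_write
FROM sys.dm_db_resource_stats;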

vCore

The vCore purchase model provides more flexibility than the DTU model: you can choose your compute and storage resources separately. There is no way to allocate extra memory, however; memory is bound to the number of vCores you have. Managed Instances only use the vCore purchase model.

I would apply a similar rule to scaling up or down here based on needs. Any instance using < 50% of CPU is a good candidate for scaling down.
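
Scaling is a one-line change. A sketch, with a hypothetical database name, moving to General Purpose on Gen5 hardware with 4 vCores:

-- Scale [MyAppDb] (hypothetical name) to General Purpose, Gen5, 4 vCores
-- The operation is online, with a brief reconnect when it completes
ALTER DATABASE [MyAppDb] MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_4');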

Azure SQL Database Service Tiers

Ensure you are on the right service tier. Azure SQL Single DB comes with a choice of Basic, Standard and Premium.

Basic (DTU)

If you are on the basic service tier, then from a cost optimization perspective there is not too much more to say. It has the cheapest instance types and the lowest resources, so if it is doing a job for you, there is no need to change. Be aware that it only offers 7 days of backup retention.

Standard (DTU)

A large number of workloads will be running on standard. If you are on the standard tier, you can scale down based on utilisation; however, you should bear a couple of things in mind.

  • S2 and below do not support columnstore indexes
  • S1 instance types and below use HDD storage, which might have an impact on your query durations
    • S2 and above use SSD

Premium (DTU)

Moving to the premium service tier should be well thought out, as it includes the most expensive instances and may mean that you are over-provisioned. If you are on a premium instance, look closely at your utilisation, particularly DTU percent used. For seemingly over-provisioned instances, you should also take note of a couple of factors before making a decision. Premium is particularly good for IO-intensive queries due to a higher average IOPS and a lower IO latency.

Scaling down from premium service tier means:

  • A decrease in average IOPS from 25 to 1-4 IOPS per DTU
  • An increase in IO latency from 2ms to 5ms
  • A new maximum database size of 1 TB

* Changing instance size but staying with premium service tier affects resources only

If you can handle the drop in instance size and/or reduction in service tier, I would recommend scaling down. Remember to keep a closer eye on newly changed instances, and scale back up if performance is no longer acceptable.
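
The same ALTER DATABASE sketch works for changing edition. Moving a database (hypothetical name again) from Premium down to Standard S3 is a single statement:

-- Change both edition and service objective in one online operation
ALTER DATABASE [MyAppDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');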

https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tiers-dtu

General Purpose (vCore)

General purpose is the budget option, but it is suitable for a lot of production workloads. If you are currently on GP and you are happy with your performance, then do not change to a higher service tier.

Business Critical (vCore)

Most production workloads can run in Business Critical, but you might be over-provisioned. One read-only replica is built into the price, so you can offload read-only workloads to it. It doesn’t come cheap though: increased hardware and licensing make it more than double the price of a General Purpose instance. Scaling down to General Purpose will take away this replica, so beware if you use it. A good option may be to scale to a lower Business Critical instance, so that you still have a read replica built into the price.

Hyperscale (vCore)

Hyperscale is not currently available with Managed Instance (as of January 2021).

Your instances on Hyperscale are roughly double the cost of General Purpose but less than Business Critical. The main considerations are database size and log write throughput. On an exceptionally busy and large database, hyperscale is often the best choice.

Be aware of the reduced performance and features if you think you can scale down to General Purpose. Moving down a tier from Hyperscale is not a native or easy task; it isn’t supported, so you will need to export and import your data into a new database.

Scaling down to GP means:

  • Your database must be less than 4TB
  • You will move to premium remote storage, rather than local SSD
  • You cannot continue to use In-Memory OLTP
  • It’s a lot of work

https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tiers-general-purpose-business-critical

Summary

The main thing is to regularly review your instances and make sure you are not paying for too much headroom. Make sure that by scaling down, you still have the functionality you need. If you scale down and performance is not what you expected, you should scale back up. Cost optimization shouldn’t come at the expense of performance. It is about ensuring that you are paying for what you need, and nothing more.

If you missed post 1 in this cost optimization series, please have a read about Azure IaaS SQL Backups.


Filed Under: Azure, cost-optimization, front-page Tagged With: azure billing, cost optimisation, cost optimization, managed instance, SQL Database, SQL DB

Azure IaaS SQL Backups – Stop burning money

14th January 2021 By John McCormack 2 Comments

This post is part of a planned series on 7 ways for data professionals to save money in Azure. It’s so easy for costs to escalate in the cloud, and being proactive is the best way to optimise your costs and make sure you aren’t spending more than you need to. In this post, I look at how to cut costs by implementing a retention policy for your Azure IaaS SQL backups, ensuring you are not paying for more than you need.

Azure IaaS SQL Backups

This advice is specific to on-premises or IaaS backups which go directly into an Azure blob storage account. Whilst blob storage can be reasonably cheap, if you keep every backup with no retention policy, the costs will soon escalate.

Scenario

You have 5 databases on a SQL Server Instance. You take daily full backups of each database on your instance. You also take log backups every 15 minutes as each database is in full recovery mode. This means in 1 week, you will have 35 full backups and 3,360 transaction log backups. This multiplies to 1,820 full and 174,720 t-log backups over 52 weeks. Multiply this for 7 years or more and the costs can get very expensive.

Your company policy only requires point-in-time restore functionality for the previous 2 weeks. On top of that, daily full backups must be kept for the previous 3 months, and weekly full backups for between 3 months and 7 years.

For simplicity, let’s assume an average full backup size of 25 GB and a log backup size of 100 MB.
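
Rather than assuming, you can check your own averages: msdb records every backup taken on the instance. A sketch for the last 90 days:

-- Backup counts and average sizes per type over the last 90 days
SELECT CASE [type] WHEN 'D' THEN 'Full' WHEN 'L' THEN 'Log' END AS backup_type,
       COUNT(*) AS backup_count,
       AVG(backup_size) / 1024.0 / 1024.0 AS avg_size_mb
FROM msdb.dbo.backupset
WHERE backup_start_date > DATEADD(DAY, -90, GETDATE())
  AND [type] IN ('D', 'L')
GROUP BY [type];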

Costs with no planned retention policy

| Timescale | Full backups | Log backups | Blob storage tier (East US – GPV2 – LRS – Standard) | Monthly cost |
|---|---|---|---|---|
| Previous 7 years (5 DBs) | 12,775 | 1,226,400 | Hot | £4,784.79 + £1,856.92 = £6,641.71 |

Costs with a planned retention policy

| Timescale | Full backups | Log backups | Blob storage tier (East US – GPV2 – LRS – Standard) | Monthly cost |
|---|---|---|---|---|
| Previous 14 days | 70 | 6,720 | Hot | £27.13 + £10.42 = £38.55 |
| 15 days to 93 days | 390 | 0 | Cool | £110.45 |
| 94 days to 2,555 days | 1,758 | 0 | Archive | £45.40 |
| TOTAL | 2,218 | 6,720 | Mixed | £194.40 |

So if we optimise the storage to keep only the full and t-log backups that we need, and we use cool and archive storage correctly, we can save over £6,000 per month. There are four main ways to actively manage data retention so that you optimise your cloud costs without breaching your data retention policy.

1. Azure Blob Storage lifecycle management

Lifecycle management rules are a simple way to manage files within blob storage. You can either delete files or move them to a cheaper storage class. The rules are fairly customisable, so this is a basic option which works well without too much configuration.

There are huge savings to be made by moving storage to the archive tier. If you’re holding on to backups for regulatory purposes or to adhere to company policies, but won’t be using them regularly, the archive tier provides great value for money. Your data may take several hours to become available when you make a request, so it’s not suitable for any disaster recovery scenario, but provided you don’t mind waiting, you will cut your Azure bill substantially.

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-portal

2. SQL Server IaaS Agent extension

If you have SQL Server running in Azure VMs, you can opt to have them automatically backed up by the SQL Server IaaS Agent extension. Whilst there is some customisation available, you can’t retain your backups beyond 30 days. This makes it unsuitable for our scenario here, but if all you need is 30 days or fewer, then this is a great option. Having Azure automatically handle the backups gets you one step closer to Platform as a Service, whilst still retaining the flexibility of running a full SQL Server instance.


Automated backups for SQL Server 2014

Automated backups for SQL Server 2016+

3. Azure Backup (SQL Server VMs)

You can use Azure Backup, via a Recovery Services vault, to configure backups on multiple servers. You can choose to back up specific databases, or all of them, including any new ones added after configuration, by selecting AUTOPROTECT. There are very detailed retention options which allow you to set a fine-grained retention policy.

4. Roll your own solution

This isn’t needed as much as it used to be. It used to be the case that the lifecycle rules were very inflexible, and to keep certain backups and change storage options you needed to use your own script. My preferred choice was PowerShell with Azure Automation runbooks. With one client, this solution saved hundreds of thousands of pounds per year.

* I would only recommend this if the options above don’t provide you with the flexibility you need to manage your storage retention according to your own complex requirements.

P.S. Don’t forget backup compression

With all of these solutions, don’t forget to enable compression on your backups. This will help you save even more money in addition to the retention policy you have just put in place.
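
For the IaaS backups in this post, compression is a single option on the BACKUP statement. A sketch with hypothetical database, storage account and container names; it assumes a SAS-based credential matching the container URL already exists on the instance:

-- Compressed, checksummed full backup straight to blob storage
BACKUP DATABASE [SalesDb]
TO URL = 'https://mystorageacct.blob.core.windows.net/sqlbackups/SalesDb_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;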


Filed Under: Azure, cost-optimization, front-page Tagged With: azure billing, backups, cost optimisation, cost optimization, sql backups


John McCormack · Copyright © 2023