Monday, July 20, 2020

Checking Azure SQL DB service tier via T-SQL

If you have to manage or work with an Azure SQL database and want to know what service tier the database is currently running on, but you don't have access to the Azure subscription or CLI, or you want to check the status of the service tier after a scale up or scale down directly from the database, you can do it via T-SQL.

Just query the sys.database_service_objectives catalog view to obtain this information. It returns the service tier and also tells you whether the database is part of an elastic pool.

Basic usage (run it in the context of the database you need the information for):

SELECT * FROM sys.database_service_objectives;

This will return the following information for the current database:

I am running on a General Purpose tier with 1 vCore

What if you want the information for all the databases created?

Just change the context to the master database and execute the following query:

SELECT D.name
    , D.create_date
    , SO.edition
    , SO.service_objective
    , SO.elastic_pool_name
FROM sys.databases D
INNER JOIN sys.database_service_objectives SO
    ON D.database_id = SO.database_id;

Then you will be able to see each database tier and related elastic pools (if configured).

You can always change your service objective via T-SQL by issuing the ALTER DATABASE command.
More information here.
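As a minimal sketch of such a change (the database name and the 'S0' service objective here are just illustrations; pick values valid for your server):

```sql
-- 'S0' is an example service objective; choose the tier you need
ALTER DATABASE [MyAzureDB] MODIFY (SERVICE_OBJECTIVE = 'S0');

-- The scale operation is asynchronous; you can check its progress
-- from the master database:
SELECT * FROM sys.dm_operation_status ORDER BY start_time DESC;
```

Keep in mind the ALTER DATABASE statement returns immediately; the database stays online while the scale completes in the background.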

Monday, March 30, 2020

Administration Basics: Point in time recovery

Nowadays, data is a precious asset for companies. If you are a database administrator (by choice or by accident) or simply the "IT guy," you have the mission of guaranteeing that all the data is backed up and available for recovery.

Trust me: even when you think you have the most reliable hardware on the planet, or you have multiple database replicas around the globe, anything can happen (a user deleting an entire schema by mistake, an application updating the wrong records, a process crashing).
So don't question it, just back up all your databases regularly.

The backup/restore strategy is a broad topic, so this post focuses on the restore activity. If you don't know how to back up a database yet, you can start here.

What do we need to know?

I like to think of the SQL backups like a pyramid, like this:

Pyramid of backups

For restoring your database to a particular time, you must perform a series of restores:
  • Full backup BEFORE the date you want. 
  • If available, the closest differential backup (this can be omitted if your diff backup is after the date you want or you don't have it).
  • Each T-log backup in order until the required date (we will see an example later).
It is essential to know that only the FULL recovery model supports point-in-time recovery; with BULK_LOGGED, you can only restore the entire transaction log backup.

Set up our example

For this example, I have created a database named DB_RestoreDemo in SQL 2019. On it, I have a table called dates_demo with no relevant information, just a date and a varchar field for this example.

USE [DB_RestoreDemo]
GO

CREATE TABLE [dbo].[dates_demo](
 [id] [bigint] IDENTITY(1,1) NOT NULL,
 [date] [datetime2](7) NOT NULL,
 [text] [nvarchar](100) NOT NULL
);


I populate the table with some records (save this script, as we will use the same insert later):

USE DB_RestoreDemo
GO

INSERT INTO dbo.dates_demo ([date], [text])
VALUES (SYSDATETIME(), N'sample record');
GO 50

Then take a full backup:

BACKUP DATABASE [DB_RestoreDemo] 
TO DISK = N'C:\SQLServer\DB_RestoreDemo_full.bak';

After that, execute the INSERT statement above, but replace the number after the GO with a large number so it takes a long time to complete; in my case, I will execute it 10 million times:

While it is running, open a new query window and perform some Differential and Transaction log backups as you prefer (in any order and as many times you want), just remember to change the filenames each time to have them quickly identified:

-- Differential backup
-- Remember to change the filename each time
BACKUP DATABASE [DB_RestoreDemo] 
TO DISK = N'C:\SQLServer\DB_RestoreDemo_diff_1.bak' 
WITH DIFFERENTIAL;

-- Transaction Log backup
-- Remember to change the filename each time
BACKUP LOG [DB_RestoreDemo] 
TO DISK = N'C:\SQLServer\DB_RestoreDemo_1.trn';

Once you are happy with the backups taken, you can stop the INSERT query or let it run until it finishes... it's up to you.

We have what we need for our example now.

Let's do it!

Open the table and select a random record; we will try to restore the database as close as possible to the date of that record.

SELECT id, [date], [text]
FROM dbo.dates_demo
ORDER BY [date];

I have selected this random record; after the restore, we must verify that the last record of the table is this one (or very close, in case the millisecond resolution includes other records):
Id: 613834, date: 2020-03-30 01:49:50.5366667

Then go to the folder where your backups are located and check the modified date of the files. You have three options, depending on which record you selected:

  • Option 1: The time you want to restore is before the full backup was taken: Congratulations! Now you know what data loss is; good thing this is a demo. You cannot go to that point in time if you don't have the required backups.
  • Option 2: The date you want to restore is after any backup was taken: you can take another transaction log backup.
  • Option 3: The date you want to restore is in range: just locate the file with the closest creation date after the date you want.

In my case, it is this file:


Based on the pyramid we saw before, this must be my restore strategy:

  1. Restore 1 FULL backup - DB_RestoreDemo_full.bak
  2. Restore 1 DIFF backup - DB_RestoreDemo_diff_1.bak
  3. Restore 1 Transaction Log backup - DB_RestoreDemo_2.trn

We will restore these backups to a new database, DB_RestoreDemo_NEW, so we don't overwrite the existing one.

Please note that except for the last one, all restore operations must be done with the NORECOVERY option.

For the last transaction log file restored, we will use the STOPAT option; it allows us to restore a database to a specific date (our example), a particular LSN, or a transaction mark (we will see that in another post).

NOTE: for this example, we used only INSERT statements without explicit transactions, but in a real-life scenario, the STOPAT option with a date parameter restores the database to the latest committed transaction at that point in time.

Once we have our restore strategy in place, we create the restore scripts:

USE [master]
GO

RESTORE DATABASE [DB_RestoreDemo_new] 
FROM DISK = N'C:\SQLServer\DB_RestoreDemo_full.bak' 
WITH FILE = 1,  
MOVE N'DB_RestoreDemo' 
TO N'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\DATA\DB_RestoreDemo_new.mdf',  
MOVE N'DB_RestoreDemo_log' 
TO N'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\DATA\DB_RestoreDemo_new_log.ldf',  
NORECOVERY;


RESTORE DATABASE [DB_RestoreDemo_new] 
FROM DISK = N'C:\SQLServer\DB_RestoreDemo_diff_1.bak' 
WITH FILE = 1, NORECOVERY;


-- Because of the resolution on the decimals in datetime type
-- the increments are .003, .007 and .000, so the closest one
-- to the date we want .537 is .540
RESTORE LOG [DB_RestoreDemo_new] 
FROM  DISK = N'C:\SQLServer\DB_RestoreDemo_2.trn' 
WITH  FILE = 1, STOPAT = N'2020-03-30T01:49:50.540';


We navigate to the restored database, and if everything is OK, our selected record should be the last one (or very close to it):

We have successfully restored a new database to the required point in time. A few things to have in mind:

  • You can specify the STOPAT option with an LSN or a transaction mark; this is useful because, if you have a critical process, you can write your application to "mark" it for easy restore.
  • You just need the latest FULL and DIFF backups prior to the required date, but you need all the T-log backups after that DIFF (or after the FULL if you don't have a DIFF).
  • In case the date you want to restore to is included in a DIFF backup, you will need to ignore that DIFF backup and restore all the T-log backups since the last FULL, including the first one after the DIFF backup.
  • With the BULK_LOGGED recovery model, you cannot perform point-in-time recovery; you can still restore log backups, but only in their entirety.
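As a sketch of the transaction-mark option mentioned above (the transaction name and the UPDATE are hypothetical; the restore reuses the demo database and log backup names from this post):

```sql
-- Mark a critical transaction in the log with a name and a description
BEGIN TRANSACTION NightlyPriceUpdate WITH MARK 'Before nightly price update';
    UPDATE dbo.Products SET Price = Price * 1.10;
COMMIT TRANSACTION NightlyPriceUpdate;

-- Later, restore the log up to (but not including) the marked transaction
RESTORE LOG [DB_RestoreDemo_new] 
FROM DISK = N'C:\SQLServer\DB_RestoreDemo_2.trn' 
WITH STOPBEFOREMARK = 'NightlyPriceUpdate';
```

STOPATMARK works the same way but includes the marked transaction in the restore.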

And remember to always test your backups by restoring them regularly; with this, you achieve three things:
  • Confidence in the restore process, which helps you remain calm in a stressful situation.
  • You can prepare the scripts you need before anything bad happens.
  • You can estimate the time you need and check whether it aligns with your RPO and RTO.

Tuesday, February 4, 2020

Introduction to StudyYourServer

I made a video (in Spanish) on how to use my open-source project StudyYourServer and which features it offers:

You are free to collaborate, fix it, and add new functionality!

Friday, January 31, 2020

New Youtube Channel

Last year I noticed I was blogging far less than I should, partly because of new projects: work, personal stuff, local community activities, university projects... you name it.

As for my 2020 goals, I want to be more active on my blog and try new things as well. One of those new things is recording videos (my first experience doing this; even though I have presented a couple of webinars, this is a whole new skill to learn).

According to some studies (don't ask me the accuracy of those), young people prefer videos over text, so I am giving it a try!

What am I using?

For now, my setup is very simple (and cheap):

  • Blue Yeti microphone - one of the best mics for creating videos or doing webcasts (check the reviews); just connect it and start using it.
  • Logitech webcam C615 - not sure if I want to record myself!👽
  • Logitech G231 headset - cheap but with excellent sound and response; very comfortable too.
  • Camtasia Studio - for screen recording and video editing; I think everybody uses it anyway.
  • My Dell laptop - it works just fine for what I need; no super fancy or powerful Mac required at this point.
  • Samsung monitor - to be able to record the screen and see my notes.
  • Microsoft Wireless Desktop 3050 - keyboard and mouse; nothing special here, just cheap and they do their job. One cool thing is it has keys at the top where you can program custom actions or key combinations (I have key combinations and custom apps mapped to them).
  • Logitech Spotlight - not required, but useful if you want to stand up and change slides. I feel more comfortable recording while standing; it feels more natural to me. Maybe when I have more experience I can sit down.

A picture of my setup:

Need a better desk! hahaha

What is my plan?

For now, the plan is to learn how to create and edit videos. Camtasia has some tutorials to get you started, where you can learn the basics of how the product works.
They offer some free courses as well, to learn how to create quality content.

After that, I will create short videos on particular topics related to the data platform and cloud, and see how people respond to them. Maybe I will create a series of tutorials in the future.

I plan to do the videos in Spanish for now, since it is my mother tongue and I want to reach our local user group. I will consider adding English subtitles or mixing English/Spanish.

OK, but once you have your videos created, you have to share them somewhere. I have chosen YouTube because of its simplicity and popularity; also, you can upload to the platform directly from Camtasia.

My YouTube channel

I chose to create a new channel with my name. I had a previous one I created some years ago, but the name was not so professional (if you know what I mean! 😂)

I plan to do one series of short videos (around 1 minute) called "SQL en un minuto" (SQL in a minute), and another set of longer videos dedicated to tutorials or product reviews.

You can visit it here (you are encouraged to subscribe! 🤓)

And this is my first video, let me know what you think!

Friday, January 3, 2020

T-SQL Basics: using OUTPUT

You are working on a database development project, and you need to obtain the records before/after an INSERT, DELETE, UPDATE, or MERGE statement to present them to the user or do any work with that data.

A common approach that I see a lot, and that involves a lot of code to obtain the before/after information, is the following (maybe with some IF and TRY...CATCH blocks as well):

DECLARE @id INT = 1;

SELECT PostalCode
    , ModifiedDate
FROM Person.Address a
WHERE a.AddressID = @id;

UPDATE Person.Address
SET PostalCode = 95012, 
    ModifiedDate = 'Jan 02 2020'
WHERE AddressID = @id;

SELECT PostalCode
    , ModifiedDate
FROM Person.Address a
WHERE a.AddressID = @id;

The results will be something like this (before and after):

And the execution plan will look like this; each extra SELECT statement adds more workload to our server (independently of the resources used):

Even though the code is easy to read, if you use this pattern all over your codebase, maintaining it can become difficult when you have to change object names or implement it on another system.

The T-SQL language provides the OUTPUT clause, which allows you to retrieve information from a DML statement in the same batch.

NOTE: the OUTPUT clause will return rows even when the statement fails and is rolled back, so you should not use the results if any error is encountered.
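A minimal sketch of guarding against that (the table and column names here are illustrative): capture the OUTPUT rows into a table variable with OUTPUT ... INTO, and only read them if the statement succeeds.

```sql
DECLARE @changes TABLE (ValueBefore NVARCHAR(100), ValueAfter NVARCHAR(100));

BEGIN TRY
    UPDATE dbo.MyTable
    SET MyField = N'new value'
    OUTPUT DELETED.MyField, INSERTED.MyField
    INTO @changes (ValueBefore, ValueAfter)
    WHERE id = 1;

    -- Safe to use the captured rows only after the statement succeeds
    SELECT ValueBefore, ValueAfter FROM @changes;
END TRY
BEGIN CATCH
    -- The statement was rolled back; discard @changes and report the error
    PRINT ERROR_MESSAGE();
END CATCH;
```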

Sample usage is as follows:

-- Values after an INSERT ... VALUES --
INSERT INTO <MyTable> (<fieldA>, <fieldB>)
OUTPUT INSERTED.*
VALUES (@a, @b);

-- Values after an INSERT ... SELECT --
INSERT INTO <MyTable>
OUTPUT INSERTED.*
SELECT * FROM <MyOtherTable>;

-- Values before/after an UPDATE --
UPDATE <MyTable>
SET <myField> = @value
OUTPUT
 DELETED.<myField> AS [ValueBefore],
 INSERTED.<myField> AS [ValueAfter]
WHERE id = @key;

-- Deleted records after a DELETE --
DELETE <MyTable>
OUTPUT DELETED.*
WHERE id = @key;

If we refactor the first example to use OUTPUT, we obtain this code:

DECLARE @id INT = 1;

UPDATE Person.Address
SET PostalCode = 95012, 
    ModifiedDate = 'Jan 02 2020'
OUTPUT
    DELETED.PostalCode AS [PostalCodeBefore],
    DELETED.ModifiedDate AS [ModifiedDateBefore],
    INSERTED.PostalCode AS [PostalCodeAfter],
    INSERTED.ModifiedDate AS [ModifiedDateAfter]
WHERE AddressID = @id;

Less code that is also easy to follow and maintain, and the results will be something like this:

What if we take a look at the execution plan? We can see that this time just one statement is executed (as expected):

Of course, performance can vary depending on how many rows you are retrieving back to the user, the current workload, and the index and table design of your environment, but the more you know, the better you can adapt each situation to achieve optimal performance.

Speaking of which, what if we have a lot of fields or records to work with? Is it possible to store the output results in another table? Of course you can, as we can see in this other example:

-- To review the correct data has been deleted
-- we can output to a backup table
DELETE
FROM Person1
OUTPUT DELETED.*
INTO PersonBackupInfo
WHERE EmailPromotion = 0;

-- we can review/recover data if needed
SELECT *
FROM PersonBackupInfo;

We can implement deleted-data review/rollback logic in our application to protect it from user errors (the typical "Oops" queries) and protect our databases from unnecessary point-in-time restores.
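As a sketch of the recovery side, assuming PersonBackupInfo has the same columns as Person1 (the column names and filter value below are illustrative, not from the example above):

```sql
-- Re-insert rows that were deleted by mistake.
-- If Person1 has an IDENTITY column, keep the original ids:
SET IDENTITY_INSERT dbo.Person1 ON;

INSERT INTO dbo.Person1 (BusinessEntityID, FirstName, LastName, EmailPromotion)
SELECT BusinessEntityID, FirstName, LastName, EmailPromotion
FROM dbo.PersonBackupInfo
WHERE BusinessEntityID = 123;  -- the row(s) to recover

SET IDENTITY_INSERT dbo.Person1 OFF;
```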

Like everything in life, nothing is perfect, and there are some remarks you should consider before implementing it in your applications, so be sure to read them before using it.