Saturday, February 16, 2019

SQL Saturday 282 - T-SQL Basics: Coding for performance

A great experience!
Thanks to all the attendees of my session about T-SQL; for my first time as a speaker at a SQL Saturday, it went well!

As I promised, the presentation and session material is available at the following links:

SQLSaturday #282 site:
(Please evaluate my session if you attended)

My personal GitHub:

Some pictures from the event:

SQL Server local users group board!

Monday, February 11, 2019

Quick tip: Speeding up deletes from SSIS execution log

If you have SQL Server Integration Services installed on your server and you left the default configuration, a table named sysssislog is created in the msdb database; it contains logging entries for packages executed on that instance.

If you are not careful enough, this table can grow uncontrollably over time and can make subsequent insertions very slow.
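A quick way to gauge whether you are affected is to check the row count and date range of the table. A minimal sketch, run against msdb:

```sql
-- Quick check of how much history the SSIS log is holding
SELECT COUNT(*)       AS total_rows,
       MIN(starttime) AS oldest_entry,
       MAX(starttime) AS newest_entry
FROM msdb.dbo.sysssislog;
```

If the oldest entry is months or years old and the row count is in the millions, it is time to put a cleanup process in place.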

A proper deletion process must be put in place so you do not get into situations like this one in your msdb database:

If you are already in this situation, you can use the following T-SQL script to delete records in batches:

DECLARE @date_del datetime,
        @batch_size int = 1000, -- will delete in batches of 1000 records
        @RowsAffected int = 1;

-- Time to keep in the history, in our case 1 month
SET @date_del = DATEADD(mm, -1, GETDATE());

-- BEGIN/END is required so the whole batch (delete + row count check)
-- runs inside the loop; without it the loop never terminates
WHILE (@RowsAffected > 0)
BEGIN
    DELETE TOP (@batch_size)
    FROM [dbo].[sysssislog]
    WHERE starttime < @date_del;

    SET @RowsAffected = @@ROWCOUNT;

    -- If you want to know how many rows each batch deleted, uncomment this:
    -- PRINT @RowsAffected;
END


After that, you can add the same query to your msdb maintenance job to keep everything in one place.
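As a sketch of that idea, you could add the cleanup as a step of an existing SQL Server Agent maintenance job. The job name 'msdb maintenance' below is hypothetical; adjust it (and the retention/batch values) to your environment:

```sql
USE msdb;
GO
-- Add the batched purge as a T-SQL step of an existing Agent job
EXEC dbo.sp_add_jobstep
    @job_name      = N'msdb maintenance',   -- hypothetical job name
    @step_name     = N'Purge sysssislog',
    @subsystem     = N'TSQL',
    @database_name = N'msdb',
    @command       = N'
DECLARE @date_del datetime = DATEADD(mm, -1, GETDATE()),
        @RowsAffected int = 1;
WHILE (@RowsAffected > 0)
BEGIN
    DELETE TOP (1000) FROM dbo.sysssislog WHERE starttime < @date_del;
    SET @RowsAffected = @@ROWCOUNT;
END';
```

This way the purge runs on the same schedule as the rest of your msdb maintenance.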

Tuesday, February 5, 2019

I am speaking at SQLSaturday Guatemala 2019

I’m very thrilled to announce that I will be participating as a speaker in this year’s SQL Saturday #828 event in Guatemala City!
This will be my first time as a speaker on a SQLSaturday.

The event will take place on February 16 at Universidad Francisco Marroquin, Calle Manuel F. Ayau (6 Calle final), zona 10, Guatemala.

Here are the details of the session I will be presenting (at 3:15 PM CST in the Dev Room):

T-SQL Basics: Coding for performance

It is very common in the IT field for a developer to switch to being a database developer or administrator; even though the programming concepts are the same, the skill set required to code T-SQL is different.
In this session, we will learn some basic tips to improve our code and database performance, from early application planning stages to already deployed applications.

We will also see some demos about:
  • Compatibility level and deprecated features
  • Filtering basics: SARGABLE arguments
  • Covering indexes
  • Indexed views
  • Implicit conversions
  • Memory Grants
  • Joining records with NULL 
  • DMOs to find top resource intensive queries
  • Collation: considerations when working with multiple databases
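As a small preview of the implicit-conversions demo from the list above, here is a sketch; the dbo.Accounts table and its indexed varchar AccountNumber column are hypothetical:

```sql
-- Hypothetical table: AccountNumber is varchar(20) and indexed.
-- An nvarchar literal (N'...') has higher type precedence, so the
-- column gets implicitly converted on every row and the seek is lost:
SELECT AccountID
FROM dbo.Accounts
WHERE AccountNumber = N'AW00000012';

-- Matching the literal's type to the column keeps the predicate SARGable:
SELECT AccountID
FROM dbo.Accounts
WHERE AccountNumber = 'AW00000012';
```

We will look at the execution plans for cases like this one during the session.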

I will show you execution plans using an excellent tool called Plan Explorer from SentryOne; the best thing is that it is free.
You can download it from here.

As with any SQL Saturday event organized by SQLPass, you can register for free; it takes less than 5 minutes to sign up.

Hoping to see you there!

Wednesday, January 30, 2019

Understanding and working with NULL in SQL Server

Graphic representation of the difference between 0 and NULL
According to database theory, a good RDBMS must implement a marker to indicate "Missing or inapplicable information".

SQL Server implements the lack of value with NULL, which is datatype independent and indicates a missing value, so the logical validations are different. This is better known as Three-Valued Logic, where any predicate can evaluate to True, False or Unknown.

I see a common error of referring to null as "null values", but the correct definition of null is "lack of value", so you must refer to it as null, in singular form.

In this post, we will learn how to work with null in SQL Server.

Declaring and assigning NULL

To work with null, the SQL Server engine uses the reserved word NULL.
It is datatype independent, so you just assign it to any variable or field using the equal operator =, as you can see in this example:

DECLARE @NVi as int = NULL;
DECLARE @NVc as nvarchar(30) = NULL;
DECLARE @NVv as sql_variant = NULL;

SELECT @NVi, @NVc, @NVv;

If we run the statement, we obtain these results: the same for each data type, as expected.

For inserting and updating fields with NULL, we do it as in this example:

-- For inserting values
INSERT INTO test1..Table_1 (column_char, column2)
VALUES (NULL, NULL);

-- For updating values
UPDATE test1..Table_1
SET column2 = NULL;

Be careful when working with null: the equal operator = is only used for assignment, never for comparison.

Operations and comparison against NULL

As we stated earlier, any predicate or comparison can evaluate to TRUE, FALSE or UNKNOWN. When a value is unknown, we don't know if it is true or false, so comparing or operating on any value with unknown is also unknown.

For example, the result of the following operations is NULL in all cases:

--Arithmetic operations
SELECT NULL+5, NULL-3.47, NULL*3.1416, NULL/0;

--String operations
SELECT 'abc' + NULL, UPPER(NULL);

--Date operations
SELECT DATEADD(day, 1, NULL), DATEDIFF(day, NULL, GETDATE());
When comparing to null, we obtain null as well. As you can see in these examples, even comparing null to null is unknown, and when we execute the code below, we obtain NO for all three cases:

--Comparing to 0
IF (0 = NULL) OR (0 <> NULL)
 PRINT 'YES';
ELSE
 PRINT 'NO';

--Comparing to empty string ''
IF ('' = NULL) OR ('' <> NULL)
 PRINT 'YES';
ELSE
 PRINT 'NO';

--Even comparing to another null
IF (NULL = NULL) OR (NULL <> NULL)
 PRINT 'YES';
ELSE
 PRINT 'NO';
So, if we want to check whether a value or column is null or not, what must we do?
SQL Server implements IS NULL and IS NOT NULL to compare against null; usage is as follows:

-- IS NULL usage
SELECT *
FROM test1..Table_1
WHERE column2 IS NULL;

-- IS NOT NULL usage
SELECT *
FROM test1..Table_1
WHERE column_char IS NOT NULL;

-- Using it in an IF construct
DECLARE @NVi as int = NULL;

IF (@NVi IS NULL)
 PRINT 'Value is null';

-- For replacing null you can use
-- the ISNULL() function
SELECT ISNULL(@NVi, 0);

With these tools we are ready to work with null in our databases; now you should follow some considerations so as not to impact your database performance.

Special considerations for good performance

As a last point, I would like to give you some tips for dealing with NULL.

Prefer IS NULL over ISNULL()

When possible, try to compare predicates using IS NULL rather than casting NULL to default values using ISNULL(), because the casted values are not SARGable.

Take these two queries as an example: they are equivalent, but the first one performs better than the second:

-- First query uses an index seek :)
SELECT FirstName
FROM Person.Person
WHERE MiddleName = N'' 
 OR MiddleName IS NULL;

-- Second query uses an index scan :(
SELECT FirstName
FROM Person.Person
WHERE ISNULL(MiddleName,N'')=N'';

These are the execution plans:

First query execution plan, an index seek is used :)

Second query execution plan, an index scan is used :(

We get a warning on the second execution plan.
You can see the differences between the two plans, so for this case we prefer to stick to the first one, even if you must write more code.

Be careful with aggregations over nonexistent data

When you perform aggregations, be extra careful with nonexistent data: even when columns do not allow null, aggregating data that does not exist in the table returns null, contrary to the usual value one could expect (0), as you can see in this example:

-- Even when the TotalDue field does not allow NULL,
-- the SUM over nonexistent rows is NULL, not 0 as one could think

SELECT SUM(TotalDue) as [Total Due]
FROM Sales.SalesOrderHeader
WHERE SalesOrderID = -1; -- illustrative filter matching no rows

And the query results:

For those cases, you should wrap the SUM inside the ISNULL() function.
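A minimal sketch of that fix against the same AdventureWorks table; the filter is only there to force an empty result set:

```sql
-- Returns 0 instead of NULL when no rows match the filter
SELECT ISNULL(SUM(TotalDue), 0) AS [Total Due]
FROM Sales.SalesOrderHeader
WHERE SalesOrderID = -1; -- illustrative filter matching no rows
```

Note that ISNULL() goes around the SUM, not inside it, so the replacement happens only once on the final result and the aggregation itself stays untouched.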

As I always recommend: test anything before going live, and use default values and not null columns when possible, to make your life easier.

Monday, January 21, 2019

Rookie of the year 2018

You need a lot of hard work for success!
2018 was a good year for me in a lot of areas of my life, both personal and professional (the birth of my daughter, my MCSE certification and another MCSA as well, the start of the local PASS group, the growth and recognition of this blog, etc.).

Also, since August 2018, I have been writing technical posts for other sites, one of them being MSSQLTips. It is one of the best SQL Server related sites in the world, containing a lot of technical articles in a "tip" format where you can learn new things or how to fix an issue in your environment.

The new year 2019 has come, with new opportunities to continue growing, learning and improving your career. I have also started this year with good news: last week I was awarded "Rookie of the year 2018" by MSSQLTips. This award is decided by community, peer and site leader votes, so I am very honored that so many people voted for me. It is a great honor and a big responsibility at the same time; I cannot put my guard down. So many people have given me their vote that I have to continue the hard work, learning and providing high-quality content, not just on that site but on all the sites I write for.

I hope to continue writing and to be nominated for author of the year 2019. As some of you know, the most difficult part of writing is finding good topics to write about, but I want to focus on things I had difficulties with when I started working with SQL Server, so people new to databases can have more handy and easy to follow material to get started.

I also want to congratulate my friends Pablo Echeverria and Alejandro Cobar for also being nominated on the site; we have a lot of potential here in Guatemala.

If you are interested, you can see the award here:

Wednesday, January 9, 2019

Source Control using Azure Data Studio and GitHub

You can maintain and deploy code easily
Source control is a vital part of IT work, since it is used to track changes to software and documents, with two main benefits: having a centralized, up-to-date copy of all your applications and utilities, and being able to roll back to a previous stable version in case of a failed deployment.

This is perfectly applicable to SQL Server, where you want an up-to-date, ready-to-deploy copy of your database objects.

There are a lot of source control applications; each has its pros and cons, but personally I like GitHub, since it is free to use, and since it was recently acquired by Microsoft, support for other Microsoft products (SQL Server in this case) is easier.

In this post, I will show you how to implement source control for a database using GitHub and Azure Data Studio (ADS).

Setting up your machine

First of all, you need to have Azure Data Studio installed and running on your machine, and also a GitHub account to be able to upload and maintain your files.

You will need Git to be able to use the source control features in ADS; you can download it from here. After downloading it, proceed to install it on your local machine (the default features are OK).

To verify Git is installed correctly, just open any command prompt, type git and press Enter:

Configuring Azure Data Studio

To be able to use source control on ADS, we have to perform some simple steps:

Open ADS and click on File > Open Folder, locate the folder you want to work in and click OK (it can be an empty folder for now).
For this example, I am using C:\Repos\SourceControlDB

You can see that now the selected folder appears on the top bar

Once a folder is selected, click on Source Control, located on the left bar

Click on Initialize Repository

And that is all, you are ready to work with source control.

Working with Git

Adding a file to source control

Now we will add an object to our Git folder to see how sourcing works.
Open any already created database object or a TSQL query on ADS IDE

Save it on your Git Folder with a meaningful name

Once saved, you can see the file is added to the source control as an untracked change

Just select the changes you want to commit (in case you have more open files) and then click on Commit on the upper right corner of the source control area

Since the changes have not been staged, a message box will appear asking whether to stage them automatically; click on Yes.

Provide a meaningful commit message and press Enter

Now the file has been added to source control successfully.

Editing the file

We now proceed to do a simple change in the stored procedure

The moment you save the file, you will notice that a new change is added to source control, and modified or added lines are marked on the file.

If you click on the change, you can now see the actual changes made to the file

If you are Ok with the changes, commit them using the same option as earlier, and that is the basic Git usage.

We will cover the rollback, merge and some other features on the next post.

Now we will learn how to upload your local Git files to GitHub.

Linking your repository to GitHub the first time

The easiest way is using your GitHub profile, so let's do it...

Create a new repository on your profile with the options you want, and click on Create Repository

In the next step copy the code for pushing an existing Git repository

Open a new command prompt, navigate to your Git folder and paste the code there, one line at a time

In my case, for the first line the origin already exists so it throws an error, but for your first time it will execute OK; for the second line, after a few moments it will ask for your username and password.

After a few moments (depending on your folder size) the files will be uploaded, and your repository is now on GitHub successfully (along with all the versions you have).

You have successfully linked your Git folder with GitHub, as you can see the file is uploaded

Your GitHub is now linked

Now that we have done the linking, you can update existing files or add new ones, and they will be uploaded to GitHub when you commit the changes and sync them.

We perform another quick change to our file and commit it

Then we click on Sync to synchronize with GitHub

After the process ends, if you check your GitHub repository you can see the new change

Now you have configured your environment to support source control directly to GitHub. In the next post, we will see some other advanced options, like performing the sync automatically.
We will also learn how to pull those files from another place and apply them to another SQL Server instance.

Monday, November 12, 2018

Load and run custom T-SQL scripts in SSMS via Snippets

If you work with SQL Server on a daily basis, it is very likely you have a lot of custom scripts you have to execute frequently, maybe you have stored them on a folder and you open them manually as you need them, or have saved them on a solution or project file, maybe you execute a custom .bat or PowerShell file to load them when you open SSMS...

Every method has its pros and cons, and on this post, I will show you a new method to load your custom scripts on any open query window on SSMS via Snippets.

What is a Snippet?

According to Microsoft Documentation:
Code snippets are small blocks of reusable code that can be inserted in a code file using a context menu command or a combination of hotkeys. They typically contain commonly-used code blocks such as try-finally or if-else blocks, but they can be used to insert entire classes or methods.

In short, a snippet is custom code (T-SQL scripts, in our case) that you use a lot and want to have always available.
SSMS ships with a lot of code snippets that you can use right away. Let us see one of them in action; we will use a snippet to insert a CREATE TABLE template:

You can see how easy it is to load the code into any existing window, even if it already has code in it. Now let us create one with our custom code.

Creating a custom Snippet

These files are stored in XML format with a .snippet extension, you can find the official documentation here.

The basic template is this (you can find the file on my GitHub):

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
 <CodeSnippet Format="1.0.0">
  <Header>
   <Title>Your Snippet title</Title>
   <Description>Your snippet description</Description>
   <Author>Author of the snippet</Author>
  </Header>
  <Snippet>
   <Declarations>
    <Literal>
     <ID>Variable1</ID>
     <ToolTip>Variable1 description</ToolTip>
     <Default>variable1 default value</Default>
    </Literal>
   </Declarations>
   <Code Language="SQL">
    <![CDATA[
SELECT $Variable1$
    ]]>
   </Code>
  </Snippet>
 </CodeSnippet>
</CodeSnippets>

Explaining it:

  • You need to insert your custom code inside the <![CDATA[]]> label. 
  • If you have variables, you must enclose them in $ characters ($Variable1$ for our template), and then declare each variable with the name you use inside the <Declarations> section.
  • Then declare the Script name, description, and author in the <Header> section.
  • Save it with the .snippet extension.
I have a simple script that I use a lot to see the elapsed and remaining time for any session on SQL Server, so let us use it for this example. The query is the following (note: I didn't write this script, but I was not able to find the author, so if you know him/her, please let me know so I can give proper credit):

SELECT
 CONVERT(NUMERIC(6,2),r.percent_complete) AS [Percent Complete],
 CONVERT(VARCHAR(20),DATEADD(ms,r.estimated_completion_time,GETDATE()),20) AS [ETA Completion Time],
 CONVERT(NUMERIC(10,2),r.total_elapsed_time/1000.0/60.0) AS [Elapsed Min],
 CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0) AS [ETA Min],
 CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0/60.0) AS [ETA Hours],
 CONVERT(VARCHAR(1000),(SELECT SUBSTRING(text,r.statement_start_offset/2,
  CASE WHEN r.statement_end_offset = -1
   THEN 1000 ELSE (r.statement_end_offset-r.statement_start_offset)/2 END)
  FROM sys.dm_exec_sql_text(sql_handle))) AS [SQL Text]
FROM sys.dm_exec_requests r
WHERE command LIKE '%%' -- Text of the session you want to monitor, like BACKUP or DBCC

For this script, our variable will be the text in the LIKE predicate, so after applying the snippet template, our code should look like this:

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
 <CodeSnippet Format="1.0.0">
  <Header>
   <Title>Running Queries ETA</Title>
   <Description>Display estimated completed time for any given query</Description>
  </Header>
  <Snippet>
   <Declarations>
    <Literal>
     <ID>YourCommand</ID>
     <ToolTip>Command to look for</ToolTip>
    </Literal>
   </Declarations>
   <Code Language="SQL">
    <![CDATA[
SELECT
 CONVERT(NUMERIC(6,2),r.percent_complete) AS [Percent Complete],
 CONVERT(VARCHAR(20),DATEADD(ms,r.estimated_completion_time,GETDATE()),20) AS [ETA Completion Time],
 CONVERT(NUMERIC(10,2),r.total_elapsed_time/1000.0/60.0) AS [Elapsed Min],
 CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0) AS [ETA Min],
 CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0/60.0) AS [ETA Hours],
 CONVERT(VARCHAR(1000),(SELECT SUBSTRING(text,r.statement_start_offset/2,
  CASE WHEN r.statement_end_offset = -1
   THEN 1000 ELSE (r.statement_end_offset-r.statement_start_offset)/2 END)
  FROM sys.dm_exec_sql_text(sql_handle))) AS [SQL Text]
FROM sys.dm_exec_requests r
WHERE command LIKE '%$YourCommand$%'
    ]]>
   </Code>
  </Snippet>
 </CodeSnippet>
</CodeSnippets>

We save it with the .snippet extension to a folder on our computer; the next step is to load it into SSMS.

Loading the Snippet into SSMS

Open SSMS and go to Tools> Code Snippets Manager

The following window will open. If you want to load a complete snippet folder, click on Add; if you want to load just one snippet, click on Import.

For this example, we will load the entire folder where we stored the snippet template and the one we created in the previous step, so we click on Add and then locate the snippet folder:

We can see that the custom Snippets folder has been created and the two snippets are loaded.

After that, we can use the custom code snippets on any window we want.

Loading our Custom Snippet

Now you can load your custom code into any open SSMS query window; we will use it to see how much time a DBCC command running in the other window is taking:

For more information on T-SQL snippets, visit this link.