Tuesday, September 11, 2018

Pride and Prejudice... and Triggers

This post is part of the #tsql2sday event :)
I know: in the world of database administrators and developers, triggers are the ugly duckling of database development.

They are hard to track, hard to maintain, can show weird transaction behavior if you are not careful, and can end up blocking your entire database if they are poorly programmed (yes, this is the prejudice).

But like anything else, if treated well and programmed carefully, they can be handy in a lot of special situations. Read that again... I wrote SPECIAL SITUATIONS. Let me tell you two stories I have about triggers, a good one and a bad one:

The Good

Some years ago, more than I want to admit, I was working at a local bank. There was a requirement from the banking regulation authority: each bank had to report any currency exchange operation in near real time (a maximum of around 30 minutes to report an operation, if I remember correctly), and the implementation window was rather short.

From a database design perspective, there were a lot of tables involved across the different types of banking transactions. Changing the source code for all those transactions, and testing all of them well enough to deliver a solid solution, would have taken a long time, so instead we chose to use triggers.

Since everybody wants their money, and wants it fast, my solution was to just insert the raw data via triggers from all the involved tables into a single staging table, without performing any validation, so no extra overhead was added to any transaction. A second process, scheduled to run every 5 or 10 minutes, then collected all the data from the staging table, performed the required validations and formatting, and sent the transactions to the reporting interface asynchronously.
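A minimal sketch of that pattern — all table and column names here are hypothetical, not the bank's actual schema — is an AFTER INSERT trigger that does nothing but copy the new rows into the staging table:

```sql
-- Hypothetical objects: dbo.CurrencyExchange (source), dbo.ExchangeStaging (staging)
CREATE TRIGGER dbo.trg_CurrencyExchange_ToStaging
ON dbo.CurrencyExchange
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- No validation or formatting here: just dump the raw rows,
    -- keeping the overhead on the banking transaction as low as possible
    INSERT INTO dbo.ExchangeStaging (SourceTable, OperationId, Amount, InsertedAt)
    SELECT N'CurrencyExchange', i.OperationId, i.Amount, SYSDATETIME()
    FROM inserted AS i;
END
```

The scheduled job then drains the staging table every few minutes, doing the validations and formatting away from the transaction path.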

This process ran for years and was deployed fairly quickly, without disrupting the existing program logic.

The Bad

Because not everything goes as expected: in another job, we had to migrate a SQL Server instance from an old version to a newer one, and we had enough time to optimize and change stored procedures and user code to use new features and run faster on the new server.

Everything went smoothly during development and user testing... until we went live. We had not included in the release some triggers that existed on some tables on the original server. These triggers performed extra validation for some special cases of business transactions, and since we had optimized some of the tables and processes, just deploying the old scripts would not work.

That time I spent 4 or 5 days practically living at the office, first solving the issues for the transactions that had already happened, and then implementing the trigger logic in the stored procedures themselves. Luckily our team was able to implement the special cases without using triggers, and we weren't fired after that :)

As with the Jane Austen novel... several love/hate stories can be written about triggers, and the only way to determine whether you love or hate them is through personal experience. Get rid of the prejudice and give them a try; maybe they can work for you!

Monday, September 10, 2018

Different methods to generate query execution plans

One of the things I enjoy about my job is developing or reviewing T-SQL code, whether for my own projects or for someone else's code. In both cases, you often have to tune some queries or review them to see if there is room for performance improvement.

An execution plan describes how the SQL Server engine will execute (or, for an estimated plan, would execute) a given query, based on the available indexes and statistics, server load, parameters, tables, operators, and conversions, aiming for the optimal execution time and minimum resource usage. For human analysis, plans are saved with the .sqlplan or .xml extension, so you can store them and analyze them later or on a different machine from the one where they were generated.

For this type of analysis you need to check execution plans, so it is important to know how to generate and save them, whether to share them or for later analysis. In this post, we will discuss a wide range of options for generating and saving an execution plan.

1. Using SSMS

Just select the Include Actual Execution Plan option (Ctrl+M) on the toolbar as follows:

To save the execution plan

Right Click on the execution plan and select Save Execution Plan As...

2. Using SQL Operations Studio

When executing a query select the Explain option at the right:


3. Using SET SHOWPLAN_TEXT

This method represents the execution plan in text format, using rows in a hierarchical tree representation.

Useful when you don't want a graphical interface or you are using a command line tool.
To enable it, just execute the following statement at the beginning of your code, and set it off once done:

SET SHOWPLAN_TEXT ON
GO
-- Your query goes here!
GO
SET SHOWPLAN_TEXT OFF
GO
Please note that using this option, the query is not executed, only the execution plan is generated.
The results look something like this:


4. Using SET SHOWPLAN_ALL

Similar to the previous method, this one displays the execution plan in text format, using rows in a hierarchical tree representation.

This option also does not execute the query, and it provides more detailed information than SHOWPLAN_TEXT; it is likewise useful when using a command-line tool.

To use it, just use the following T-SQL code:

SET SHOWPLAN_ALL ON
GO
-- Your query goes here!
GO
SET SHOWPLAN_ALL OFF
GO

And these are the results:


5. Using SET SHOWPLAN_XML

This method generates the execution plan in XML format (.sqlplan files are stored as XML). When the plan is generated from a graphical interface like SSMS, clicking on it will display it graphically.

As with SHOWPLAN_TEXT and SHOWPLAN_ALL, this option does not execute the query, only generates the estimated execution plan.

To use it, just use the following T-SQL:

SET SHOWPLAN_XML ON
GO
-- Your query goes here!
GO
SET SHOWPLAN_XML OFF
GO

And the output looks like this:

The previous options are better suited for when you have access to the actual query and are developing or tuning a particular statement. If you need to obtain plans for queries previously executed on your servers, you have three options: DMVs, Extended Events, and Query Store:

6. Using Dynamic Management Objects

Using the DMF sys.dm_exec_query_plan, you can generate execution plans for any active session, or for sessions whose plans are still in the cache of your SQL Server instance. The output is given in XML format that you can open in SSMS or save for later analysis.

This is a sample usage of the DMF, taking plan handles from the cached plans exposed by the DMV sys.dm_exec_query_stats (you can use another object if you want):

SELECT ph.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS ph;

This is the output of the query:
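If you also want the text of each statement next to its plan, here is a variant of the same idea (the extra join to sys.dm_exec_sql_text is my own addition, not part of the original example):

```sql
SELECT st.text          AS query_text,
       ph.query_plan,
       qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS ph
ORDER BY qs.execution_count DESC;
```

Clicking any query_plan value in SSMS opens it as a graphical plan.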

7. Using Extended Events

In this post I explain how to create an Extended Event: just create your session and select any event you want to track.

In the global fields to capture, just select plan_handle:

With this field, you can use the DMF sys.dm_exec_query_plan to obtain the execution plan.
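For example, once you have a captured plan_handle value, you can feed it straight to the DMF (the assignment below is a placeholder, not a real handle):

```sql
DECLARE @plan_handle varbinary(64);

-- Paste here the plan_handle value captured by your Extended Events session,
-- e.g. SET @plan_handle = 0x0600050075...;  (placeholder, not a real handle)

SELECT query_plan
FROM sys.dm_exec_query_plan(@plan_handle);
```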

8. Using Query Store

Since this option was introduced in SQL Server 2016 and also requires Query Store to be enabled, it is not available on every instance, so I left it as the last option; still, it is important to know how to obtain an execution plan from it.

If you already have Query Store enabled on your database, just navigate to the database and expand the Query Store folder, then select any of the reports you want to analyze. For this example we will use Top Resource Consuming Queries:

The bars will show you the most resource-intensive queries in descending order, so just select any bar and you will see the current execution plan used at the bottom.

We will not discuss all the capabilities of Query Store in this post, but if a query has more than one execution plan, the different plans can be selected via the circles on the right side:
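If you prefer T-SQL over the built-in reports, the same plans can be pulled from the Query Store catalog views; a sketch (filter and order as needed):

```sql
SELECT q.query_id,
       qt.query_sql_text,
       TRY_CAST(p.query_plan AS xml) AS query_plan,
       p.last_execution_time
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
ORDER BY p.last_execution_time DESC;
```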

We have discussed a lot of different options for generating execution plans. In later posts, we will use the information obtained to tune queries.

For further reading about execution plans, check the Microsoft official documentation.


Wednesday, August 22, 2018

Format Query as JSON

JSON (JavaScript Object Notation) is a file format used to transmit data between applications, very similar to XML. It is also used to store unstructured NoSQL data, and because of this versatility, many REST applications and web services use it.

Since this is a standard file format, you should be able to generate data in JSON format from SQL Server.

Fortunately, since SQL Server 2016 there is a native way to do it, and in this post I will show you how.

Generate a simple JSON file

The most basic syntax to generate JSON output is this:

SELECT <field list>
FROM <YourObject>
FOR JSON AUTO

Using this simple test table with 10 records as an example:

SELECT dataVarchar, dataNumeric, dataInt, dataDate
FROM [dbo].[MyTestTable]

If we use FOR JSON AUTO on this sample data we will have the following output:

SELECT dataVarchar, dataNumeric, dataInt, dataDate
FROM [dbo].[MyTestTable]
FOR JSON AUTO

Query executed
Part of the JSON AUTO output

Using dot syntax

For most real-world applications, JSON AUTO will not give you the control you need over your file format; to have more control over it, you must use the JSON PATH option, along with the ROOT option, as follows:

SELECT dataVarchar, dataNumeric, dataInt, dataDate
FROM [dbo].[MyTestTable]
FOR JSON PATH, ROOT

This will generate the following output:

JSON PATH output

And if we want to group similar items (for example for queries with joins), just rename the fields as element.field, as in this example:

SELECT
 dataVarchar as [group1.D1],
 dataNumeric as [group1.D2],
 dataInt as [group2.E1],
 dataDate as [group2.E2]
FROM [dbo].[MyTestTable]
FOR JSON PATH, ROOT

This will generate the following output:

JSON PATH with grouped items
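As a self-contained illustration of the dot syntax — using a throwaway table variable, not the MyTestTable from above — the aliased fields are nested into a sub-object:

```sql
DECLARE @t TABLE (id int, firstName varchar(20), lastName varchar(20));
INSERT INTO @t VALUES (1, 'Ana', 'Lopez'), (2, 'Luis', 'Perez');

SELECT id,
       firstName AS [name.first],
       lastName  AS [name.last]
FROM @t
FOR JSON PATH, ROOT('people');
-- {"people":[{"id":1,"name":{"first":"Ana","last":"Lopez"}},
--            {"id":2,"name":{"first":"Luis","last":"Perez"}}]}
```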

Of course, if you have SQL Operations Studio you can also do it from the IDE:

If you want to learn more about the FOR JSON option, please read the Microsoft official documentation here.

Saturday, August 18, 2018

Performance Basics: Indexed views

Views help our query writing by saving us from writing the same sentences and/or aggregations over and over again, but they have a drawback: views just store our query definition, so performance is not improved by using them.

The option to create an index over a view was introduced back in SQL Server 2000. Of course, there are some limitations, but if your view can meet them, the performance improvement can be great!

I will show you how to create a simple index over a view.

Base query

I have created a test table called Aggregate_tbl1 with over 12,000,000 rows of random "money" data and random texts as well, as you can see in the following images:

With this table, let us proceed to build a simple aggregation query and create it inside a view:

CREATE VIEW [dbo].[Aggregate_tbl1_VI]
AS
SELECT
 SUM(ValueMoney) AS total, 
 [Description] AS descr
FROM dbo.Aggregate_tbl1
GROUP BY [Description]


If we execute a query over this view with a single WHERE clause and check the execution plan, we will see something like this (no indexes):

We can see that an index scan and some compute operations are performed. So what happens if we just add a simple index on the base table over Description, including the ValueMoney field?

If we execute the query again, we can see some improvement: an index seek is performed now. But there is still an aggregation operation costing 7% of the total, and please take a look at how many rows are read (22,397) to compute the total.

Creating the view and the related index

We proceed to create an index over the view we created previously. Of course, we have to drop the view first and recreate it with the SCHEMABINDING option, and since we are using a GROUP BY, we must also include a COUNT_BIG(*) expression:

DROP VIEW [dbo].[Aggregate_tbl1_VI]
GO

CREATE VIEW [dbo].[Aggregate_tbl1_VI]
WITH SCHEMABINDING
AS
SELECT
 COUNT_BIG(*) as [count_rows], 
 SUM(ValueMoney) AS total, 
 [Description] AS descr
FROM dbo.Aggregate_tbl1
GROUP BY [Description]


Now we proceed to create an index over the view (as with any other index).
Just note that the index must be a unique clustered one; because a GROUP BY is used in our example, this guarantees the uniqueness of the rows of the view:

CREATE UNIQUE CLUSTERED INDEX [IX_Aggregate_tbl1_VI]
    ON [dbo].[Aggregate_tbl1_VI] (descr)
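One detail worth knowing: on editions other than Enterprise, the optimizer will not match the view index automatically, so queries must reference the view with the NOEXPAND hint to benefit from it (the filter value below is made up):

```sql
SELECT descr, total
FROM dbo.Aggregate_tbl1_VI WITH (NOEXPAND)
WHERE descr = 'Some description';  -- made-up filter value
```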

Now, let us proceed to execute the query again and see how it behaves:

We can see that the newly created index is used now, the aggregation operation lowered its cost, and instead of reading 22,397 rows it reads just 1 row, as we can see in the details:

With this simple example we saw a big improvement in the performance of the query, but as I always tell you, test any change on a dev server first!

If you need more information about the special requirements to create an index over a view, please visit the Microsoft official documentation here

Thursday, August 16, 2018

Announcing new Guatemala SQL Server user group

I am very happy to announce that, after a long time struggling and fulfilling all the requirements, we were able to create the SQL Server local user group for Guatemala.

The purpose of creating this group is to empower local database administrators and developers by building a community where all of us can benefit from each other's knowledge and personal experiences on any particular topic. This will make us better professionals and, why not, help us get better jobs and salaries in the future.

We want to run our community as a roundtable, so any of the members (old and new) can be an attendee or a speaker; we will encourage the participation of all the members.

What are the contact details?

You can access the main site at gtssug.pass.org, and we encourage you to register now!
The meetings and related content will be published in there.

Also follow us on social media for news and database-related content.

Twitter: @gtssug
Facebook: facebook.com/groups/gtssug

Who is leading this group?

This group was initiated by great professionals I have had the privilege to meet (and me!).
They are Carlos Robles, Christian Araujo, and Carlos Lopez, plus myself. You can learn more about us here.

Please note that we are only the coordinators; in fact, anybody can participate as a volunteer or speaker!

Where will the meetings be?

We want to thank Atos Guatemala for providing us with a place to hold our meetings; you can learn more about this awesome company here.

Atos is located in a convenient place at Distrito Moda in Ciudad Cayalá, so there are no excuses not to attend any meeting of your interest!

Paseo Cayalá Oficina 401,
Edificio H Distrito Moda,
Zona 16, Guatemala

And remember, please register here to receive the latest news and related material.
We want you to be part of this rising community!

Thursday, August 9, 2018

Determine SQL Server Network Protocol Information using T-SQL and DMVs

I have recently started as an author for the MSSQLTips.com website. This is an excellent site where you can start writing if you don't want to maintain a blog, and you gain a lot of visibility right away since they have a lot of traffic on their website.

In my first post with them, I explain how to Determine SQL Server Network Protocol Information using T-SQL and DMVs.

You are able to read the instance configuration registry information from SQL Server itself using a DMV,


and I also share a few examples on how you can obtain network information easily.
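As an illustration of the kind of query involved — assuming the DMV in question is sys.dm_server_registry (the full details are in the tip itself) — the network protocol keys can be listed like this:

```sql
-- Network-related registry entries live under the SuperSocketNetLib key
SELECT registry_key, value_name, value_data
FROM sys.dm_server_registry
WHERE registry_key LIKE N'%SuperSocketNetLib%';
```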

Take a look at the full post and let me know what you think:


From now on I will alternate my posts between this blog and that website, but I will share all posts here so you can keep track of everything!

Use the MSSQLTips.com label to check all the tips I have published.

Monday, July 23, 2018

Extended events (XE): Create a simple XE

An ancient, dark place, a source of many legends and histories, although some people have been able to get out alive, some others are stuck there, the locals refer to it as "The nosy one" but we know it for another name: SQL Server Profiler

SQL Server has included Extended Events since the 2008 version. They were meant to replace SQL Server Profiler traces (which will be deprecated in the near future) and provide a lightweight, customizable event collection platform, and you should be using them now.

What is an Extended Event?

According to the Microsoft official documentation, it is a highly scalable and configurable architecture that lets you identify and troubleshoot any performance issue. It consists of:

Packages: contain the required objects to collect and process data.
Sessions: processes to be executed to collect the data.
Events: the info to be collected and later analyzed.
Actions: fields common to all events.
Predicates: filters to be applied when capturing data.
Targets: where the collection results are stored.

We will build a very simple example so you can see how it works and start using it with confidence.
We will just collect all user sessions from our instance, along with the SQL text if it exists.

Using the wizard

Just open SSMS and go to Management > Extended Events > Sessions > New Session Wizard.

Specify the name you want for your session; you can also enable the session to start at server startup if you want.

You can select a template for a predefined set of events to achieve common tasks, but for this example, we will not use a template to keep it simple.

In the event library, search for and select login. In the description we can see when this event is fired: "Occurs when a successful connection is made to the Server. This event is fired for a new connection or when connections are reused from a connection pool."

Then, in actions, select the data we want to capture; for this example: database_name, session_id, sql_text, username.

In the next window you can add predicates to filter the info you collect, to avoid over-collecting data and ending up with huge logs; for this example we will not apply any filter.

Then you select the target, that is, where you want to save your data. For ad hoc or quick traces you can use the ring buffer; for large collections use the save-to-file option.

Review the options you have configured; you can also script the session if you want to reuse it.

You can have the session start running when the wizard ends, and also watch the data being collected.

Using T-SQL

We can achieve the same results by running the following T-SQL:

--Event Session
CREATE EVENT SESSION [UserSessions] ON SERVER

--Events to track
ADD EVENT sqlserver.login (
 ACTION (
  sqlserver.database_name,
  sqlserver.session_id,
  sqlserver.sql_text,
  sqlserver.username
 )
 -- If you want to filter events
 --WHERE ([sqlserver].[nt_user] = N'domain\user_to_filter')
)

-- TARGET to use, 
-- just choose Ring Buffer or Filename and comment the other

-- Ring Buffer
ADD TARGET package0.ring_buffer

-- Filename
--ADD TARGET package0.event_file(SET filename = N'User Sessions')

WITH (MAX_MEMORY = 4096 KB)
GO

-- To start the session
ALTER EVENT SESSION [UserSessions] ON SERVER STATE = START
GO

To watch the data

Just right-click the newly created session and select the Watch Live Data option.

You can see some events already being recorded, corresponding to the current session (in case you didn't apply a filter).

To test it, we just open a new window and connect with a different user.

We can now see the record in our session. If there is a query associated, we will be able to see it as well.
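If you prefer T-SQL to the Watch Live Data window, the ring buffer target can also be queried directly; a sketch, assuming the ring_buffer target was chosen (replace UserSessions with whatever name you gave your session):

```sql
SELECT ev.evt.value('@name', 'varchar(64)')                                      AS event_name,
       ev.evt.value('@timestamp', 'datetime2')                                   AS event_time,
       ev.evt.value('(action[@name="username"]/value)[1]', 'nvarchar(128)')      AS username,
       ev.evt.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)')      AS sql_text
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON t.event_session_address = s.address
CROSS APPLY (SELECT CAST(t.target_data AS xml) AS x) AS td
CROSS APPLY td.x.nodes('/RingBufferTarget/event') AS ev(evt)
WHERE s.name = N'UserSessions'       -- your session name here
  AND t.target_name = N'ring_buffer';
```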

This very basic example will allow us to show more complex examples and how you can do cool thing using extended events.