Thursday, June 28, 2018

Check instant file initialization and lock pages in memory options

Let your server reports do the work for you!

Instant file initialization and lock pages in memory are two Windows operating system options that can help improve overall SQL Server performance.
These options are already checked in the Server Dashboard report available in my free tools.

We will talk about both options and how to check them.

Note that these options apply to SQL Server running under Windows. For Linux systems, you can ask Carlos Robles via his webpage or Twitter; he has a lot of experience working with databases (SQL Server, Oracle, MySQL, Sybase) running under UNIX environments.


Instant File Initialization


When you create a file on the operating system, any data left on disk is overwritten by filling the entire file with zeros, which can take some time depending on the file size.
For SQL Server, this zeroing happens when you restore a backup, perform an auto-growth operation, or create a data/log file (if you set an initial size of several GB, the zeroing can take a long time to complete), so it can impact SQL Server performance.
To avoid that, the zeroing operation can be skipped for data files (the transaction log is always zeroed), leaving the "dirty" data on disk and improving the speed of file operations.
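
If you want to actually see the zeroing happen, a quick way (a sketch for test instances only, not something to leave enabled in production) is trace flags 3004 and 3605, which write the file initialization messages to the error log:

-- Sketch: surface file zeroing messages in the error log (test instances only)
DBCC TRACEON (3004, 3605, -1);

-- Now create a database or grow a file, then read the log:
-- EXEC master..xp_readerrorlog;

DBCC TRACEOFF (3004, 3605, -1);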

To check if instant file initialization is enabled


Just check the error log and search for the "instant file initialization" entry:


EXEC master..xp_readerrorlog 0, 1, 'instant file initialization';

This will tell you if it is enabled or disabled for your system
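
On newer builds (SQL Server 2016 SP1 and later, if memory serves), you can also check the sys.dm_server_services DMV, which exposes a dedicated column for this:

SELECT servicename
 ,instant_file_initialization_enabled -- 'Y' when IFI is enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';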



To enable Instant File Initialization


Starting with SQL Server 2016, this option can be configured at setup time by checking the "Grant Perform Volume Maintenance Task privilege to SQL Server Database Engine Service" option.


If the option was not enabled during setup, or for older versions, you have to add the SQL Server service account to the Perform volume maintenance tasks local security policy on your system.
You can check how to do it here.


Lock Pages in Memory


This option allows a given OS process to keep data in memory instead of paging it to disk (virtual memory).
By enabling it, we can improve performance because the SQL Server process reads pages from physical memory (provided, of course, that the page is available in memory).

To check if Lock Pages in Memory is enabled


Run the following query:


/*Lock pages in memory*/
SELECT MN.locked_page_allocations_kb -- greater than 0 when pages are locked
FROM sys.dm_os_memory_nodes AS MN
INNER JOIN sys.dm_os_nodes AS N
 ON MN.memory_node_id = N.memory_node_id
WHERE N.node_state_desc <> 'ONLINE DAC';

If lock pages in memory is enabled, the value will be greater than 0.
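
As an alternative check, you can also search the error log for the startup message SQL Server writes when it locks pages, similar to the instant file initialization check above:

EXEC master..xp_readerrorlog 0, 1, 'locked pages';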

To enable Lock Pages in Memory


You have to add the SQL Server service account to the Lock pages in memory local security policy.
You can check how to do it here.


Also remember, these options are already checked in the Server Dashboard report available in my free tools.

Sources:

https://docs.microsoft.com/en-us/sql/relational-databases/databases/database-instant-file-initialization?view=sql-server-2017

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/enable-the-lock-pages-in-memory-option-windows?view=sql-server-2017


Thursday, June 21, 2018

What is Page Life Expectancy (PLE)?

[Image: Which cup would you prefer, to make the fewest trips to the coffee maker?]
Let us imagine this: every morning when you get to the office, you must drink coffee to be fully awake. If you are a coffee lover, maybe you have your own personalized 24 oz cup for it.
Let's say one day you lose your cup and have to grab one of the tiny, ugly cups available in the office. Now you have to refill it several times to drink the same 24 oz, which means leaving your desk several times.

Something similar happens with Page Life Expectancy (PLE). This is a measure that indicates how long a data page stays in memory (in seconds), so it can be reused by other SQL processes without having to read the same page from disk again (which translates into less I/O and less overhead). With this in mind, the higher the value the better. But how high is high enough for our system?


Right PLE value for my SQL Server


A long time ago, when servers were not so powerful, the recommendation was to have a PLE higher than 300. But times have changed, and servers too, so the current recommendation is a PLE higher than 300 for every 4 GB of RAM assigned to SQL Server if you have Max Server Memory set (or for your total server memory if you don't have a maximum set for SQL Server). For example, a server with 64 GB assigned to SQL Server should target a PLE above (64 / 4) * 300 = 4,800 seconds.

Keep in mind that you have to measure this value over a period of time and then average it, both to get a more consistent value and to identify sudden drops you can correlate with the queries executed at that time.

To calculate it


I take it from the performance counters, more specifically from the Buffer Manager object.
To calculate it, I use the following T-SQL (please note that this only gives you the current value):


DECLARE @SQLmemory NUMERIC(9, 2)
DECLARE @Maxmemory INT

-- Total physical memory of the server, converted from KB to GB
SET @SQLmemory = (
  SELECT physical_memory_kb / 1024.0 / 1024.0
  FROM sys.dm_os_sys_info
  )

-- Max server memory setting, in MB (2147483647 means no maximum set)
SET @Maxmemory = (
  SELECT CONVERT(INT, value_in_use)
  FROM sys.configurations
  WHERE [name] = 'max server memory (MB)'
  )

-- If a maximum is set, use it (converted to GB) instead of physical memory
IF (@Maxmemory) <> 2147483647
BEGIN
 SET @SQLmemory = @Maxmemory / 1024.0
END

SELECT [counter_name]
 ,@SQLmemory AS [Server Memory_GB]
 ,CAST([cntr_value] AS INT) AS [PLE_Current]
 ,(@SQLmemory / 4) * 300 AS [PLE_Expected] -- 300 seconds per 4 GB of memory
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Manager%'
 AND [counter_name] = 'Page life expectancy'
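
Since the query above only returns the current value, here is a minimal sketch of how you could sample PLE over time; the one-minute delay and ten samples are arbitrary choices you should adjust to your environment:

/*Sketch: sample PLE every minute, ten times, then summarize*/
CREATE TABLE #PLE (sample_time DATETIME, ple_seconds BIGINT);

DECLARE @i INT = 0;
WHILE @i < 10
BEGIN
 INSERT INTO #PLE (sample_time, ple_seconds)
 SELECT GETDATE(), [cntr_value]
 FROM sys.dm_os_performance_counters
 WHERE [object_name] LIKE '%Manager%'
  AND [counter_name] = 'Page life expectancy';

 SET @i += 1;
 WAITFOR DELAY '00:01:00'; -- one minute between samples
END

SELECT AVG(ple_seconds) AS [PLE_Avg]
 ,MIN(ple_seconds) AS [PLE_Min] -- sudden drops show up here
FROM #PLE;

DROP TABLE #PLE;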

What if the value is too low?


This means SQL Server has to flush memory very often. This could be a symptom that your SQL Server is struggling with the currently allocated memory, so you need to tune your database by checking (but not limited to) the following:
  • Check your indexes (missing or unused); a starting-point query is sketched after this list
  • Select statements that return data you don't need
  • Tune your queries
  • Check your statistics
  • Check for ad-hoc queries
If all of the above does not work, then you should consider expanding the server's memory.
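
For the first item on the list, a hedged starting point is the missing-index DMVs; treat their output as suggestions to review, never as indexes to create blindly:

/*Top candidate missing indexes by estimated impact (suggestions only)*/
SELECT TOP (10)
 D.[statement] AS [table_name]
 ,D.equality_columns
 ,D.inequality_columns
 ,D.included_columns
 ,S.user_seeks
 ,S.avg_user_impact
FROM sys.dm_db_missing_index_details AS D
INNER JOIN sys.dm_db_missing_index_groups AS G
 ON D.index_handle = G.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats AS S
 ON G.index_group_handle = S.group_handle
ORDER BY S.user_seeks * S.avg_user_impact DESC;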


There are two posts I want to recommend for further reading on this topic:

NUMA considerations for PLE from Paul Randal
Fixing PLE from Steve Hood

Monday, June 11, 2018

Collation considerations to create robust queries

Is your code "This" robust?
What is collation? In short: it is the set of sorting rules, case, and accent sensitivity properties for your data (if you want to read more, click here).


If you are dedicated to database development, or you provide support for a lot of user databases, then you must be aware of the database's collation at the moment you write your queries, to avoid execution failures at the deployment stage.


In this post, I will show you some of the most common collation-related errors and how to avoid them, so you can write solid T-SQL queries.


1. Your DEV machine (all defaults)


If you just installed SQL Server on your dev machine using all the defaults, including collation, it is very likely you won't have any issues of this kind, so let us take the following T-SQL as an example:

USE School;

--Current database and master
SELECT name, collation_name
FROM sys.databases
WHERE database_id = DB_ID() OR database_id = 1;


SELECT DEP.* 
FROM Department dep

It will execute without any issues:


So now, let us change the collation for the School database:


USE [master];

ALTER DATABASE [School] COLLATE SQL_Latin1_General_CP1_CS_AS;


Collations are different now

After this change, the fun begins...

2. The column prefix '<something>' does not match with a table name or alias name used in the query


Executing the same T-SQL code, now with a different collation, will raise an error:



To solve this, verify that the case of the aliases, objects, and properties is consistent, and make it the same across your whole query:
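
Applied to our example query, the fix would look like this, with the alias written in the same case everywhere:

SELECT DEP.*
FROM Department AS DEP;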




3. The multi-part identifier "<something>" could not be bound


Our second error: if we add a WHERE clause to the query, again with a different case, an error will show:
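
For example (assuming the Department table has a DepartmentID column, as in the School sample database), this fails under the case-sensitive collation:

SELECT DEP.*
FROM Department AS DEP
WHERE dep.DepartmentID = 1; -- "dep" no longer matches the alias "DEP"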



To solve this, as with the previous error, verify that the case of the aliases, objects, and properties is consistent, and make it the same across your whole query:
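
The same hypothetical query with the alias case matched everywhere runs fine:

SELECT DEP.*
FROM Department AS DEP
WHERE DEP.DepartmentID = 1;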



4. Cannot resolve the collation conflict between "<Collation 1>" and "<Collation 2>" in the equal to operation


For our third error, we will use a different query that uses tables from different databases (and maybe different collations):


SELECT QUOTENAME(SU.[name]) AS UserName
 ,QUOTENAME(SP.[name]) AS PrincipalName
FROM sys.sysusers AS SU -- Table located on the database
LEFT JOIN sys.server_principals AS SP -- Table located on [master]
ON SU.[name] = SP.[name]
AND SP.[type] = 'S';

Even when we have applied the case rules explained before, we get another kind of error when the databases use different collations:



To solve this, you must apply the same collation to both sides of the join, using COLLATE DATABASE_DEFAULT, as you can see here:
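
Applied to the query above, the join becomes:

SELECT QUOTENAME(SU.[name]) AS UserName
 ,QUOTENAME(SP.[name]) AS PrincipalName
FROM sys.sysusers AS SU -- Table located on the database
LEFT JOIN sys.server_principals AS SP -- Table located on [master]
ON SU.[name] COLLATE DATABASE_DEFAULT = SP.[name] COLLATE DATABASE_DEFAULT
AND SP.[type] = 'S';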



One important thing: you only have to make this change where you use cross-database joins; if your joins are within the same database, it is not necessary.


Now you have the knowledge to build more robust T-SQL queries that can be deployed to a wide range of environments and configurations, with a much lower chance of failure.