Tuesday, May 7, 2019

Embed Azure Data Studio Notebooks in your website

Notebooks are a feature of Azure Data Studio that allows you to create and share documents containing text, code, images, and query results. These documents are helpful for sharing database insights and creating runbooks that you can distribute easily.

Are you new to notebooks? Don't know what they are used for? Want to know how to create your first notebook? Then you can get started with ADS notebooks by checking my article for MSSQLTips.com here.

Once you have created your first notebooks and shared them with your team, maybe you want to publish them on your website or blog for public access.
Even though you can share the file for download, you can also embed it in the HTML code.

In this post, I will show you how to do it.

What do you need?


We will use an online HTML converter, nbviewer, provided by the Jupyter website. On its homepage, you just have to provide the link to your .ipynb file (my GitHub notebook repository for this example).

It looks something like this:


After clicking the Go! button, the next window will show you the rendered notebook:


At this point, you could share this link on your site and users can click it to see your notebook contents, but what if you want to show the results directly on your website?

Embedding it into your website


You can use the IFrame HTML tag (reference here); with this tag, you can embed an external URL in your HTML code (just be aware of the security risks of embedding external code in your application).
The code should look like this:


<iframe
width="600px" height="800px" 
src="your nbviewer URL" >
</iframe>

The final result is this:



Not the best possible way ¯\_(ツ)_/¯, but it is something.

Going further

If you want to fine-tune the above results or host them on your own website, you can check the nbviewer GitHub repository and run the code locally.

Thursday, April 11, 2019

Creating logins and users in Azure Database

Azure SQL Database is the PaaS solution for SQL Server databases; in a previous post we discussed how to create one.

In this post, I want to show you how you can secure your Azure SQL Database by creating users and segregating their permissions.

When you connect to your Azure SQL Database using SSMS (or another tool), you can see that the management options are very limited compared to an on-premises instance.




If you want to create a login and database user, you must create them via T-SQL, and in this post I will show you how to do it.

Types of logins


Azure SQL Database supports two types of logins: SQL Server logins and Azure Active Directory logins.

In order to create Azure AD logins, you must first set up an AD administrator using the Azure portal. You configure it on the server dashboard, then access the Active Directory admin option, as follows:



Once you set up your AD admin, you can connect to the Azure database using this account and then assign proper access to other AD accounts.

Creating logins and users


As mentioned before, you must create users and assign permissions using T-SQL. The basic script for creating users is as follows (this is for SQL logins).

Note that as a PaaS offering, you connect to only one database at a time, so the USE <database> command is not supported on Azure SQL Database; you must run each T-SQL script while connected to the required database.
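For illustration, here is what happens if you try to switch databases with USE on Azure SQL Database (a quick sketch; the exact error text may vary by service version):

-- Connected to an Azure SQL Database
USE master;

-- Msg 40508, Level 16, State 1
-- USE statement is not supported to switch between databases.
-- Use a new connection to connect to a different database.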

Script #1 for creating the login

/******* Run This code on the MASTER Database ******/

-- Create login,

CREATE LOGIN az_read
 WITH PASSWORD = 'YourStrongP@ssW0rd' 
GO


-- Create user in master database (so the user can connect using ssms or ADS)
CREATE USER az_read
 FOR LOGIN az_read
 WITH DEFAULT_SCHEMA = dbo
GO

Script #2 - For the user database you want to provide access to:

/******* Run This code on the Database you want to give the access ******/

-- The user database where you want to give the access

CREATE USER az_read
 FOR LOGIN az_read
 WITH DEFAULT_SCHEMA = dbo
GO

-- Add user to the database roles you want
EXEC sp_addrolemember N'db_datareader', N'az_read'
GO

Explaining it:

You first need to create the login and set its password, following the Azure strong password requirements.
Then, if the user plans to connect to the instance using SSMS, ADS, or another tool where a default database is not specified, you must create the user in the master database (without roles, unless specific access is required).
The next step is to create the user on the database you want to provide access to.
Finally, you assign the roles you want for that particular user.

After that, you can connect with the user and verify the respective access:



For creating logins from Azure Active Directory, the script changes a little: you must create the login while connected to the database using another AD account (the administrator we configured earlier, or another AD user with enough privileges), then you specify the AD account followed by FROM EXTERNAL PROVIDER.

Once you are connected, you only need to change the first script, as follows:


/******* Run This code on the MASTER Database ******/

-- Create the login from the external provider

CREATE LOGIN [epivaral@galileo.edu] FROM EXTERNAL PROVIDER;
GO

-- Create user in master database (so the user can connect using ssms or ADS)
CREATE USER [epivaral@galileo.edu]
 FOR LOGIN [epivaral@galileo.edu]
 WITH DEFAULT_SCHEMA = dbo
GO

There is no change in script 2 to provide access to the user on a specific database.
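For reference, here is script #2 applied to the AD account (the same pattern as before, just with the external account name; adjust the database role as needed):

/******* Run This code on the Database you want to give the access ******/

CREATE USER [epivaral@galileo.edu]
 FOR LOGIN [epivaral@galileo.edu]
 WITH DEFAULT_SCHEMA = dbo
GO

-- Add the user to the database roles you want
EXEC sp_addrolemember N'db_datareader', N'epivaral@galileo.edu'
GO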

Contained User


For more secure environments, you can create contained database users. This approach provides a more portable database, with no need to worry about logins, and it is the recommended way to grant users access to Azure databases.

In order to create a contained user, just connect to the database you want to provide access to, and run the CREATE USER script as follows:


/**** Run the script on the Azure database you want to grant access *****/

CREATE USER contained_user 
WITH PASSWORD = 'YourStrongP@ssW0rd';

GO
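As with the previous scripts, you can then add the contained user to the database roles it needs; for example, to grant read access:

-- Run on the same database where the contained user was created
EXEC sp_addrolemember N'db_datareader', N'contained_user'
GO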

After creating the contained user, you can use it by specifying the database you want to connect to in the connection options (for this example, using Azure Data Studio):


You can see in the object explorer that we only have access to the database we connected to, improving security and portability:



You can read more about this in the Microsoft official documentation here.

Wednesday, March 13, 2019

Quick tip: Zoom in Azure Data Studio

If you have finally given Azure Data Studio a try, and you use it on a regular basis, maybe you want to customize it to suit your needs.

Among the huge customization options it has, you can control the text size in the form of zoom. To change it, just use the following keyboard combinations:

  • (Ctrl + = ) For Zoom in.
  • (Ctrl + - ) For Zoom out.
  • (Ctrl + 0) For Zoom reset.
 You can see it in action:


Also, as with everything in this tool, you can access this functionality from the command palette (Ctrl + Shift + P) by typing "zoom" (you can fine-control the editor zoom from here):


If you haven't tried it yet, you can download Azure Data Studio here.

Saturday, February 16, 2019

SQL Saturday 828 - T-SQL Basics: Coding for performance

A great experience!
Thanks to all the attendees of my session about T-SQL; for my first time as a speaker at a SQL Saturday, it went well!

As I promised, the presentation and session material is available at the following links:

SQLSaturday #828 site:
(Please evaluate my session if you attended)

https://www.sqlsaturday.com/828/Sessions/Details.aspx?sid=87912


My personal GitHub:

https://github.com/Epivaral/Scripts/tree/master/T-SQL%20Basics%20coding%20for%20performance

Some pictures from the event:



SQL Server local users group board!




Monday, February 11, 2019

Quick tip: Speeding up deletes from SSIS execution log

If you have SQL Server Integration Services installed on your server and you left the default configuration, a table named sysssislog is created in the msdb database; it contains logging entries for packages executed on that instance.

If you are not careful, this table can grow uncontrollably over time and make subsequent insertions very slow.

A proper deletion process must be put in place so you do not get into situations like this one in your msdb database:



If you are already in this situation, you can use the following T-SQL script to delete records in batches:


DECLARE @date_del datetime,
  @batch_size int = 1000, -- will delete on batches of 1000 records
  @RowsAffected int =1

-- Time to keep in the history, in our case 1 month
SET @date_del= DATEADD(mm,-1,getdate()); 

SET NOCOUNT ON;

WHILE (@RowsAffected >0)
BEGIN
 DELETE TOP(@batch_size) 
 FROM [dbo].[sysssislog]
 WHERE starttime < @date_del;

 SET @RowsAffected = @@ROWCOUNT;

 -- If you want to know rows affected, uncomment this:
 -- PRINT @RowsAffected;
END

SET NOCOUNT OFF;


After that, you can add the same query to your msdb maintenance job to have everything in one place.
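If you prefer to keep the logic in a single object that a SQL Agent job step can call, one option is to wrap the batch delete in a stored procedure in msdb. This is a minimal sketch: usp_purge_sysssislog is a hypothetical name, and the retention and batch size defaults are assumptions you can adjust:

/**** Run this script on the msdb database ****/

CREATE PROCEDURE dbo.usp_purge_sysssislog
 @months_to_keep int = 1, -- assumed retention: 1 month, adjust as needed
 @batch_size int = 1000   -- assumed batch size, adjust as needed
AS
BEGIN
 SET NOCOUNT ON;

 DECLARE @date_del datetime = DATEADD(mm, -@months_to_keep, GETDATE());
 DECLARE @RowsAffected int = 1;

 WHILE (@RowsAffected > 0)
 BEGIN
  -- Same batched delete as the script above
  DELETE TOP(@batch_size)
  FROM dbo.sysssislog
  WHERE starttime < @date_del;

  SET @RowsAffected = @@ROWCOUNT;
 END
END
GO

-- A job step can then simply run:
-- EXEC msdb.dbo.usp_purge_sysssislog;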

Tuesday, February 5, 2019

I am speaking at SQLSaturday Guatemala 2019




I’m very thrilled to announce that I will be participating as a speaker in this year’s SQL Saturday #828 event in Guatemala City!
This will be my first time as a speaker on a SQLSaturday.

The event will take place on February 16 at Universidad Francisco Marroquin, Calle Manuel F. Ayau (6 Calle final), zona 10, Guatemala.

Here are the details of the session I will be presenting (at 3:15 PM CST in the Dev Room):

T-SQL Basics: Coding for performance


It is very common in the IT field for a developer to switch to being a database developer or administrator; even though the programming concepts are the same, the skillset required to code T-SQL is different.
In this session, we will learn some basic tips to improve our code and database performance, from early application planning stages to already deployed applications.

We will also see some demos about:
  • Compatibility level and deprecated features
  • Filtering basics: SARGABLE arguments
  • Covering indexes
  • Indexed views
  • Implicit conversions
  • Memory Grants
  • Joining records with NULL 
  • DMOs to find top resource intensive queries
  • Collation: considerations when working with multiple databases

I will show you execution plans using an excellent tool called Plan Explorer from SentryOne; the best thing is that it is free.
You can download it from here.


As with any SQL Saturday event organized by PASS, you can register for free; it takes less than 5 minutes to get in and sign up:

https://www.sqlsaturday.com/828/registernow.aspx


Hoping to see you there!

Wednesday, January 30, 2019

Understanding and working with NULL in SQL Server

Graphic representation of the difference between 0 and NULL
Image taken from 9gag.com
According to database theory, a good RDBMS must implement a marker to indicate "Missing or inapplicable information".

SQL Server implements the lack of value with NULL, which is datatype independent and indicates a missing value, so logical validations work differently. This is better known as three-valued logic, where any predicate can evaluate to True, False, or Unknown.

I see a common error of referring to null as "null values", but the correct definition of null is "lack of value", so you must refer to it as null, in singular form.

In this post, we will learn how to work with null in SQL Server.

Declaring and assigning NULL


To work with null, the SQL Server engine uses the reserved word NULL to refer to it.
It is datatype independent, so you just have to assign it to any variable or field using the equal operator =, as you can see in this example:

DECLARE @NVi as int = NULL;
DECLARE @NVc as nvarchar(30) = NULL;
DECLARE @NVv as sql_variant = NULL;

SELECT @NVi, @NVc, @NVv;

If we run the statement, we will obtain these results, the same for each data type, as expected.



For inserting and updating fields with NULL, we do it as in this example:


-- For inserting values
INSERT INTO test1..Table_1 (column_char,column2)
VALUES(NULL, NULL);

-- For updating values
UPDATE test1..Table_1
SET column2 = NULL;

Be careful when working with null: the equal operator = is only used for assignment, not for comparison.

Operations and comparison against NULL


As we stated earlier, any predicate or comparison can evaluate to TRUE, FALSE, or UNKNOWN. When a value is unknown, we don't know if it is true or false, so any comparison or operation involving an unknown value is also unknown.

For example, the result of the following operations is NULL in all cases:

--Arithmetic operations
SELECT NULL+5,NULL-3.47, NULL*3.1416, NULL/0; 

SELECT SQRT(NULL), POWER(NULL,2);

--String operations
SELECT 'HELLO ' +NULL + 'WORLD';

SELECT QUOTENAME(NULL);

SELECT LTRIM(NULL);

--Date operations

SELECT DATEADD(m,1,NULL);

SELECT DATEDIFF(m,GETDATE(),NULL);

When comparing to null, we also obtain null, as in these examples. As you can see, even comparing null to null is unknown, and when we execute the code below, we obtain NO in all cases:


--comparing to 0
IF(0= NULL) OR (0 <> NULL)
 SELECT 'YES'
ELSE
 SELECT 'NO' 

--Comparing to empty string ''
IF(''= NULL) OR (''<> NULL)
 SELECT 'YES'
ELSE
 SELECT 'NO' 

--Even comparing to another null
IF(NULL= NULL) OR (NULL<> NULL)
 SELECT 'YES'
ELSE
 SELECT 'NO' 

So, if we want to check whether a value or column is null or not, what must we do?
SQL Server implements the IS NULL and IS NOT NULL operators to compare against null; usage is as follows:


-- IS NULL usage

SELECT *
FROM test1..Table_1
WHERE column2 IS NULL;


-- IS NOT NULL usage

SELECT *
FROM test1..Table_1
WHERE column_char IS NOT NULL;


-- Using on IF construct

DECLARE @NVi as int = NULL;

IF(@NVi IS NULL)
 SELECT 'YES'
ELSE
 SELECT 'NO';


-- For replacing NULL with a default
-- value, you can use the ISNULL() function

SELECT ISNULL(@NVi,0);


With these tools, we are ready to work with null in our databases. Now you should follow some considerations so you do not impact your database performance.

Special considerations for good performance


As the last point, I would like to give you some tips for dealing with NULL.

Prefer IS NULL over ISNULL()


When possible, compare predicates using IS NULL rather than converting NULL to default values using ISNULL(), because the converted values are not SARGable.

Take these two queries as an example: they are equivalent, but the first one performs better than the second:


-- First query uses an index seek :)
SELECT FirstName
      ,MiddleName
      ,LastName     
FROM Person.Person
WHERE MiddleName = N'' 
 OR MiddleName IS NULL;

-- Second query uses an index scan :(
SELECT FirstName
      ,MiddleName
      ,LastName     
FROM Person.Person
WHERE ISNULL(MiddleName,N'')=N'';

These are the execution plans:

First query execution plan, an index seek is used :)

Second query execution plan, an index scan is used :(

We get a warning on the second execution plan
You can see the differences between the two plans, so in this case we prefer to stick with the first query, even if you must write more code.

Be careful with aggregations over nonexistent data


When you perform aggregations, be extra careful with nonexistent data: even when columns do not allow null, aggregating data that does not exist in the table returns null, contrary to what one might expect (a 0 value), as you can see in this example:


-- Even when the TotalDue field does not allow NULL,
-- the SUM of nonexistent values is NULL, not 0 as one might think

SELECT SUM(TotalDue) as [Total Due]
FROM Sales.SalesOrderHeader
WHERE DueDate > GETDATE();

And the query results:


For those cases, you should use the ISNULL() function around the SUM.
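Applied to the previous query, it looks like this:

-- Wrap the aggregate with ISNULL to return 0 instead of NULL
SELECT ISNULL(SUM(TotalDue), 0) as [Total Due]
FROM Sales.SalesOrderHeader
WHERE DueDate > GETDATE();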

As I always recommend: test everything before going live, and use default values and NOT NULL columns when possible, to make your life easier.