Sunday, 1 November 2015

Import Excel data using SSIS packages - SQL Server by serverku

I have been working with various data import methods, and I want to share one of them here: using an SSIS package to import data from an Excel file.

I created an SSIS package and captured screenshots of the steps, which you can find below.
I will share more data import methods for SQL Server in future posts. Enjoy data importing!

Sunday, 25 October 2015

Use statement is not supported to switch between databases, Use a new connection to connect to different database - SQL Azure Error by serverku

A while ago, when I was working with a SQL Azure database, I received the error below.

Use statement is not supported to switch between databases. Use a new connection to connect to different database.

This happened because I was trying to switch to a different database with the USE statement in the query window while a connection to the MASTER database was already open. See the screenshot of the error below.


After I changed the database connection as shown below, it succeeded.



Let's share our SQL Azure experiences.

Sunday, 11 October 2015

SQL Server Data Collector for DBA Monitoring and Reporting - Powershell by serverku

Recently I worked on a PowerShell script that collects required data from specified databases on specified servers, for reporting and monitoring purposes in SQL Server. Let me share the script here.

How to use it?
1. Create a DBARepository database and the following tables in it.
CREATE DATABASE [DBARepository]
GO

USE [DBARepository]
GO

CREATE TABLE [dbo].[SQLServerInstances](
[InstanceID] [smallint] IDENTITY(1,1) NOT NULL,
[ServerName] [varchar](100) NULL,
[DBName] [varchar](100) NULL,
[IsActive] [bit] NULL,
[UserName] [nvarchar](200) NULL,
[Password] [nvarchar](200) NULL
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[DataCollectors](
[DCID] [smallint] IDENTITY(1,1) NOT NULL,
[DCName] [varchar](100) NULL,
[DCScript] [varchar](4000) NULL,
[PostExecScript] [varchar](4000) NULL,
[IsActive] [bit] NULL,
[Schedule] [varchar](30) NULL,
[Occurrence] [smallint] NULL,
[StartDate] [smalldatetime] NULL,
[LastRun] [smalldatetime] NULL,
[NextRun] [smalldatetime] NULL,
[InstanceIDs] [varchar](20) NULL,
[DCTable] [varchar](50) NULL
) ON [PRIMARY]

GO


CREATE TABLE [dbo].[DBAErrorLog](
[CounterName] [varchar](200) NULL,
[ServerName] [varchar](100) NULL,
[ErrorMessage] [varchar](4000) NULL,
[ErrorDate] [datetime] NULL,
[IsSolved] [bit] NULL
) ON [PRIMARY]

GO

[dbo].[SQLServerInstances] - The servers and databases from which you want to collect data
[dbo].[DataCollectors] - The collector queries with their schedules; pass the instance IDs as a comma-separated list
[dbo].[DBAErrorLog] - Logs every error raised by the script, for easy debugging

2. Create the following PS1 file and schedule it via SQL Server Agent or Windows Task Scheduler.
 
<#
File : DataCollector.ps1
Created By : Paresh Prajapati
Created date : 10/10/2015
Purpose : Collect a data from specified databases of specified servers using sql queries
#>

Try
{

$Counter = 'DataCollectors - '
$DCInstance = '.'
$DCDatabase = 'dbarepository'
$DCQuery = "SELECT DCName, DCScript,PostExecScript,InstanceIDs,DCTable,Schedule,Occurrence FROM [dbo].[DataCollectors] WHERE IsActive = 1 AND InstanceIDs IS NOT NULL AND StartDate <= GETDATE() AND ISNULL(NextRun,0) <= GETDATE()"
$dccn=new-object System.Data.SqlClient.SQLConnection
#$dcconstring = "server=$DCInstance;database=$DCDatabase;user id=uuu;password=ppp"
$dcconstring = "Server=$DCInstance;Database=$DCDatabase;Integrated Security=sspi"

#$DCScripts = invoke-sqlcmd -ServerInstance $DCInstance -Database $DCDatabase -Username uuu -Password ppp -Query $DCQuery
$DCScripts = invoke-sqlcmd -ServerInstance $DCInstance -Database $DCDatabase -Query $DCQuery
$bulkCopy = new-object ("Data.SqlClient.SqlBulkCopy") $dcconstring

$dccn.ConnectionString=$dcconstring
$dccn.Open()

if ($DCScripts -ne $null)
{

foreach ($DCScript in $DCScripts)
{
Try
{

$dt = new-object "System.Data.DataTable"

$InstanceIDs = $DCScript.InstanceIDs.ToString()
$DCName = $DCScript.DCName.ToString()
$Script = $DCScript.DCScript.ToString()
$PostExecScript = $DCScript.PostExecScript.ToString()
$DCTable = $DCScript.DCTable.ToString()
$Schedule = $DCScript.Schedule.ToString()
$Occurrence = $DCScript.Occurrence.ToString()


IF ($InstanceIDs -eq "-1")
{
$Query = "SELECT [ServerName],[DBName],[UserName],[Password] FROM [SQLServerInstances] WHERE [IsActive] = 1 AND [InstanceID] = -1"
}
ELSEIF ( $InstanceIDs -eq "0")
{
$Query = "SELECT [ServerName],[DBName],[UserName],[Password] FROM [SQLServerInstances] WHERE [IsActive] = 1 AND [InstanceID] > 0"
}
ELSE
{
$Query = "SELECT [ServerName],[DBName],[UserName],[Password] FROM [SQLServerInstances] WHERE [IsActive] = 1 AND [InstanceID] > 0 AND [InstanceID] IN (" + $InstanceIDs + ")"
}

$Instances = invoke-sqlcmd -ServerInstance $DCInstance -Database $DCDatabase -Query $Query

foreach ($Instance in $Instances)
{
Try
{
$Server = $Instance.ServerName.ToString()
$DBName = $Instance.DBName.ToString()
$UserName = $Instance.UserName.ToString()
$Password = $Instance.Password.ToString()


$cn = new-object System.Data.SqlClient.SqlConnection "server=$Server;database=$DBName;Integrated Security=sspi"
$cn.Open()
$DCSql = $cn.CreateCommand()
$DCSql.CommandText = $Script
$rdr = $DCSql.ExecuteReader()
$dt.Load($rdr)
$rdr.Close()
$rdr.Dispose()
$cn.close()
}
Catch
{
# Write-Host "Error Occured - Instance"

if ($cn.State -eq "Open")
{
$cn.close()
}

$ErrorMessage = "Line: " + $_.InvocationInfo.ScriptLineNumber + " - Message: " + $_.Exception.Message
$ErrorMessage = $ErrorMessage.replace("'","''")
$CounterName = $Counter + $DCName
$ErrorLogScript = "INSERT INTO DBAErrorLog (CounterName, ServerName, ErrorMessage, ErrorDate) SELECT '"+$CounterName+"', '"+$Server+"', '"+$ErrorMessage+"', GETDATE()"
$ErrorLogSQL = $dccn.CreateCommand()
$ErrorLogSQL.CommandText = $ErrorLogScript
$ErrorLogSQL.ExecuteNonQuery()

}
}

$bulkCopy.DestinationTableName = $DCTable
$bulkCopy.BatchSize = 100
$bulkCopy.BulkCopyTimeout = 100
$bulkCopy.WriteToServer($dt)
$dt.clear()
$dt.Dispose()


IF ($PostExecScript -ne $null -and $PostExecScript -ne "")
{
$PostDCSQL = $dccn.CreateCommand()
$PostDCSQL.CommandText = $PostExecScript
$PostDCSQL.ExecuteNonQuery()
}

$NextRunUpdateScript = "UPDATE [dbo].[DataCollectors] SET [LastRun] = GETDATE(), [NextRun] = DATEADD("+$Schedule+","+$Occurrence+",ISNULL([NextRun],[StartDate])) WHERE DCName = "+ "'" + $DCName + "'"
$NextRunSQL = $dccn.CreateCommand()
$NextRunSQL.CommandText = $NextRunUpdateScript
$NextRunSQL.ExecuteNonQuery()
}
Catch
{
# Write-Host "Error Occured - Script"

$ErrorMessage = "Line: " + $_.InvocationInfo.ScriptLineNumber + " - Message: " + $_.Exception.Message
$ErrorMessage = $ErrorMessage.replace("'","''")
$CounterName = $Counter + $DCName
$ErrorLogScript = "INSERT INTO DBAErrorLog (CounterName, ServerName, ErrorMessage, ErrorDate) SELECT '"+$CounterName+"', '"+$Server+"', '"+$ErrorMessage+"', GETDATE()"
$ErrorLogSQL = $dccn.CreateCommand()
$ErrorLogSQL.CommandText = $ErrorLogScript
$ErrorLogSQL.ExecuteNonQuery()

$NextRunUpdateScript = "UPDATE [dbo].[DataCollectors] SET [LastRun] = GETDATE(), [NextRun] = DATEADD("+$Schedule+","+$Occurrence+",ISNULL([NextRun],[StartDate])) WHERE DCName = "+ "'" + $DCName + "'"
$NextRunSQL = $dccn.CreateCommand()
$NextRunSQL.CommandText = $NextRunUpdateScript
$NextRunSQL.ExecuteNonQuery()

}
}
}
}
Catch
{
# Write-Host "Error Occured - Main"

$ErrorMessage = "Line: " + $_.InvocationInfo.ScriptLineNumber + " - Message: " + $_.Exception.Message
$ErrorMessage = $ErrorMessage.replace("'","''")
$CounterName = $Counter + $DCName
$ErrorLogScript = "INSERT INTO DBAErrorLog (CounterName, ServerName, ErrorMessage, ErrorDate) SELECT '"+$CounterName+"', '"+$Server+"', '"+$ErrorMessage+"', GETDATE()"
$ErrorLogSQL = $dccn.CreateCommand()
$ErrorLogSQL.CommandText = $ErrorLogScript
$ErrorLogSQL.ExecuteNonQuery()

}
Finally
{

$dccn.Close()
}

3. Sample collector tables and collector queries.
-- Master Records of servers and databases
SET IDENTITY_INSERT SQLServerInstances ON

INSERT INTO [dbo].[SQLServerInstances]
(
[InstanceID],
[ServerName],
[DBName],
[IsActive]

)
SELECT 1,'Server1', 'master', 1
UNION
SELECT 2,'Server1', 'DBARepository', 1
UNION
SELECT 3,'Server2', 'master', 1

SET IDENTITY_INSERT SQLServerInstances OFF

GO

-- Collector sample table
CREATE TABLE [dbo].[DatabaseStatistics](
[ServerName] [varchar](50) NULL,
[DBName] [varchar](50) NULL,
[DBSize_MB] [decimal](18, 2) NULL,
[CreatedDate] [date] NULL
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[TableStatistics](
[ServerName] [varchar](30) NULL,
[DBName] [varchar](50) NULL,
[TableName] [varchar](100) NULL,
[RowsTotal] [bigint] NULL,
[TotalSpace_MB] [int] NULL,
[UsedSpace_MB] [int] NULL,
[UnusedSpace_MB] [int] NULL,
[CreatedDate] [date] NULL
) ON [PRIMARY]

GO

-- Collector sample records
INSERT INTO [dbo].[DataCollectors]
(
[DCName],
[DCScript],
[PostExecScript],
[IsActive],
[Schedule],
[Occurrence],
[StartDate],
[LastRun],
[NextRun],
[InstanceIDs],
[DCTable]
)
SELECT
'Database Statistics',
'SELECT
@@Servername AS ServerName,
db.[name] AS DBName,
Sum(( Cast(( ( ( Cast(af.[size] AS NUMERIC(18, 4)) * 8192 ) / 1024 ) /
1024 ) AS
NUMERIC(18, 2)) )) AS FileSize_MB,
GETDATE() AS CreatedDate
FROM sys.sysdatabases db
INNER JOIN sys.sysaltfiles af
ON db.dbid = af.dbid
GROUP BY db.[name] ',
NULL,
1,
'day',
1,
GETDATE(),
NULL,
NULL,
'1,3',
'DatabaseStatistics'

UNION

SELECT
'Table Statistics',
'SELECT
@@Servername AS ServerName,
DB_NAME() AS DatabaseName,
Schema_name(t.schema_id) + ''.'' + t.name AS TableName,
p.rows AS Rows,
(Sum(a.total_pages) * 8)/1024 AS TotalSpace_MB,
(Sum(a.used_pages) * 8)/1024 AS UsedSpace_MB,
((Sum(a.total_pages) - Sum(a.used_pages)) * 8)/1024 AS UnusedSpace_MB ,
GETDATE() AS CreatedDate
FROM sys.tables t
INNER JOIN sys.indexes i
ON t.object_id = i.object_id
INNER JOIN sys.partitions p
ON i.object_id = p.object_id
AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a
ON p.partition_id = a.container_id
GROUP BY Schema_name(t.schema_id) + ''.'' + t.name,
p.rows',
NULL,
1,
'hour',
12,
GETDATE(),
NULL,
NULL,
'2',
'TableStatistics'

GO

Note: the data collector script and its destination table must have the same number of columns.

  •  [DCName]: Name of the data collector
  •  [DCScript]: Data collector script
  •  [PostExecScript]: Additional script run against the collected data
  •  [IsActive]: Flag controlling whether the collector script runs
  •  [Schedule]: When the script runs; possible values are Year, Month, Week, Day, Hour, Minute, Second
  •  [Occurrence]: Frequency, i.e. how many schedule units between runs
  •  [StartDate]: Start date and time of the collector
  •  [LastRun]: When it last ran; updated by the PS1 script
  •  [NextRun]: Next scheduled run of the data collector; calculated by the PS1 script
  •  [InstanceIDs]: Comma-separated instance IDs on which the script is executed to collect data
  •  [DCTable]: Name of the table that receives the query output
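The PowerShell script builds the [NextRun] update by splicing [Schedule] and [Occurrence] straight into a DATEADD call, so for the 'Table Statistics' sample above ('hour', 12) the generated statement is equivalent to:

```sql
-- Equivalent of the dynamically built NextRun update for one collector
UPDATE [dbo].[DataCollectors]
SET [LastRun] = GETDATE(),
[NextRun] = DATEADD(hour, 12, ISNULL([NextRun], [StartDate]))
WHERE DCName = 'Table Statistics'
```

Because the schedule value becomes the DATEADD datepart keyword, it must be one of the values listed above; anything else makes the generated statement fail and land in DBAErrorLog.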
You can add more collector tables and queries, each with the schedule you want to capture. I created this script for my own monitoring and reporting needs, and I am still enhancing it; please share your ideas for improvement. Enjoy the data collector!

Sunday, 27 September 2015

Take care while using Function or Sub Query in Select List - SQL Server by serverku

During development we usually just look for the code to work functionally, and we rarely consider performance at that stage. Performance comes in a second stage, where we find the issues and resolve them.

I want you to go through one example where I used a function and a subquery in the select list to get the data. Sometimes an inner join is better than a function or a subquery, but it totally depends on the data and the script you write.

How can we review it?
We have a script to set up all the objects here:
 
-- Creating tables
IF (OBJECT_ID('ItemTypes','U') > 0 )
DROP TABLE ItemTypes
GO

IF (OBJECT_ID('ItemDetails','U') > 0 )
DROP TABLE ItemDetails
GO

CREATE TABLE ItemTypes
(
ItemType varchar(100),
ItemTypeDesc varchar(100)
)
GO

CREATE TABLE ItemDetails
(
ItemId int not null,
ItemName varchar(100),
ItemType varchar(10),
ItemDescription varchar(200)
)
GO

-- Inserting records
INSERT INTO ItemDetails
SELECT
a.id,
a.name,
a.xtype,
'ItemDesc'
FROM sys.sysobjects a
CROSS JOIN sys.sysobjects b
GO

INSERT INTO ItemTypes
SELECT distinct
type,
type_desc
FROM sys.objects
GO

-- Creating function which will be used in first script below
CREATE FUNCTION dbo.fn_GetItemTypeDesc
(
@ItemType varchar(10)
)
RETURNS varchar(100)
AS
BEGIN
DECLARE @ItemTypeDesc varchar(100)

SELECT
@ItemTypeDesc = ItemTypeDesc
FROM ItemTypes
WHERE ItemType = @ItemType

RETURN @ItemTypeDesc

End
GO

Now we will run the three scripts below, which use the function, the subquery, and an inner join respectively, and compare the elapsed time and I/O statistics for each.
 
SET STATISTICS IO ON
SET STATISTICS TIME ON

SELECT
id.ItemId,
id.ItemName,
id.ItemType,
dbo.fn_GetItemTypeDesc(id.ItemType) as ItemTypeDesc,
id.ItemDescription
FROM ItemDetails id

SET STATISTICS IO OFF
SET STATISTICS TIME OFF


SET STATISTICS IO ON
SET STATISTICS TIME ON

SELECT
id.ItemId,
id.ItemName,
id.ItemType,
(SELECT it.ItemTypeDesc FROM ItemTypes it WHERE it.ItemType = id.ItemType) as ItemTypeDesc,
id.ItemDescription
FROM ItemDetails id

SET STATISTICS IO OFF
SET STATISTICS TIME OFF


SET STATISTICS IO ON
SET STATISTICS TIME ON

SELECT
id.ItemId,
id.ItemName,
id.ItemType,
it.ItemTypeDesc as ItemTypeDesc,
id.ItemDescription
FROM ItemDetails id
INNER JOIN ItemTypes it
ON (it.ItemType = id.ItemType)

SET STATISTICS IO OFF
SET STATISTICS TIME OFF
Here are screenshots of the output of each, respectively.




(Click on images to enlarge)

I hope you now have a better idea of which form executes best. Please note: always review the execution plan before applying any change for optimization.

Sunday, 20 September 2015

Impact of Nonclustered Index without Clustered Index - SQL Server by serverku

We have read many times from online sources that it is not best practice to create a non-clustered index on a table without a clustered index; the table should have a clustered index.

Have you seen that best practice in action? What is the impact on a query when a table has a non-clustered index but no clustered index? Without a clustered index, the query execution plan sometimes uses the non-clustered index and sometimes does not, depending on the WHERE condition used in the query.

I will show you here how it behaves with and without a clustered index. We will look at two scenarios: first with a normal non-clustered index, then with a covering non-clustered index. Let us create a sample table and insert some records into it.
 
--- Creating tables
IF (OBJECT_ID('ItemDetails','U') > 0 )
DROP TABLE ItemDetails
GO

CREATE TABLE ItemDetails
(
ItemAutiId int identity(1,1),
ItemId int not null,
ItemName varchar(100),
ItemType varchar(10),
ItemDescription varchar(200)
)
GO

-- Inserting sample records
INSERT INTO ItemDetails
SELECT
a.id,
a.name,
a.xtype,
'ItemDesc'
FROM sys.sysobjects a
CROSS JOIN sys.sysobjects b
GO
As discussed, we will go through the following scenarios, review the execution plan for each, and compare.

1. Normal non-clustered index behavior without and with clustered index on table.
-- Creating nonclustered index
CREATE NONCLUSTERED INDEX [IX_ItemType]
ON [dbo].[ItemDetails] ([ItemType])
GO

-- Running this query with Include Actual execution plan.
SELECT
id.ItemAutiId,
id.ItemId,
id.ItemName,
id.ItemType,
id.ItemDescription
FROM ItemDetails id
WHERE id.ItemType = 'PK'
GO

 
--Now creating a clustered index on table
CREATE CLUSTERED INDEX [IX_ItemAutiId]
ON [dbo].[ItemDetails] ([ItemAutiId])
GO

-- Again running this query with Include Actual execution plan after creating clustered index.
SELECT
id.ItemAutiId,
id.ItemId,
id.ItemName,
id.ItemType,
id.ItemDescription
FROM ItemDetails id
WHERE id.ItemType = 'PK'
GO

You can see that the first case did a table scan and the second case did a clustered index scan. Now let's move to the second scenario.

2. Covering nonclustered index behavior without and with clustered index on table.
-- Dropping existing indexes on table
DROP INDEX [IX_ItemType] ON [ItemDetails]
DROP INDEX [IX_ItemAutiId] ON [ItemDetails]

GO

-- Creating covering index on table
CREATE NONCLUSTERED INDEX [IX_ItemType]
ON [dbo].[ItemDetails] ([ItemType])
INCLUDE ([ItemId],[ItemName],[ItemDescription])
GO

-- Running this query with Include Actual execution plan.
SELECT
id.ItemAutiId,
id.ItemId,
id.ItemName,
id.ItemType,
id.ItemDescription
FROM ItemDetails id
WHERE id.ItemType = 'PK'
GO

-- Creating clustered index on table
CREATE CLUSTERED INDEX [IX_ItemAutiId]
ON [dbo].[ItemDetails] ([ItemAutiId])
GO

-- Again running this query with Include Actual execution plan after creating clustred index.
SELECT
id.ItemAutiId,
id.ItemId,
id.ItemName,
id.ItemType,
id.ItemDescription
FROM ItemDetails id
WHERE id.ItemType = 'PK'
GO


In this scenario the first case did a table scan and the second case did a non-clustered index seek. Do you have any further insight from your own experience with this?

Thursday, 17 September 2015

Enhanced Mitigation Experience Toolkit 5.2 by serverku



I recently came across this utility from Microsoft that helps stop malware on systems. As I am currently working in a Security Operation Center (SOC), it is one tool to test out and see how it really works.

The tool can be downloaded from the Microsoft website below.

https://www.microsoft.com/en-us/download/details.aspx?id=46366

I will post feedback if I get time.

Thanks

Sunday, 13 September 2015

Contained Database example in SQL Server by serverku

SQL Server 2012 introduced a new feature named "Contained Database", which keeps all database settings and metadata inside the database itself, so a backup carries them along. It allows us to move a database, together with its users, to another server with no dependency on the source instance. Let us look at an example.

1. Enable a setting
 
-- Enable setting for contained database
Use master
GO
exec sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE
GO
exec sp_configure 'contained database authentication', 1;
RECONFIGURE WITH OVERRIDE
GO
2. Enable a feature for a database
 
-- Create a new contained database

CREATE DATABASE [ContainedDB]
CONTAINMENT = PARTIAL
ON PRIMARY
( NAME = N'ContainedDB_Data', FILENAME = N'D:\Data\ContainedDB_Data.mdf')
LOG ON
( NAME = N'ContainedDB_Log', FILENAME = N'D:\Log\ContainedDB_Log.ldf')
GO

-- Or change existing database to contained
ALTER DATABASE [ContainedDB] SET CONTAINMENT = PARTIAL
GO
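As a quick check (a small sketch; the database name matches the example above), you can confirm the containment level from sys.databases:

```sql
-- containment_desc returns PARTIAL for a contained database
SELECT name, containment_desc
FROM sys.databases
WHERE name = N'ContainedDB'
```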
3. Create a new SQL User
 
-- Create a new SQL User in this contained database
USE [ContainedDB]
GO
CREATE USER ContainedDBUser WITH PASSWORD = 'ContainedDBUser'
GO
Checking: let us log in with the user we created; the error below occurs.


Now change a setting when logging in with this user:

1. Click 'Options >>'
2. Go to the 'Connection Properties' tab
3. Set 'Connect to database' to "ContainedDB"


After applying this option you will succeed. So whenever you want to move a contained database to another server, just move it and log in using the options stated above. Enjoy contained databases!

Sunday, 6 September 2015

DataType can matter in where condition - a worst case scenario by serverku

We should use filters in WHERE conditions that match the column's data type: if the filtered column is varchar, use a varchar variable or value; if it is an integer, the variable or value should be an integer.

Worst case,
 
-- #1.
-- If the OrderId column is VARCHAR, this causes a table scan,
-- because every OrderId value must be converted from varchar to int,
-- even if an index exists on the column
SELECT *
FROM OrderDetails
WHERE OrderId = 123
Best case,
 
-- #1.
-- If the OrderId column is INT, this works fine
SELECT *
FROM OrderDetails
WHERE OrderId = 123

-- #2.
-- If the OrderId column is INT, this also works fine,
-- because only the single value '123' is converted from varchar to int
SELECT * FROM OrderDetails
WHERE OrderId = '123'
Create a table with sample records using the different data types mentioned in the queries above and check the execution plans; you will see the difference between them.
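A minimal repro sketch (the table layout is an assumption based on the queries above):

```sql
-- Hypothetical OrderDetails with OrderId stored as VARCHAR
CREATE TABLE OrderDetails
(
OrderId varchar(20) NOT NULL,
OrderDate datetime NOT NULL
)
GO
CREATE NONCLUSTERED INDEX IX_OrderId ON OrderDetails (OrderId)
GO

-- Implicit conversion on the column side: the index cannot be seeked
SELECT * FROM OrderDetails WHERE OrderId = 123

-- Matching types: index seek
SELECT * FROM OrderDetails WHERE OrderId = '123'
GO
```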

Sunday, 30 August 2015

Table scan or Unexpected output due to improper where condition applied for DateTime filter by serverku

When using a date filter in a query, for example to find records for a particular year or month, we often apply extra data type conversions to the column. Below are some examples.

Should not use,
 
-- #1. : This causes a table scan
SELECT *
FROM OrderDetails
WHERE datepart(month,Orderdate) =?
AND datepart(year,Orderdate) = ?

-- #2. : This may give unexpected output (and also causes a scan)
SELECT *
FROM OrderDetails
WHERE convert(varchar,OrderDate,112) between
'20110101' and '20110131'
Should use,
 
-- #1. : Sargable open-ended range; the column itself is not converted
SELECT *
FROM OrderDetails
WHERE Orderdate >= '20110101'
AND Orderdate < '20110201' -- EndDate + 1

-- #2. : BETWEEN with an explicit end time also leaves the column
-- untouched, but rows between 23:59:59.900 and midnight are missed,
-- so the open-ended range in #1 is safer
SELECT *
FROM OrderDetails
WHERE OrderDate BETWEEN
'2011-01-01 00:00:00.000' AND '2011-01-31 23:59:59.900'
Create a table with sample records, or use an existing table that has a datetime column; run the queries above and check the results along with the execution plan. Hope you like it.

Sunday, 23 August 2015

COUNT(*) with a LEFT JOIN may produce unexpected results - SQL Server by serverku

We should not use COUNT(*) when LEFT OUTER JOINing two or more tables with GROUP BY. Instead, count a column of the right table; otherwise unmatched rows are still counted and the query returns unexpected output.

Query that causes unexpected output
 
SELECT
a.id,
COUNT(*)
FROM table1 a
LEFT JOIN table2 b ON (a.id=b.id)
GROUP BY a.id
GO
Right way
 
SELECT
a.id,
COUNT(b.id)
FROM table1 a
LEFT JOIN table2 b ON (a.id=b.id)
GROUP BY a.id
GO
Create the tables with sample records, run the queries above, and see the difference. Did you know this already?
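A minimal sketch (hypothetical sample data) showing the difference side by side: a row in table1 with no match in table2 still contributes 1 to COUNT(*), while COUNT(b.id) correctly returns 0 for it.

```sql
-- Hypothetical sample data
CREATE TABLE table1 (id int)
CREATE TABLE table2 (id int)
GO
INSERT INTO table1 VALUES (1), (2)   -- id 2 has no match in table2
INSERT INTO table2 VALUES (1), (1)
GO

-- For id 2: CountAll = 1 (unmatched row is counted), CountMatched = 0
SELECT a.id, COUNT(*) AS CountAll, COUNT(b.id) AS CountMatched
FROM table1 a
LEFT JOIN table2 b ON (a.id = b.id)
GROUP BY a.id
GO
```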

Sunday, 16 August 2015

Trace the query with specific session in SQL Server Profiler by serverku

One more enhancement I found in the Query Editor of SQL Server 2012 and later: we can trace a query for the current session in SQL Server Profiler. Just right-click in the Query Editor and choose the option Trace Query in SQL Server Profiler, or press CTRL + ALT + P, which opens SQL Profiler filtered to that session.

Below is a screen shot for the same.


Did you know this, or have you ever used it?

Sunday, 9 August 2015

Linked Server and OPENQUERY with Excel Source - SQL Server by serverku


As I wrote about linked servers with the SQL Server option in a previous post, another option is available: using other data sources as a linked server. With SQL Server as a linked server we can communicate between two SQL Server instances, but we can also reach other data sources such as Excel and CSV files. Let us link our SQL Server with Excel and fetch the data from the workbook. We need to go through the same steps as for a SQL Server linked server, but choose the other data source option.


How can we do it using TSQL?
 
USE [master]
GO

EXEC [master].[dbo].[sp_addlinkedserver]
@server='ExcelImport',
@srvproduct='Excel',
@provider='Microsoft.Jet.OLEDB.4.0',
@datasrc='D:\Import\Import_1.xls',
@provstr= 'Excel 8.0'

GO

EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'collation compatible', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'data access', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'dist', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'pub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'rpc', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'rpc out', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'sub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'connect timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'collation name', @optvalue=null
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'lazy schema validation', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'query timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'use remote collation', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'EXCELIMPORT', @optname=N'remote proc transaction promotion', @optvalue=N'true'
GO
USE [master]
GO
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'EXCELIMPORT', @locallogin = NULL , @useself = N'False'
GO
Other data source types are also available, as shown below.

Now, after creating Excel as a linked server, it is time to communicate with it and fetch data through the linked server and OPENQUERY.
 
SELECT
*
FROM ExcelImport...[Sheet1$]

-- OR --

SELECT
*
FROM OPENQUERY(ExcelImport, 'SELECT * FROM [Sheet1$]')
Your comments are appreciated.

Sunday, 2 August 2015

Working with Linked Servers in SQL Server by serverku

We may need to move data from a database on one server to a database on another server in a production environment; we also require linked servers for distributed transactions and cross-server database queries.

How does it work? With linked servers we can perform distributed transactions between servers, and we can also execute stored procedures on remote servers through the linked server connection.

How to create it?
Here is a small demo which I captured during setup.
  1. Go to Server Objects --> Linked Servers, right-click it and click New Linked Server.
  2. Specify the server name which you want to link to the current server.
  3. Go to the Security tab and select the option for the link.
The option details, per MSDN and Books Online, are as follows.

Local login : Specify the local login that can connect to the linked server. The local login can be either a login using SQL Server Authentication or a Windows Authentication login. Use this list to restrict the connection to specific logins, or to allow some logins to connect as a different login.

Impersonate : Pass the username and password from the local login to the linked server. For SQL Server Authentication, a login with the exact same name and password must exist on the remote server. For Windows logins, the login must be a valid login on the linked server.

Remote User : Use the remote user to map users not defined in Local login. The Remote User must be a SQL Server Authentication login on the remote server.

Remote Password : Specify the password of the Remote User.

Not be made : Specify that a connection will not be made for logins not defined in the list.

Be made without using a security context : Specify that a connection will be made without using a security context for logins not defined in the list.

Be made using the logins current security context : Specify that a connection will be made using the current security context of the login for logins not defined in the list. If connected to the local server using Windows Authentication, your windows credentials will be used to connect to the remote server. If connected to the local server using SQL Server Authentication, login name and password will be used to connect to the remote server. In this case a login with the exact same name and password must exist on the remote server.

Be made using this security context : Specify that a connection will be made using the login and password specified in the Remote login and With password boxes for logins not defined in the list. The remote login must be a SQL Server Authentication login on the remote server.

We also have some server options, like RPC and RPC Out, as follows.


We have seen how to create a linked server from SSMS. Now here is a script to create a linked server as well.
 USE [master]
GO

EXEC master.dbo.sp_addlinkedserver @server = N'PARESH-PC1', @srvproduct=N'SQL Server'
GO

EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'collation compatible', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'data access', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'dist', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'pub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'rpc', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'rpc out', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'sub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'connect timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'collation name', @optvalue=null
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'lazy schema validation', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'query timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'PARESH-PC1', @optname=N'use remote collation', @optvalue=N'true'
GO

USE [master]
GO
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'PARESH-PC1', @locallogin = NULL , @useself = N'False', @rmtuser = N'dba', @rmtpassword = N'dba@test'
GO

How to get the list of linked servers?

SQL Server provides a system stored procedure as well as a system catalog view from which we can get these details:
  • sp_linkedservers
  • sys.servers
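Either can be used; for example (a quick sketch), the catalog view can be filtered directly:

```sql
-- Linked servers registered on this instance
SELECT name, product, provider, data_source
FROM sys.servers
WHERE is_linked = 1
```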
 
-- 1. Fetch data from a remote table using T-SQL
-- (server names with hyphens must be bracketed)

SELECT
*
FROM [PARESH-PC1].demo.dbo.demotable

-- 2. Run a remote stored procedure

EXECUTE [PARESH-PC1].demo.dbo.DemoSP 'test_param'
I hope you enjoyed linked servers. Please share in the comments what you are doing with linked servers.

Sunday, 26 July 2015

Missing index suggestion by Execution Plan - SQL Server by serverku

The index is one of the most important factors in optimization and performance. Whenever we work on query optimization we mostly look at indexes first, then check other factors, and we schedule periodic index maintenance jobs as well. Index usage and statistics reports give us ideas for index optimization, but the SQL Server query execution plan also gives suggestions via a missing-index hint. We will look at a demonstration of the same with the execution plan.
--- Creating tables
IF (OBJECT_ID('ItemTypes','U') > 0 )
DROP TABLE ItemTypes
GO

IF (OBJECT_ID('ItemDetails','U') > 0 )
DROP TABLE ItemDetails
GO

CREATE TABLE ItemTypes
(
ItemType varchar(100),
ItemTypeDesc varchar(100)
)
GO

CREATE TABLE ItemDetails
(
ItemId int not null,
ItemName varchar(100),
ItemType varchar(10),
ItemDescription varchar(200)
)
GO


-- Inserting sample records
INSERT INTO ItemDetails
SELECT
a.id,
a.name,
a.xtype,
'ItemDesc'
FROM sys.sysobjects a
CROSS JOIN sys.sysobjects b
GO

INSERT INTO ItemTypes
SELECT distinct
type,
type_desc
FROM sys.objects
GO

-- Review the execution plan.
SELECT
id.ItemId,
id.ItemName,
id.ItemType,
it.ItemTypeDesc as ItemTypeDesc,
id.ItemDescription
FROM ItemDetails id
INNER JOIN ItemTypes it
ON (it.ItemType = id.ItemType)
GO

You can see the index hint with the execution plan; the following is the script for the same.
/*
Missing Index Details from SQLQuery.sql - ReportServer (dba (56))
The Query Processor estimates that implementing the following index could improve the query cost by 89.0255%.
*/

/*
USE [ReportServer]
GO
CREATE NONCLUSTERED INDEX []
ON [dbo].[ItemDetails] ([ItemType])
INCLUDE ([ItemId],[ItemName],[ItemDescription])
GO
*/
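The generated template leaves the index name blank. A filled-in version might look like this (the name IX_ItemDetails_ItemType is my own choice for illustration, not part of the generated script):

```sql
USE [ReportServer]
GO
-- Index name chosen for illustration; the missing-index template leaves it empty
CREATE NONCLUSTERED INDEX [IX_ItemDetails_ItemType]
ON [dbo].[ItemDetails] ([ItemType])
INCLUDE ([ItemId],[ItemName],[ItemDescription])
GO
```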
Apply it and enjoy!

Sunday, 19 July 2015

Insert default values for all columns in table - SQL Server by serverku

Some job colleagues asked me how to insert a row of DEFAULT values into a table where every column is defined with a default property. If the table has at least one column without a default (and no identity), we can easily insert an explicit value there and defaults for the rest. But how can we insert default values for all columns in the table? For that we will see a small demo: we will create a table with all columns defined with defaults and then insert default values into it.
USE DEMO
GO

-- Creating table
IF (OBJECT_ID('DefaultTable','U') > 0)
DROP TABLE DefaultTable
GO

CREATE TABLE DefaultTable
(
TransactionId INT IDENTITY(1,1) NOT NULL,
TransactionName VARCHAR(100) NOT NULL DEFAULT 'Test Transaction',
TransactionDate DATETIME NOT NULL DEFAULT GETDATE(),
TransactionType SMALLINT NOT NULL DEFAULT 0
)
GO

-- Insert Default values for all columns

INSERT DefaultTable
DEFAULT VALUES
GO

-- Reviewing records in table
SELECT
*
FROM DefaultTable
GO

Let us try a few more inserts and review again,
INSERT DefaultTable
DEFAULT VALUES
GO 100

-- Reviewing records in table
SELECT
*
FROM DefaultTable
GO
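For comparison, when only some columns should take their defaults, the DEFAULT keyword can be used per column in an ordinary VALUES list. A quick sketch against the same table:

```sql
-- Supply one column explicitly and let the rest take their defaults
INSERT INTO DefaultTable (TransactionName, TransactionDate, TransactionType)
VALUES ('Manual Transaction', DEFAULT, DEFAULT)
GO
```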

You might experience this type of need.

Sunday, 12 July 2015

Script to get Database Files detail - SQL Server by serverku

The DBA needs to perform day-to-day activities to monitor SQL Servers and databases and confirm everything is OK. Some of these tasks run daily, some weekly and some monthly. Database monitoring and its details are important because databases grow day by day, and accordingly we need to keep an eye on disk space.

For that we should have the database details and statistics such as current file size, growth and maximum size. We could check the database properties and file sizes for each database manually, but a script that checks all or specific databases makes it much easier to collect and monitor these details. Here I will write a simple script which returns the file details for a specific database. Let us drive it: first we will create a database and add a filegroup and secondary files to it.
 
USE MASTER
GO

-- Creating database
CREATE DATABASE DatabASeFileDetails
GO

-- Adding a new filegroup to database
ALTER DATABASE DatabASeFileDetails
ADD FILEGROUP NewFileGroup;
GO

--Adding a new secondary and log files to a database to above created filegroup
ALTER DATABASE DatabASeFileDetails
ADD FILE
(
-- New secondary files added here
NAME = FileGroupDB_Data_1,
FILENAME = 'C:\DatabASeFileDetails_Data_1.ndf',
SIZE = 15 MB,
MAXSIZE = 100 MB,
FILEGROWTH = 5 MB
),
(
-- New log file added here
NAME = FileGroupDB_Log_1,
FILENAME = 'C:\DatabASeFileDetails_Log_1.ldf',
SIZE = 5 MB,
MAXSIZE = 100 MB,
FILEGROWTH = 5 MB
)
TO FILEGROUP NewFileGroup; -- Defining filegroup name here
GO
Now that we have created the database with its new filegroup and files, it is time to run the script we actually want, which returns the database file details.
 
-- Using that database
USE DATABASEFILEDETAILS
GO

SELECT
DB_NAME(DBID) AS DatabaseName,
Name AS LogicalFileName,
CASE
WHEN FILEID = 1
THEN 'Primary File'
WHEN FILEID = 2 THEN 'Log File'
ELSE 'Secondary File'
END AS FileDescription,
FILEGROUP_NAME(groupid) AS FileGroup,
CAST( ((CAST (SIZE AS NUMERIC(18,2) )*8)/1024) AS NUMERIC(18,2)) AS [FileSize(MB)],
CASE status
WHEN 0
THEN 'No growth'
WHEN 2
THEN CAST(CAST(((CAST (growth AS INT )*8)/1024) AS INT) AS VARCHAR(1000)) + ' [growth(MB)]'
ELSE CAST(growth AS VARCHAR(1000)) + ' [growth(%)]'
END AS Growth,
CASE maxsize
WHEN 0
THEN 'No growth'
WHEN -1
THEN 'File will grow until the disk is full'
ELSE CAST(CAST(((CAST (maxsize AS NUMERIC(18,2) )*8)/1024) AS NUMERIC(18,2)) AS VARCHAR(1000))
END AS [MaxFileSize(MB)],
FileName AS PhysicalFilePath
FROM SYS.SYSALTFILES
WHERE DBID > 4
AND DB_NAME(DBID) = 'DatabASeFileDetails'
ORDER BY DBID,FileId

GO
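Note that sys.sysaltfiles is a compatibility view. On SQL Server 2005 and later the same information is available from sys.master_files; a sketch of an equivalent query (column names per that catalog view):

```sql
-- Same file details from the newer catalog view, for all user databases
SELECT
    DB_NAME(database_id) AS DatabaseName,
    name AS LogicalFileName,
    type_desc AS FileType,  -- ROWS or LOG
    CAST(size * 8 / 1024.0 AS NUMERIC(18,2)) AS [FileSize(MB)],
    CASE max_size
        WHEN 0 THEN 'No growth'
        WHEN -1 THEN 'File will grow until the disk is full'
        ELSE CAST(max_size * 8 / 1024 AS VARCHAR(20)) + ' MB'
    END AS MaxFileSize,
    CASE WHEN is_percent_growth = 1
        THEN CAST(growth AS VARCHAR(20)) + ' %'
        ELSE CAST(growth * 8 / 1024 AS VARCHAR(20)) + ' MB'
    END AS Growth,
    physical_name AS PhysicalFilePath
FROM sys.master_files
WHERE database_id > 4
ORDER BY database_id, file_id
GO
```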

In the above snapshot we have compared the script results with the database properties, so we can be sure everything is correct. What are you using? Please add your comments if I missed something.

Sunday, 5 July 2015

Purge SQL Server Logs and Agent Error Logs by serverku

One day when I checked the disk space on the C:\ drive where SQL Server is installed, it was full with no free space available. I checked all the folders and files on C:\, and the culprit turned out to be the SQL Server error log: its size had grown very large and filled the disk. I tried to delete the current error log file to free space, but could not because it was in use by SQL Server. So I applied a script to close the current error log and start a new one.

Normally a new error log file is only created when the SQL Server service restarts. The script below shows how to cycle the active error logs, after which the previous error log files can be removed. Have a look at the screenshots of the SQL Server error log files and Agent error log files before the script runs.


 
-- Using MSDB database to run the command
USE MSDB
GO

-- Removing SQL Server error logs
EXEC SP_CYCLE_ERRORLOG
GO

-- Removing SQL Server Agent error logs
EXEC SP_CYCLE_AGENT_ERRORLOG
GO
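To confirm that a new log was started, the error log can be read back with sp_readerrorlog (an undocumented but long-standing procedure; the first argument is the log number, where 0 is the current log and 1 the most recent archive):

```sql
-- Read the current SQL Server error log (0 = current, 1 = .1 archive, ...)
EXEC sp_readerrorlog 0
GO
```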
 

Now look at the screen below, which shows the new files created. What do you think; have you applied any script or schedule to create new error log files and remove the old ones?

Sunday, 28 June 2015

Cumulative Calculation with some methods - SQL Server by serverku

Sometimes we need custom calculation logic such as aggregation, pivoting and unpivoting. Here we have a similar need: calculating cumulative (running total) data. In the moment we often implement whatever logic comes to mind, which may not be the best or easiest approach and can cause performance issues.

In a previous article I posted about pivoting and unpivoting usage. Here I would like to walk through the ways to calculate cumulative data. We can use any of them, but which is best depends entirely on the query and the table data. We will look at the methods one by one and decide which suits the logic best. Before implementing, we will first create the objects required for the demo.
USE DEMO
GO

-- Creating table
IF (OBJECT_ID('CummulativeData','U') > 0)
DROP TABLE CummulativeData
GO

CREATE TABLE CummulativeData
(
TransactionId Int identity(1,1) not null,
UserAccountId int not null,
UserName varchar(500) not null,
Amount numeric(18,2),
TransactionDate datetime
)

GO

-- Inserting sample records in table
INSERT CummulativeData
(
UserAccountId,
UserName,
Amount,
TransactionDate
)
SELECT 1234, 'ABCUSER', 250.00, GETDATE()
UNION ALL
SELECT 1234, 'ABCUSER', 350.00, GETDATE()
UNION ALL
SELECT 1234, 'ABCUSER', 150.00, GETDATE()
UNION ALL
SELECT 1234, 'ABCUSER', 100.00, GETDATE()
UNION ALL
SELECT 1234, 'ABCUSER', 300.00, GETDATE()
UNION ALL
SELECT 1234, 'ABCUSER', 650.00, GETDATE()
UNION ALL
SELECT 1234, 'ABCUSER', 50.00, GETDATE()
UNION ALL
SELECT 1234, 'ABCUSER', 100.00, GETDATE()
UNION ALL
SELECT 1234, 'ABCUSER', 450.00, GETDATE()

GO

-- Reviewing data from table
SELECT
TransactionId,
UserAccountId,
UserName,
Amount,
TransactionDate
FROM CummulativeData
WHERE UserAccountId = 1234
ORDER BY TransactionId

GO

After creating objects and sample records we have a turn to test drive with all the methods.

#1 way - Using temp table with while loop :
 
-- creating temp table
CREATE TABLE #tempCummulative
(
TransactionId Int ,
UserAccountId int not null,
UserName varchar(500) not null,
Amount numeric(18,2),
TransactionDate datetime ,
CummulativeAmount numeric(18,2)
)
GO

-- variables declaration
DECLARE @cnt int, @LastAmmout numeric(18,2)
SET @cnt = 1
SET @LastAmmout = 0.00

-- While loop start
WHILE((SELECT COUNT(1) FROM CummulativeData WHERE UserAccountId = 1234) >= @cnt)
BEGIN

INSERT INTO #tempCummulative
SELECT
TransactionId,
UserAccountId,
UserName,
Amount,
TransactionDate,
@LastAmmout + Amount
FROM CummulativeData
WHERE UserAccountId = 1234
AND TransactionId = @cnt

SET @LastAmmout = (select CummulativeAmount from #tempCummulative where TransactionId = @cnt)
SET @cnt = @cnt + 1

END
-- While loop end

-- Viewing cummulative data from temp table
SELECT
*
FROM #tempCummulative
GO

#2 way - Adding column in table and while loop :
 

-- Adding column for CummulativeAmount
ALTER TABLE CummulativeData
ADD CummulativeAmount numeric(18,2)
GO

-- declaring variables
DECLARE @cnt int, @LastAmmout numeric(18,2)
SET @cnt = 1
SET @LastAmmout = 0.00

-- while loop start
WHILE((SELECT COUNT(1) FROM CummulativeData WHERE UserAccountId = 1234) >= @cnt)
BEGIN

UPDATE CummulativeData
SET @LastAmmout = @LastAmmout + ISNULL(Amount,0.00),
CummulativeAmount = @LastAmmout
WHERE UserAccountId = 1234
AND TransactionId = @cnt

SET @cnt = @cnt + 1

END
-- while loop end


-- Viewing cumulative data from table
SELECT
*
FROM CummulativeData
GO

#3 way - Common Table Expression (CTE):
 
-- CTE
;WITH cteCummulative
AS
(

SELECT TOP 1
TransactionId,
UserAccountId,
UserName,
Amount,
TransactionDate,
Amount as CummulativeAmount
FROM CummulativeData
WHERE UserAccountId = 1234
ORDER BY TransactionId

UNION ALL

SELECT
c1.TransactionId,
c1.UserAccountId,
c1.UserName,
c1.Amount,
c1.TransactionDate,
CAST(c2.CummulativeAmount + c1.Amount AS numeric(18,2))
FROM CummulativeData c1
INNER JOIN cteCummulative c2
ON (c1.UserAccountId = c2.UserAccountId and c1.TransactionId = c2.TransactionId + 1 )

)

-- Viewing cummulative data from CTE
SELECT
*
FROM cteCummulative 
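
On SQL Server 2012 and later there is a fourth way that usually outperforms the loops above: a windowed SUM with an ORDER BY frame. A sketch against the same table:

```sql
-- #4 way - Windowed aggregate (SQL Server 2012+)
SELECT
    TransactionId,
    UserAccountId,
    UserName,
    Amount,
    TransactionDate,
    SUM(Amount) OVER (PARTITION BY UserAccountId
                      ORDER BY TransactionId
                      ROWS UNBOUNDED PRECEDING) AS CummulativeAmount
FROM CummulativeData
WHERE UserAccountId = 1234
ORDER BY TransactionId
GO
```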

Stay tuned for more.

Sunday, 21 June 2015

PERSISTED Columns with HierarchyId Datatype - SQL Server 2008 by serverku

I think you have already read the earlier articles related to HierarchyId, and I hope you understood the concept and its usage. You have also seen how the HierarchyId data type performs, as I posted a performance review in earlier posts as well.

In those posts you saw some basic methods and functions: how to get hierarchy data levels, the root node and string paths, as well as how to get up-line and down-line nodes. Here I am going to present the same, but as PERSISTED columns, so we do not need to write those function calls at query level every time. Let us see the workaround for that.

How?
We need to create a table and define computed columns over those function calls as PERSISTED. First we will look at the methods to get the hierarchical data without persisted columns, calling the functions at query level. The scripts for the same are as follows.
-- Creating objects
IF (OBJECT_ID('TblHierarchyStructure','U') > 0)
DROP TABLE TblHierarchyStructure
GO

CREATE TABLE TblHierarchyStructure
(
ItemId INT,
ParentItemId INT,
ItemOrder INT,
ItemName VARCHAR(100),
HierarchyNode HIERARCHYID
)

GO

-- Inserting records in tables for the demo
INSERT INTO TblHierarchyStructure
(ItemId,
ParentItemId,
ItemOrder,
ItemName,
HierarchyNode)
SELECT 1,
NULL,
1,
'RootItem',
HierarchyId::Parse('/')
UNION ALL
SELECT 2,
1,
1,
'FirstItem',
HierarchyId::Parse('/1/')
UNION ALL
SELECT 3,
1,
2,
'SecondItem',
HierarchyId::Parse('/2/')
UNION ALL
SELECT 4,
1,
3,
'ThirdItem',
HierarchyId::Parse('/3/')
UNION ALL
SELECT 5,
2,
1,
'FourthItem',
HierarchyId::Parse('/1/1/')
UNION ALL
SELECT 6,
4,
1,
'FifthItem',
HierarchyId::Parse('/3/1/')
UNION ALL
SELECT 7,
5,
1,
'SixthItem',
HierarchyId::Parse('/1/1/1/')
UNION ALL
SELECT 8,
5,
2,
'SeventhItem',
HierarchyId::Parse('/1/1/2/')
UNION ALL
SELECT 9,
5,
3,
'NinthItem',
HierarchyId::Parse('/1/1/3/')
UNION ALL
SELECT 10,
8,
1,
'TenthItem',
HierarchyId::Parse('/1/1/2/1/')

GO

-- Using HierarchyId functions at query level; see the output.
SELECT *,
HierarchyNode.ToString() AS ItemNodeString,
HierarchyNode.GetLevel() AS ItemNodeLevel,
HierarchyNode.GetAncestor(1) AS ParentNode,
HierarchyNode.GetAncestor(1).ToString() AS ParentNodeString
FROM TblHierarchyStructure
GO

Now we will look at the same methods with the columns defined as PERSISTED columns, using the function calls at column level. The scripts for the same are as follows.
-- Creating objects
IF (OBJECT_ID('TblHierarchyStructure','U') > 0)
DROP TABLE TblHierarchyStructure
GO

CREATE TABLE TblHierarchyStructure
(
ItemId INT,
ParentItemId INT,
ItemOrder INT,
ItemName VARCHAR(100),
HierarchyNode HIERARCHYID NOT NULL PRIMARY KEY,
ItemNodeString AS HierarchyNode.ToString() PERSISTED,
ItemNodeLevel AS HierarchyNode.GetLevel() PERSISTED,
ParentNode AS HierarchyNode.GetAncestor(1) PERSISTED,
ParentNodeString AS HierarchyNode.GetAncestor(1).ToString()
)

GO

-- Inserting sample records here
INSERT INTO TblHierarchyStructure
(ItemId,
ParentItemId,
ItemOrder,
ItemName,
HierarchyNode)
SELECT 1,
NULL,
1,
'RootItem',
HierarchyId::Parse('/')
UNION ALL
SELECT 2,
1,
1,
'FirstItem',
HierarchyId::Parse('/1/')
UNION ALL
SELECT 3,
1,
2,
'SecondItem',
HierarchyId::Parse('/2/')
UNION ALL
SELECT 4,
1,
3,
'ThirdItem',
HierarchyId::Parse('/3/')
UNION ALL
SELECT 5,
2,
1,
'FourthItem',
HierarchyId::Parse('/1/1/')
UNION ALL
SELECT 6,
4,
1,
'FifthItem',
HierarchyId::Parse('/3/1/')
UNION ALL
SELECT 7,
5,
1,
'SixthItem',
HierarchyId::Parse('/1/1/1/')
UNION ALL
SELECT 8,
5,
2,
'SeventhItem',
HierarchyId::Parse('/1/1/2/')
UNION ALL
SELECT 9,
5,
3,
'NinthItem',
HierarchyId::Parse('/1/1/3/')
UNION ALL
SELECT 10,
8,
1,
'TenthItem',
HierarchyId::Parse('/1/1/2/1/')

GO

-- We have not using HierarchyId functions at query level
-- and using them at columns level as Persisted
SELECT
*
FROM TblHierarchyStructure

GO
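One practical benefit of PERSISTED computed columns is that they can be indexed, so level-based filters no longer evaluate the function per row. A sketch (the index name is my own, for illustration):

```sql
-- Index the persisted level column so level filters can use an index seek
CREATE NONCLUSTERED INDEX IX_TblHierarchyStructure_Level
ON TblHierarchyStructure (ItemNodeLevel)
GO

-- A level filter can now use the index
SELECT ItemId, ItemName, ItemNodeString
FROM TblHierarchyStructure
WHERE ItemNodeLevel = 2
GO
```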

I hope you liked this post about PERSISTED columns with the HierarchyId data type. Share your experience if you have used it this way.

Wednesday, 10 June 2015

Move Node to other place with HierarchyId Data Type - SQL Server 2008 by serverku

After writing several posts related to the HierarchyId data type, we are finally moving to the last topics of the HierarchyId functions: here we will see the movement of hierarchy nodes. Let's start with the script that creates data for the demo.
-- Create database and table
CREATE DATABASE HierarchyDB

GO

USE HierarchyDB

GO

IF ( Object_id('HierarchyTab') > 0 )
DROP TABLE HierarchyTab

GO

CREATE TABLE HierarchyTab
(
NodeId INT IDENTITY(1, 1)
,NodeDepth VARCHAR(100) NOT NULL
,NodePath HIERARCHYID NOT NULL
,NodeDesc VARCHAR(100)
)

GO

-- Creating constraint on hierarchy data type.
ALTER TABLE HierarchyTab ADD CONSTRAINT U_NodePath UNIQUE CLUSTERED (NodePath)

GO
-- Inserting data in above created table.
INSERT INTO HierarchyTab(NodeDepth,NodePath,NodeDesc)
VALUES
('1',HIERARCHYID::Parse('/'),'Node-1'),
('1.1',HIERARCHYID::Parse('/1/'),'Node-2'),
('1.1.1',HIERARCHYID::Parse('/1/1/'),'Node-3'),
('1.1.2',HIERARCHYID::Parse('/1/2/'),'Node-4'),
('1.2',HIERARCHYID::Parse('/2/'),'Node-5'),
('1.2.1',HIERARCHYID::Parse('/2/1/'),'Node-6'),
('1.2.2',HIERARCHYID::Parse('/2/2/'),'Node-7'),
('1.2.2.1',HIERARCHYID::Parse('/2/2/1/'),'Node-8'),
('1.2.2.1.1',HIERARCHYID::Parse('/2/2/1/1/'),'Node-9'),
('1.2.2.1.2',HIERARCHYID::Parse('/2/2/1/2/'),'Node-10'),
('1.3',HIERARCHYID::Parse('/3/'),'Node-11'),
('1.3.1',HIERARCHYID::Parse('/3/1/'),'Node-12'),
('1.3.2',HIERARCHYID::Parse('/3/2/'),'Node-13'),
('1.4',HIERARCHYID::Parse('/4/'),'Node-14')

GO
The logical image of the hierarchy data is as follows,


(Click on image to enlarge)
Now we will move a hierarchy node and its down-line from one place to another.

But how to move?


GetReparentedValue(OldNode, NewNode) : It returns a node's value as if it were re-parented from under OldNode to under NewNode; applied with an UPDATE it moves a node, itself and its whole down-line, to another place. Let's see what the new place of hierarchy id "1" and its down-line nodes should be after moving.
-- GetReparentedValue()
SELECT
NodePath.GetLevel() AS NodeLevel,
NodePath.ToString() AS NodeCurrentPath,
NodePath.GetReparentedValue(HIERARCHYID::Parse('/1/'), HIERARCHYID::Parse('/4/1/')).ToString()
AS NewNodePath,
-- The above line gives the new node path for id 1 and its down-line, i.e. where each node will be placed.
NodeId,
NodeDepth,
NodePath,
NodeDesc
FROM HierarchyTab
WHERE NodePath.IsDescendantOf(HIERARCHYID::Parse('/1/')) = 1

GO

(Click on image to enlarge)


Now run the query below to move hierarchy id "1" and its down-line, and then see the logical image of the data.
UPDATE HierarchyTab
SET NodePath = NodePath.GetReparentedValue(HIERARCHYID::Parse('/1/'), HIERARCHYID::Parse('/4/1/')),
NodeDepth = '4.' + NodeDepth
WHERE NodePath.IsDescendantOf(HIERARCHYID::Parse('/1/')) = 1

GO

(Click on image to enlarge)

Hope you understood it well and get to use this feature for hierarchical data. Please comment on how you are using the HierarchyId data type and its functions.

Tuesday, 9 June 2015

Is Child Node? - With HierarchyId Data Type in SQL Server 2008 by serverku

I have introduced various methods in my earlier posts, like how to get the levels of hierarchy nodes, the up-line and down-line of nodes, the string paths of nodes, and the next available nodes.

I hope you have read all of them and liked them too. In this post I am presenting how we can find out whether a node is a child of a particular node or not. Before going ahead, run the script and see the output of the hierarchy structure.
-- Create database and table
CREATE DATABASE HierarchyDB

GO

USE HierarchyDB

GO

IF ( Object_id('HierarchyTab') > 0 )
DROP TABLE HierarchyTab

GO

CREATE TABLE HierarchyTab
(
NodeId INT IDENTITY(1, 1)
,NodeDepth VARCHAR(100) NOT NULL
,NodePath HIERARCHYID NOT NULL
,NodeDesc VARCHAR(100)
)

GO

-- Creating constraint on hierarchy data type.
ALTER TABLE HierarchyTab ADD CONSTRAINT U_NodePath UNIQUE CLUSTERED (NodePath)

GO
-- Inserting data in above created table.
INSERT INTO HierarchyTab(NodeDepth,NodePath,NodeDesc)
VALUES
('1',HIERARCHYID::Parse('/'),'Node-1'),
('1.1',HIERARCHYID::Parse('/1/'),'Node-2'),
('1.1.1',HIERARCHYID::Parse('/1/1/'),'Node-3'),
('1.1.2',HIERARCHYID::Parse('/1/2/'),'Node-4'),
('1.2',HIERARCHYID::Parse('/2/'),'Node-5'),
('1.2.1',HIERARCHYID::Parse('/2/1/'),'Node-6'),
('1.2.2',HIERARCHYID::Parse('/2/2/'),'Node-7'),
('1.2.2.1',HIERARCHYID::Parse('/2/2/1/'),'Node-8'),
('1.2.2.1.1',HIERARCHYID::Parse('/2/2/1/1/'),'Node-9'),
('1.2.2.1.2',HIERARCHYID::Parse('/2/2/1/2/'),'Node-10'),
('1.3',HIERARCHYID::Parse('/3/'),'Node-11'),
('1.3.1',HIERARCHYID::Parse('/3/1/'),'Node-12'),
('1.3.2',HIERARCHYID::Parse('/3/2/'),'Node-13'),
('1.4',HIERARCHYID::Parse('/4/'),'Node-14')

GO
Here is the logical image of the above data are as follows,


(Click on image to enlarge)

So our topic here: is the node a child?

IsDescendantOf() : This function returns 1 if the node is a descendant of the given node and 0 if it is not. Note that a node counts as a descendant of itself, so the function also returns 1 for the node you pass in.

Let's run the script with an example showing which nodes are children of hierarchy node "1".
SELECT 
NodePath.GetLevel() AS NodeLevel,
NodePath.ToString() AS NodeStringPath,
NodePath.GetAncestor(1).ToString() AS ParentNode,
NodePath.IsDescendantOf(HIERARCHYID::Parse('/1/')) AS IsDescendant,
-- Above line will return 1 or 0
NodeId,
NodeDepth,
NodePath,
NodeDesc
FROM HierarchyTab

GO
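The same function also works as a WHERE predicate to return only the down-line of node "1". A sketch (since IsDescendantOf also matches the node itself, it is excluded explicitly here):

```sql
-- Only the descendants of '/1/', excluding the node itself
SELECT
    NodePath.ToString() AS NodeStringPath,
    NodeDesc
FROM HierarchyTab
WHERE NodePath.IsDescendantOf(HIERARCHYID::Parse('/1/')) = 1
  AND NodePath <> HIERARCHYID::Parse('/1/')
GO
```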

You can see the above image and check parent node for appropriate child nodes. Hope you like this. Stay tuned for more.


Monday, 8 June 2015

Next Available Node with HierarchyId Data Type - SQL Server 2008 by serverku


I have posted about some of these functions with examples and demonstrations in earlier posts: an overview of the HierarchyId data type, some basic functions, and even the up-line and down-line of hierarchy nodes, which you can read from the links below.

Here I will present how we can get the next available node position with a HierarchyId data type function. Let's first create the hierarchy data structure with the script.
-- Create a database and table
CREATE DATABASE HierarchyDB

GO

USE HierarchyDB

GO

IF ( Object_id('HierarchyTab') > 0 )
DROP TABLE HierarchyTab

GO

CREATE TABLE HierarchyTab
(
NodeId INT IDENTITY(1, 1)
,NodeDepth VARCHAR(100) NOT NULL
,NodePath HIERARCHYID NOT NULL
,NodeDesc VARCHAR(100)
)

GO

-- Creating constraint on hierarchy data type.
ALTER TABLE HierarchyTab ADD CONSTRAINT U_NodePath UNIQUE CLUSTERED (NodePath)

GO
-- Inserting data in above created table.
INSERT INTO HierarchyTab(NodeDepth,NodePath,NodeDesc)
VALUES
('1',HIERARCHYID::Parse('/'),'Node-1'),
('1.1',HIERARCHYID::Parse('/1/'),'Node-2'),
('1.1.1',HIERARCHYID::Parse('/1/1/'),'Node-3'),
('1.1.2',HIERARCHYID::Parse('/1/2/'),'Node-4'),
('1.2',HIERARCHYID::Parse('/2/'),'Node-5'),
('1.2.1',HIERARCHYID::Parse('/2/1/'),'Node-6'),
('1.2.2',HIERARCHYID::Parse('/2/2/'),'Node-7'),
('1.2.2.1',HIERARCHYID::Parse('/2/2/1/'),'Node-8'),
('1.2.2.1.1',HIERARCHYID::Parse('/2/2/1/1/'),'Node-9'),
('1.2.2.1.2',HIERARCHYID::Parse('/2/2/1/2/'),'Node-10'),
('1.3',HIERARCHYID::Parse('/3/'),'Node-11'),
('1.3.1',HIERARCHYID::Parse('/3/1/'),'Node-12'),
('1.3.2',HIERARCHYID::Parse('/3/2/'),'Node-13'),
('1.4',HIERARCHYID::Parse('/4/'),'Node-14')

GO
The hierarchy data structure is pictured below.


(Click on image to enlarge)

How do we find it?

GetDescendant() : Called on a parent node, this function returns a child node value where a new node can be placed.

1. GetDescendant(NULL, NULL) : returns a default first child node.
2. GetDescendant(LeftNode, NULL) : returns a child node greater than LeftNode.
3. GetDescendant(NULL, RightNode) : returns a child node less than RightNode.

Run the following script and see the output.
-- GetDescendant()
SELECT
NodePath.GetLevel() AS NodeLevel,
NodePath.ToString() AS NodeStringPath,
NodePath.GetDescendant(NULL,NULL).ToString() AS NextDefaultNode,
-- Above line will get default node.
NodeId,
NodeDepth,
NodePath,
NodeDesc
FROM HierarchyTab

GO


Now looking at one hierarchy node, id 13:
SELECT 
NodePath.GetLevel() AS NodeLevel,
NodePath.ToString() AS NodeStringPath,
NodePath.GetDescendant(NULL,NULL).ToString() AS NextDefaultNode,
NodePath.GetDescendant(HIERARCHYID::Parse('/3/2/1/'),NULL).ToString() AS NextRightNode,
NodePath.GetDescendant(NULL,HIERARCHYID::Parse('/3/2/2/')).ToString() AS NextLeftNode,
NodeId,
NodeDepth,
NodePath,
NodeDesc
FROM HierarchyTab
WHERE NodeId = 13

GO

(Click on image to enlarge)

This is what I want to share here and hope you like it.
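
Putting GetDescendant to work, a new node can be inserted under a parent at the next free position. A sketch against the same table (note that in a concurrent system two sessions could pick the same child value, so production code would need extra locking):

```sql
-- Insert a new node as the last child of '/3/2/'
DECLARE @parent HIERARCHYID = HIERARCHYID::Parse('/3/2/'),
        @lastChild HIERARCHYID

-- Find the current right-most child of the parent (NULL if none)
SELECT @lastChild = MAX(NodePath)
FROM HierarchyTab
WHERE NodePath.GetAncestor(1) = @parent

INSERT INTO HierarchyTab (NodeDepth, NodePath, NodeDesc)
VALUES ('1.3.2.1', @parent.GetDescendant(@lastChild, NULL), 'Node-15')
GO
```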