7 Simple Performance Tuning Tips And Tricks In SQL Server

We come across many applications in our daily lives, and the ones we admire most are those that perform well. The single biggest factor in making an application feel fast is usually the database, which plays a vital role in overall performance. If the front end is well developed but the database is sluggish, the whole development effort goes to waste. This is where performance tuning comes into the picture. When improving a query, start with the basics: it is always best practice to exhaust the simplest tuning steps before stepping into advanced indexing strategies.

While working with junior colleagues on query tuning, I have noticed that their first instinct is usually to add an index. To be honest, adding an index can just as easily degrade performance. I will cover performance tuning for indexes in a future post.

Through my own experience improving queries, I have found that these 7 simple tips and tricks help to a great extent. Let's look at 7 performance tuning tips and tricks for a SQL Server database that can help you avoid common problems.

1. Avoid SELECT * FROM

The SELECT * FROM pattern is a performance sin in SQL. It is a genuinely bad choice if you are seriously building a full-fledged application for a high-traffic interface. SELECT * works against the SQL optimizer: because every column must be returned, the optimizer gets little chance to satisfy the query from a narrow underlying index.

Area of Tuning: 

Select only the column names needed for data retrieval. This helps avoid disk, network, and memory contention on your server.
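As an illustration, here is the shape of the fix; the dbo.Orders table and its columns are hypothetical. Listing only the needed columns gives the optimizer a chance to answer the query from a narrow covering index:

```sql
-- Bad: drags every column across disk, network, and memory
SELECT * FROM dbo.Orders WHERE OrderDate >= '2024-01-01';

-- Better: only the columns the screen or report actually needs
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '2024-01-01';
```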

2. Caching In Application

If your application is hit by many users across the globe and every data retrieval goes directly to the server, that is a bad idea. Each round trip to the database adds disk I/O and network contention, and over time this can noticeably slow the server down.

Area of Tuning: 

Cache data in application memory to avoid constant queries against the database. This caching method works best for small, infrequently updated data sets such as lookup values.

3. Keep Transactions Short

This happens in many shops where queries are ad hoc. These ad hoc queries may be written to fix application problems, retrieve data, or feed reports. Some reporting queries can run so long that they consume the entire server's CPU, ruining performance for everyone.

Area of Tuning: 

Try to shorten query execution time and keep transactions simple. Long-running queries hold locks on tables longer, and the longer the locks are held, the longer the next user waits to access the table. Besides lock waits, long-running transactions can also spike CPU usage. Use a precise locking strategy to overcome blocking in SQL.
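One practical way to keep transactions short is to break a big modification into small batches, so no single transaction holds locks for long. A sketch, assuming a hypothetical dbo.AuditLog table and a 90-day retention rule:

```sql
-- Purge old rows in small transactions instead of one giant DELETE
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.AuditLog
    WHERE LoggedAt < DATEADD(DAY, -90, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to purge
END
```

Each 5000-row DELETE commits on its own, so blocked readers get through between batches.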

4. Avoid NOLOCK Hint

It is common for DBAs working in SQL environments to use the NOLOCK hint on big tables on a production server to avoid blocking. However, NOLOCK breaks the fundamental rules of data integrity: it can produce missing or extra rows in result sets, which is a serious correctness problem.

Area of Tuning: 

Microsoft provides an alternative to the NOLOCK hint: snapshot-based isolation. Enabling the READ_COMMITTED_SNAPSHOT database option lets readers see a consistent committed snapshot instead of blocking on writers, which solves most of this overhead. It also eliminates the missing or extra rows in data sets.
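Enabling the option looks like this. A sketch: YourDatabase is a placeholder, and note that the READ_COMMITTED_SNAPSHOT change needs exclusive access to the database to complete:

```sql
-- Readers now see the last committed version instead of blocking on writers
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Alternatively, allow full snapshot isolation and opt in per session
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
```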

5. Use Variable And Parameter Data Types Same As Columns

It may surprise you how much a data type mismatch can slow a query down. While testing a data import through a stored procedure on one of my optimization projects, I found a mismatch between the declared variable's data type and the column's. When data moves between a variable and a column of a different type, implicit conversions happen on every row, and this can hurt the SQL optimizer significantly.

Area of Tuning: 

Always keep variable and parameter data types identical to the corresponding column data types. This helps avoid table scans and eliminates implicit data conversions.
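A small sketch of the problem, assuming a hypothetical dbo.Patients table whose LastName column is VARCHAR(50):

```sql
-- Mismatch: the NVARCHAR parameter forces an implicit conversion of the
-- VARCHAR column on every row, which can turn an index seek into a scan
DECLARE @LastName NVARCHAR(50) = N'Smith';
SELECT PatientID FROM dbo.Patients WHERE LastName = @LastName;

-- Match the column's type exactly and the seek comes back
DECLARE @LastName2 VARCHAR(50) = 'Smith';
SELECT PatientID FROM dbo.Patients WHERE LastName = @LastName2;
```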

6. Avoid Usage Of DISTINCT, ORDER BY Or UNION Unnecessarily

It is very common to eliminate duplicates using DISTINCT, but DISTINCT can be a performance killer. Likewise, ORDER BY forces the optimizer to sort your result set, and sorting is a time-consuming job. UNION is a simple way to combine results, but it also removes duplicates along the way, which can bog the server down over time.

Area of Tuning:

DISTINCT: Instead of DISTINCT, try a GROUP BY clause at the end of your query to eliminate duplicates. The reasoning is that in some cases SQL Server does more work internally for DISTINCT; the difference is most visible on big queries.

ORDER BY: It is your call whether you need to sort at all; often a GROUP BY or a BETWEEN range predicate gets you the rows you need without an expensive sort. Keep the query logic as lean as possible.

UNION: Please refer to the Advantages of UNION ALL Over UNION for a replacement.
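To make the DISTINCT-versus-GROUP-BY swap concrete, here is a sketch on a hypothetical dbo.Sales table. Both queries return the same de-duplicated list, so you can compare their execution plans directly on your own data:

```sql
SELECT DISTINCT CustomerID FROM dbo.Sales;

-- Same result set; compare the plans on a large table
SELECT CustomerID FROM dbo.Sales GROUP BY CustomerID;
```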

7. Minimize CURSOR and WHILE LOOP Usage

Whether to use CURSORs and WHILE loops may come down to a database developer's or DBA's personal preference. But in my experience, their performance degrades over time: a CURSOR or WHILE loop processes data row by row, which is a time-consuming task. If your application needs fast results, building it around CURSORs and WHILE loops is a bad idea.

Note: CURSOR/WHILE loop usage can still perform well on small data sets.

Area of Tuning: 

For per-database administrative loops, try the sp_MSforeachdb system stored procedure. Some junior folks avoid sp_MSforeachdb because its internal coding feels tedious, but over the long run of your application it pays off.
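For reference, a minimal sp_MSforeachdb call looks like this; the ? placeholder is replaced with each database name in turn. Note that this procedure is undocumented, so treat its availability on your build as an assumption:

```sql
-- Report the recovery model of every database on the instance
EXEC sp_MSforeachdb
    'SELECT ''?'' AS DatabaseName,
            DATABASEPROPERTYEX(''?'', ''Recovery'') AS RecoveryModel;';
```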


Kill A Session WITH STATUSONLY In SQL Server

KILL is one of the commands those of us on the database side use on a regular basis. It is an understandable reaction for some SQL users to conclude that "KILL using WITH STATUSONLY is not killing the session".

As the name suggests, WITH STATUSONLY only shows the status of the session ID passed along with the KILL command. Moreover, a status is shown for the session ID only when that session is in the rollback state.

For example, imagine you killed a long-running ALTER statement: the session enters rollback, and WITH STATUSONLY reports the rollback's progress.

Let's check a simple example of killing a session WITH STATUSONLY.

I created a table named "Kill_test" and bulk-imported records into it. After inserting the records into the "Kill_test" table, I select the data as:

SELECT * FROM Kill_test

While the data is being selected, I simultaneously run the KILL WITH STATUSONLY command below. In my case, the session ID is 110.

KILL 110 WITH STATUSONLY

In this case, we notice an error stating that the status report cannot be obtained because no rollback operation is in progress for that process ID.

This happens because the session we targeted with KILL (spid) WITH STATUSONLY was, in our case, running a SELECT command.

The WITH STATUSONLY gathers and shows status of the session only when the session is in the rollback state.

Fix and Bottomline:

If you feel that your session is not being killed by the WITH STATUSONLY command, that is a misconception: WITH STATUSONLY never kills anything. Use KILL (spid) alone to finish the job.

KILL (spid)


You may also refer: "How to kill a SPID in SQL Server"

For your testing purposes

Try creating a clustered index on a heap table (one without a clustered index) containing a huge number of records. Make sure to run this only in a development environment. After a minute of execution, kill the session; the killed session will be in the KILLED/ROLLBACK state. Now run KILL (spid) WITH STATUSONLY again to check the current status of the killed session.
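The test sequence can be sketched as follows; the session ID 110 and the index and column names are placeholders:

```sql
-- Session A: start a long-running index build on the heap
CREATE CLUSTERED INDEX IX_Kill_test ON dbo.Kill_test (id);

-- Session B: after a minute, kill it
KILL 110;

-- Session B: the session is now in KILLED/ROLLBACK, so this
-- reports the rollback's progress
KILL 110 WITH STATUSONLY;
```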

The Business Intelligence Semantic Model In SQL Server

The Business Intelligence Semantic Model (BISM) is the latest model created by Microsoft to support all the applications in the BI stack. The intent of the model is to provide efficient, rich, and scalable analytical capabilities. An important highlight of BISM is its integration of data from heterogeneous sources: LOB applications, relational databases, and unconventional sources such as the cloud, Excel, or text files. Further, BISM has a three-layer architecture: the data model, business logic and queries, and data access.

Data Model:

Developers or analysts working in the BISM architecture can choose either a multidimensional data model or a tabular one. The model can be hosted either on an Analysis Services server or in PowerPivot.

Business Logic and Queries:

Analysts and developers have another choice here: Multidimensional Expressions (MDX) or Data Analysis Expressions (DAX).

MDX – MDX is built on multidimensional concepts and serves as the query language for online analytical processing (OLAP) databases. It can also be used for calculations, much like spreadsheet formulas.

DAX – DAX is built on tabular concepts and is the native query language for Microsoft PowerPivot, Power BI, and SQL Server Analysis Services (SSAS). Many formulas familiar from Excel are also used in DAX. The purpose of DAX is to provide simple, reliable, and flexible calculations over the PowerPivot and SSAS tabular models.

Data Access Layer:

The data access layer merges data from multiple sources: relational databases such as a data warehouse, files, and line-of-business (LOB) applications. The data can either be served locally or passed through directly from the data sources. Serving locally is called "cached mode" and the direct option is called "pass-through mode".

Cached Mode: Cached mode can use either of two storage engines: MOLAP, which supports the multidimensional model, or the newer "VertiPaq" engine.

Pass-through Mode: As the name suggests, pass-through mode forwards queries to the data source without storing the data locally. ROLAP and DirectQuery are the pass-through counterparts of the cached-mode engines.

Interview questions you may be interested in:

"Difference Between Clustered Index And Non-Clustered Index In SQL Server"

"Difference Between Shared Lock, Exclusive Lock And Update Lock In SQL Server"

Using Dynamic Pivot Tables In SQL Server

The PIVOT relational operator is one of the best features SQL Server offers. PIVOT rotates a table, turning the unique values of one column into multiple columns. Many reporting companies, data warehouses, and analytical consultancies use PIVOT in their work. A simple pivot just rotates values into a fixed set of columns; if the values for the pivot column need to be determined automatically, a dynamic PIVOT is the best option.

We are going to learn about using the dynamic pivot in our SQL tables.

In my example, I create a table and insert a list of table fragmentation records. Let's walk through the steps of performing a dynamic PIVOT.

Step 1: Insert 10 random database fragmentation records into the #1 temp table

select * from #1
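For context, the #1 temp table in this walkthrough could be created along these lines; the column names are inferred from the pivot queries in this post, and the sample values are made up:

```sql
-- Shape of the #1 temp table assumed by the pivot queries
CREATE TABLE #1 (
    DatabaseName  SYSNAME,
    TableName     SYSNAME,
    [date]        DATE,
    Fragmentation DECIMAL(5, 2)
);

INSERT INTO #1 VALUES ('MyDB', 'Orders',   '2024-01-01', 35.20);
INSERT INTO #1 VALUES ('MyDB', 'Invoices', '2024-01-01', 12.85);
```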

Step 2: Implementing a dynamic PIVOT on #1 temp table

We pivot on the [date] column, applying the MAX function to the Fragmentation column.

DECLARE @Pivot_Column [nvarchar](max);
DECLARE @Query [nvarchar](max);
SELECT @Pivot_Column = COALESCE(@Pivot_Column + ',', '') + QUOTENAME([date]) FROM
(SELECT DISTINCT [date] FROM #1) Tab
SELECT @Query = 'SELECT TableName, DatabaseName, ' + @Pivot_Column + ' FROM
(SELECT TableName, [date], DatabaseName, Fragmentation FROM #1) Tab1
PIVOT (MAX(Fragmentation) FOR [date] IN (' + @Pivot_Column + ')) AS Tab2
ORDER BY Tab2.TableName'
EXEC (@Query)

Ordering Dynamic PIVOT column

Currently, I have only one date value in my temp table for all 10 records. But imagine you need to pivot dynamically over multiple date values and they should come out in order. Then an ORDER BY clause should be added when building the pivot column list, as shown in the code below.

DECLARE @Pivot_Column [nvarchar](max);
DECLARE @Query [nvarchar](max);
SELECT @Pivot_Column = COALESCE(@Pivot_Column + ',', '') + QUOTENAME([date]) FROM
(SELECT DISTINCT [date] FROM #1) Tab ORDER BY [date]
SELECT @Query = 'SELECT TableName, DatabaseName, ' + @Pivot_Column + ' FROM
(SELECT TableName, [date], DatabaseName, Fragmentation FROM #1) Tab1
PIVOT (MAX(Fragmentation) FOR [date] IN (' + @Pivot_Column + ')) AS Tab2
ORDER BY Tab2.TableName'
EXEC (@Query)

Find Last Update Statistics Date In SQL Server

I was working to improve a query's performance and found that the statistics on the table were outdated. Statistics play an important role in query performance; if you are a DBA or database developer, "update statistics" is a term you hear every day. One handy SQL Server feature, Agent jobs, lets us keep statistics updated on a schedule.

Today, we will check the last updated statistics date using the system view sys.indexes. Before that, let's check the indexes on the table in question.

I have created a dummy table named "Hospital". I check the indexes on that table with the command below.

sp_helpindex Hospital

[Image: Last Update Statistics SQL Server 1]

Once you have the list of indexes from the sp_helpindex command, let's check the last update statistics date with the query below.

SELECT name AS Index_Name,
STATS_DATE(OBJECT_ID, index_id) AS Last_UpdateStats
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('your table name')

[Image: Last Update Statistics SQL Server 2]

NOTE: If you created a new table and see NULL values in the date field, SQL Server has never created a statistics blob for your table. Statistics blobs are not created for new tables, empty tables, or filtered statistics whose predicate returns no rows; in these cases, STATS_DATE returns NULL.
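If statistics turn out to be missing or stale, you can refresh them manually; a sketch against the Hospital table used above:

```sql
-- Rebuild all statistics on the table by reading every row
UPDATE STATISTICS dbo.Hospital WITH FULLSCAN;
```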

If any of your statistics do not match an index on the table, they appear in sys.stats instead of sys.indexes.

SELECT s.stats_id AS Stats_ID, s.[name] AS Stats_Name,
STATS_DATE(s.object_id, s.stats_id) AS [Stats_Date]
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('your table name');

Compare Columns Of Two Tables In SQL Server

I was working on a project that required checking the columns of one table against another to make sure the structures matched. As DBAs working across multiple databases, we tend to use the sp_help system procedure, or simply press ALT + F1, to check a table's structure. But if we must check the structure of many tables, such manual methods should be ruled out.

I have a quick script to help you compare the columns of two tables using information_schema (the system information schema views).

A simple format to compare columns of two tables:

SELECT db_name() AS DBNAME,
       column_name
FROM information_schema.columns
WHERE table_name = 'table1'
AND column_name NOT IN
( SELECT column_name
  FROM information_schema.columns
  WHERE table_name = 'table2' )


In my example, I have created two tables as test1 and test2. 

Table structure - test1  looks like:

[Image: Compare Columns Two Tables SQL Server 1]

Table structure - test2  looks like:

[Image: Compare Columns Two Tables SQL Server 2]

Now, let's run the query to check the mismatch between the two tables and view its output.

SELECT db_name() AS DBNAME,
       column_name
FROM information_schema.columns
WHERE table_name = 'test1'
AND column_name NOT IN
( SELECT column_name
  FROM information_schema.columns
  WHERE table_name = 'test2' )

[Image: Compare Columns Two Tables SQL Server 3]

The result above gives us the column missing from test2 when compared with the test1 table.
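An equivalent set-based form of the comparison uses EXCEPT; this sketch returns the columns present in test1 but missing from test2:

```sql
SELECT column_name FROM information_schema.columns WHERE table_name = 'test1'
EXCEPT
SELECT column_name FROM information_schema.columns WHERE table_name = 'test2';
```

Swap the two SELECTs to find columns present only in test2.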

FlushCache Message In SQL Server

The FlushCache message came to my notice when my scheduled backup jobs failed with the message "FlushCache: cleaned up bufs". FlushCache is the routine SQL Server performs to carry out a checkpoint operation, and it is a situation database administrators do not encounter frequently.

[Image: FlushCache Message SQL Server 3]

Imagine trying to fetch a webpage, failing for quite a while, and finally getting a "404 – Page not found" error. The situation here is comparable: SQL Server takes regular checkpoints, which write dirty pages from memory to disk and record the event in the transaction log. The basic intent of the checkpoint is to reduce recovery time in the event of a system failure.
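For reference, a checkpoint can also be issued manually, and on SQL Server 2012 and later the target recovery time that drives indirect checkpoints is configurable (YourDatabase is a placeholder):

```sql
-- Flush dirty pages of the current database to disk now
CHECKPOINT;

-- Tune how aggressively indirect checkpoints run (SQL Server 2012+)
ALTER DATABASE YourDatabase SET TARGET_RECOVERY_TIME = 60 SECONDS;
```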

Further, the FlushCache message is recorded in the SQL Server error log only when Trace Flag 3504 is enabled. Trace Flag 3504 is responsible for writing checkpoint internal activity information to the SQL Server error log.

Prior to SQL Server 2012, Trace Flag 3504 had to be enabled for these messages to appear. In SQL Server versions after 2012, the operation can be recorded in the error log by default. So, if you are not seeing the messages without the Trace Flag, here is a guide for you.

You can check the Trace Flag status using the below command:

use <your database name>

dbcc tracestatus(-1)

You can enable the Trace Flag with the command:

dbcc TRACEON( 3504, -1)

Once the Trace Flag is enabled, your SQL system is now ready to record the FlushCache message in the SQL Server error log. Additionally, once the Trace Flag is enabled, you can confirm them by again using my first command – dbcc tracestatus(-1)  


Normally your system should be fine, but the message can appear when SQL Server is overloaded. In most cases, the FlushCache condition resolves by itself. If you keep seeing it, please follow the steps below:

1. Check all the disk drives and confirm that none is filled over 85% (a rule of thumb from my experience). Clear some space if needed.
2. Run a complete check of the SQL statements running on your system and identify the query driving the I/O.
3. Run a complete throughput check of your system's performance.

In the scenarios I have faced, the steps above settle the system down. If you face additional errors alongside FlushCache that you cannot resolve, please leave a comment and I will respond with possible fixes.

User-Defined Function To Split String In SQL Server

Splitting strings is a commonly required piece of functionality when building an application. A user-defined function is useful if you are running a version older than SQL Server 2016. The main use of the function is to convert a single delimited string into a table of rows. The string can be delimited by a comma or any special character appropriate to your data.

Further, if you are using SQL Server 2016 or above, please use the built-in STRING_SPLIT function for better performance. Additionally, if STRING_SPLIT raises the error 'Invalid Object Name STRING_SPLIT', please refer to the linked article for the fix.
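On SQL Server 2016 and later (with database compatibility level 130 or higher), the built-in function does the whole job in one line:

```sql
-- Returns one row per fruit in a column named "value"
SELECT value
FROM STRING_SPLIT('mango,orange,pineapple', ',');
```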

Comma-separated strings are the most common case. Below, we will look at a user-defined function accepting only comma delimiters as well as one accepting any special character between the string values.

Split String Function (Accepting the only comma delimiters)

Step 1: Create a function as below:

CREATE FUNCTION Fn_split_string_OnlyComma
(@in_string VARCHAR(MAX), @delimiter VARCHAR(1))
RETURNS @list TABLE (tab_splitstring VARCHAR(100))
AS
BEGIN
    WHILE LEN(@in_string) > 0
    BEGIN
        INSERT INTO @list (tab_splitstring)
        SELECT LEFT(@in_string, CHARINDEX(@delimiter, @in_string + ',') - 1)
        SET @in_string = STUFF(@in_string, 1, CHARINDEX(@delimiter, @in_string + @delimiter), '')
    END
    RETURN
END

Step 2: Select from the function – Fn_split_string_OnlyComma

select * from Fn_split_string_OnlyComma ('mango,orange,pineapple',',')

Note: Using any delimiter other than a comma with this function will cause an error.

Split String Function (Accepting any special character delimiters)

Step 1: Create a function as below:

CREATE FUNCTION [dbo].[fn_Split_String_AnySCharacter]
(@string NVARCHAR(MAX), @delimiter CHAR(1))
RETURNS @output TABLE (splitdata NVARCHAR(MAX))
AS
BEGIN
    DECLARE @start INT, @end INT
    SELECT @start = 1, @end = CHARINDEX(@delimiter, @string)
    WHILE @start < LEN(@string) + 1
    BEGIN
        IF @end = 0
            SET @end = LEN(@string) + 1
        INSERT INTO @output (splitdata)
        VALUES (SUBSTRING(@string, @start, @end - @start))
        SET @start = @end + 1
        SET @end = CHARINDEX(@delimiter, @string, @start)
    END
    RETURN
END
Step 2: Select from the function - fn_Split_String_AnySCharacter

Examples of splitting the string with ‘#’ special character:

select * from [fn_Split_String_AnySCharacter] ('mango#orange#pineapple','#')

[Image: User Defined Function Split String SQL Server 2]

Examples of splitting the string with ‘?’ special character:

select * from [fn_Split_String_AnySCharacter] ('mango?orange?pineapple','?')

[Image: User Defined Function Split String SQL Server 4]

Finally, an example of splitting the string with ‘,’ special character:

select * from [fn_Split_String_AnySCharacter] ('mango,orange,pineapple',',')

[Image: User Defined Function Split String SQL Server 5]

Example of using multiple special characters in the string:

When more than one special character appears in the string, the user-defined function we created splits only on the single character passed in the @delimiter parameter.

select * from [fn_Split_String_AnySCharacter] ('mango,orange%pineapple',',')

[Image: User Defined Function Split String SQL Server 6]

You'll also like:

"CHOOSE Function"
"Difference Between Clustered And Non-Clustered Index In SQL Server"
"Advantages Of UNION ALL Over Union"