How to do basic Performance Tuning on Microsoft SQL Server

  • Posted on May 20, 2015
1)  FIND THE CULPRITS

Like any other software, MS SQL Server is a computer program, albeit a complex one, written by Microsoft. So when there is a problem with SQL Server, we need to understand why this program is not running as we expect. We need to pull and push data through SQL Server as quickly and accurately as possible. When we face issues, the usual reasons are:

  • SQL Server (a complex program) needs certain hardware and installation settings that we are not providing properly.
  • Given the way SQL Server is implemented and the way it interprets T-SQL, we are not providing it with proper T-SQL code.

Even though MS SQL Server is proprietary software, Microsoft gives us many ways to see what's going on inside the server so that we can use it efficiently. If SQL Server is running without errors, the first step is to look at the wait statistics of its threads. SQL Server uses a thread for every user request, and a thread is just another program inside the complex program that is SQL Server. (These are not operating system threads; they are SQLOS threads, SQLOS being a pseudo operating system that SQL Server runs on.)

We can find wait statistics using the sys.dm_os_wait_stats DMV. There are many scripts online for querying this view to suit your needs. I like Paul Randal's script (http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts): WITH [Waits] AS (SELECT [wait_type], [wait_time_ms] / 1000.0 […]
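
As a minimal sketch of the idea (not Paul Randal's full script, which filters a much longer list of benign wait types), a query against this DMV might look like the following; the excluded wait types here are just a small sample:

    -- Sketch: top waits by total wait time, split into resource and signal time.
    SELECT TOP (10)
           [wait_type],
           [wait_time_ms] / 1000.0                           AS [wait_s],
           ([wait_time_ms] - [signal_wait_time_ms]) / 1000.0 AS [resource_s],
           [signal_wait_time_ms] / 1000.0                    AS [signal_s],
           [waiting_tasks_count]                             AS [wait_count]
    FROM sys.dm_os_wait_stats
    WHERE [wait_type] NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                              N'SQLTRACE_BUFFER_FLUSH', N'BROKER_TASK_STOP')
    ORDER BY [wait_time_ms] DESC;

High resource waits point to hardware or configuration pressure; high signal waits point to CPU scheduling pressure.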

Creating log tables to track running processes

In any SQL Server system, you will have jobs that run on a schedule or at specified intervals. In these cases, it's always nice to keep track of certain metrics over time, so you can compare runs when things go wrong or see how performance has changed. In my experience these log tables are indispensable for troubleshooting, and for running delta jobs. Here we'll show a small example of the log tables you can create to facilitate this. First, let's look at the DDL for two different tables:

    CREATE TABLE [dbo].[ProcessLogMaster](
        [process_log_master_id] [INT] IDENTITY(1,1) CONSTRAINT PK_process_log_master PRIMARY KEY CLUSTERED NOT NULL,
        [process_master_name]   [VARCHAR](100) NOT NULL,
        [datetime_start]        [datetime] NULL DEFAULT (getdate()),
        [datetime_end]          [datetime] NULL,
        [elapsed_ms]            [INT] NULL,
        [rows_updated]          [INT] NULL,
        [rows_inserted]         [INT] NULL,
        [rows_deleted]          [INT] NULL,
        [complete]              [tinyint] NULL DEFAULT ((0)),
        [success]               [tinyint] NULL,
        [error_description]     [VARCHAR](MAX) NULL
    )

    CREATE TABLE [dbo].[ProcessLogDetail](
        [process_log_detail_id] [INT] IDENTITY(1,1) NOT NULL CONSTRAINT [PK_process_log_detail] PRIMARY KEY,
        [process_log_master_id] [INT] NOT NULL,
        [process_detail_name]   [VARCHAR](100) NOT NULL,
        [datetime_start]        [datetime] NULL,
        [datetime_end]          [datetime] NULL,
        [elapsed_ms]            [INT] NULL,
        [rows_updated]          [INT] NULL,
        [rows_inserted]         [INT] NULL,
        [rows_deleted]          [INT] NULL,
        [complete]              [tinyint] NULL,
        [success]               [tinyint] NULL,
        [error_description]     [VARCHAR](MAX) NULL
    )

What we have here are two tables that can be used to describe job steps. The […]
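
As a hedged sketch of how a job might use the master table (the job name 'NightlyDeltaLoad' and the @ins bookkeeping variable are illustrative assumptions, not part of the tables above):

    -- Hypothetical usage: open a master log row at job start, close it at the end.
    DECLARE @log_id INT,
            @t0     DATETIME = GETDATE(),
            @ins    INT      = 0;  -- would be captured from the job's DML via @@ROWCOUNT

    INSERT INTO dbo.ProcessLogMaster (process_master_name)
    VALUES ('NightlyDeltaLoad');   -- datetime_start and complete use their defaults
    SET @log_id = SCOPE_IDENTITY();

    -- ... the job's real work runs here, accumulating @ins ...

    UPDATE dbo.ProcessLogMaster
    SET datetime_end  = GETDATE(),
        elapsed_ms    = DATEDIFF(MILLISECOND, @t0, GETDATE()),
        rows_inserted = @ins,
        complete      = 1,
        success       = 1
    WHERE process_log_master_id = @log_id;

Each step of the job would log a row to ProcessLogDetail in the same fashion, linked back through process_log_master_id.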

How to cache stored procedure results using a hash key

There are a lot of design patterns that lend themselves to creating the shortest path to the data. One of the most efficient is caching stored procedure result sets. To do this, we read the incoming parameters and create a cache key. This cache key is then stored along with the stored procedure's result set, as a unique identifier representing that combination of the procedure's parameters. The caveat with this method is that the business requirement needs to allow stale data.

There are times when you will need to use values other than the passed-in parameters to create the cache key. Examples include datetime data types or keys that are unique (like a customer key). If the hash created from the parameters is unique, you will never reuse that cached dataset again. With this in mind, you first have to determine whether the procedure is cacheable at all. Another concern is how long you can serve stale data: 30 seconds, 1 minute, 1 hour? Any interval can be accommodated by clearing the cache tables at the desired frequency.

Design

Let's look at the basic workflow for how this procedure will work. First of all, we will need to hash all the parameters that are coming into the procedure (unless they are unique, in which case we may not be able to cache, or we can possibly […]
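
A minimal sketch of the hashing step, assuming SQL Server 2012+ for HASHBYTES with SHA2_256 and CONCAT; the procedure name, parameters, and the dbo.ResultCache table are all hypothetical stand-ins, not the article's actual schema:

    -- Hypothetical: derive a cache key from the incoming parameters.
    CREATE PROCEDURE dbo.GetOrdersByRegion_Cached
        @Region VARCHAR(50),
        @Status VARCHAR(20)
    AS
    BEGIN
        DECLARE @cache_key VARBINARY(32) =
            HASHBYTES('SHA2_256', CONCAT(@Region, '|', @Status));

        IF EXISTS (SELECT 1 FROM dbo.ResultCache WHERE cache_key = @cache_key)
            -- cache hit: serve the stored copy
            SELECT order_id, amount
            FROM dbo.ResultCache
            WHERE cache_key = @cache_key;
        ELSE
        BEGIN
            -- cache miss: run the real query, store it under @cache_key, return it
            INSERT INTO dbo.ResultCache (cache_key, order_id, amount)
            SELECT @cache_key, order_id, amount
            FROM dbo.Orders
            WHERE region = @Region AND [status] = @Status;

            SELECT order_id, amount
            FROM dbo.ResultCache
            WHERE cache_key = @cache_key;
        END
    END

A scheduled cleanup job deleting rows older than the allowed staleness window would complete the pattern.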

Overcoming Issues with TempDB Contention

One of the possible bottlenecks on a busy SQL Server is contention in TempDB. The symptoms of this bottleneck are mostly excessive PAGELATCH_EX waits blocking one another (you can use sp_who3 to see whether those wait types pop up during your busy times). The cause is either explicit or implicit use of TempDB: explicit being the creation of table variables and temp tables, implicit being the worktables generated by complex execution plans. Luckily there are a number of steps you can take to overcome these issues.

1) Enable trace flag 1118

There is some argument as to whether this trace flag actually makes a difference; MS SQL CSS engineers state it's already the default behavior in 2008. What the flag does is allocate full uniform extents to new objects rather than single pages from mixed extents (if I understand it correctly), which reduces contention on the allocation pages. It did seem to help our database during one point of contention, and I'm not willing to test that by turning it off again. With that said, "it can't hurt to turn it on" is the consensus of what I've read.

2) Put TempDB on a PCIe SSD

PCIe cards are getting cheaper every month. Just putting this in reduced our write latency from 2.3 ms to 0.3 ms. What that translates to is that it's quicker to get "in and out" of tempdb, allowing less contention and higher throughput. Also, while I'm too lazy to prove it, […]
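
Turning the flag on globally is a one-liner; note that to survive instance restarts it should also be added as a -T1118 startup parameter:

    -- Enable trace flag 1118 for the whole instance, then verify it is active.
    DBCC TRACEON (1118, -1);
    DBCC TRACESTATUS (1118);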

CTE’s vs Temp tables – an Opinion

This is one of those topics that gets people fired up, but here goes. I am mostly an explicit temp table developer; by contrast, I am not an implicit temp table developer. What I mean by that is: in writing SQL, you simply cannot avoid the use of tempdb. Either you write to tempdb by breaking queries out and intentionally creating temp tables, or you write to tempdb by not breaking queries out (keeping them as long, convoluted statements with long, convoluted query plans) and letting the optimizer create "worktables". In either case you are writing to tempdb whether you like it or not. The difference is what breaking them out gives you:

  • You can control the size of the result set being written to disk.
  • You can ensure that the execution plan is simple.
  • You can reuse the materialized temp table data throughout the entire procedure.
  • Temp tables carry statistics and can be indexed.

Comparing temp table development to CTE development is somewhat of an apples-and-oranges comparison. A CTE uses nothing special on the back end; it is simply a (potentially) clean way to write a query. The difference is this, however: with a CTE, the execution plan of the main query becomes intertwined with the CTE, leaving more room for the optimizer to get confused. By contrast, when a temp table divides two queries, the optimizer is not intertwined with whatever created the temp table, and the execution plans stay simple and […]
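
As an illustrative sketch of the contrast (dbo.Orders and its columns are assumed for the example, not taken from the article), here is the same logic written both ways:

    -- CTE version: the optimizer compiles one combined plan for everything.
    WITH RecentOrders AS (
        SELECT customer_id, SUM(amount) AS total
        FROM dbo.Orders
        WHERE order_date >= DATEADD(DAY, -30, GETDATE())
        GROUP BY customer_id
    )
    SELECT customer_id, total
    FROM RecentOrders
    WHERE total > 1000;

    -- Temp table version: materialize first (with statistics, optionally indexed),
    -- then query; the two statements get separate, simpler plans.
    SELECT customer_id, SUM(amount) AS total
    INTO #RecentOrders
    FROM dbo.Orders
    WHERE order_date >= DATEADD(DAY, -30, GETDATE())
    GROUP BY customer_id;

    CREATE INDEX ix_recent_total ON #RecentOrders (total);

    SELECT customer_id, total
    FROM #RecentOrders
    WHERE total > 1000;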
