Compare Stored Procedure Output by Creating a Test Harness

When optimizing stored procedures, you often need to rewrite a procedure in order to speed it up. But when you're finished, how can you be sure the result set is still the same? This article will show you how to run a thorough test to make sure. We'll execute the old and new stored procedures, then compare their result sets using a checksum aggregate. We'll also compare row counts and elapsed execution time. The benefit of this test harness is that it lets you loop through and execute the procedures with as many dynamic parameter values as you wish, then compare the result set of the old procedure vs. the new procedure for each execution.

First, we need to do a couple of things to set up our environment:

1. Set up a local linked server
2. Create a wrapper procedure

We need the local linked server in order to dynamically retrieve the result set of the stored procedure using OPENQUERY. We could skip this step and create the temp table explicitly if we knew all the columns and data types being returned, but this is often a hassle: a procedure can return many columns, and explicitly creating a temp table for each procedure we want to test is a pain. And that's probably why you're here in the first place, isn't it? To create a local linked server, do the following:

[cc lang="sql"]
EXEC sp_addlinkedserver
    @server = 'LOCALSERVER',
    @srvproduct = 'SQLSERVER',
    @provider = 'SQLNCLI',
    @datasrc = 'Your_Server_Name_Here';
[/cc]

For this example, the server name […] Continue reading ...
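Once the linked server exists, the harness can capture each procedure's result set without declaring any columns up front. A minimal sketch of the idea (the database, procedure, and parameter names are hypothetical; CHECKSUM_AGG over BINARY_CHECKSUM(*) is one way to fingerprint a result set, though checksums can theoretically collide):

[cc lang="sql"]
-- Capture each result set into a temp table via OPENQUERY;
-- SELECT INTO infers the columns and data types for us
SELECT * INTO #old_results
FROM OPENQUERY(LOCALSERVER, 'EXEC YourDb.dbo.usp_GetOrders_Old @cust_id = 1');

SELECT * INTO #new_results
FROM OPENQUERY(LOCALSERVER, 'EXEC YourDb.dbo.usp_GetOrders_New @cust_id = 1');

-- Compare row counts and checksum aggregates for this execution
SELECT
    old_rows   = (SELECT COUNT(*) FROM #old_results),
    new_rows   = (SELECT COUNT(*) FROM #new_results),
    old_chksum = (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM #old_results),
    new_chksum = (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM #new_results);
[/cc]

Matching row counts plus matching checksum aggregates is strong (though not absolute) evidence that the two procedures return the same rows.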

Avoiding IF.. ELSE by using WHERE EXISTS

When coming from a programming background, it's natural to want to frame conditional operations within the familiar IF..ELSE construct. This happens in a lot of SQL code I have worked with (and I used to be a contributor too). The technique below is based on a common scenario: say you want to insert new records that do not exist, or update records that do exist. The familiar programming way of thinking typically leads to the following method:

[cc lang="sql"]
CREATE PROCEDURE p_customer_totals_ins_upd
    @cust_id int,
    @order_amt numeric
AS
IF NOT EXISTS (SELECT 1 FROM customer_totals WHERE cust_id = @cust_id)
BEGIN
    INSERT INTO customer_totals
    (
        cust_id,
        order_amt
    )
    SELECT
        cust_id = @cust_id
        ,order_amt = @order_amt
END
ELSE
BEGIN
    UPDATE customer_totals
    SET order_amt = order_amt + @order_amt
    WHERE cust_id = @cust_id
END
[/cc]

The problem with the above method is that it does not allow for concurrency. If this procedure is called through two different connections at the exact same time, the chances are good that the same cust_id will be inserted into the customer_totals table twice. This is because the IF NOT EXISTS check runs as a separate statement from the INSERT, in its own transaction. So then the answer should be to wrap a BEGIN TRAN around the whole thing, right? No, please don't: that will only increase the chance of deadlocks in your system, and there's a better way.

Using NOT EXISTS for Inserts

The answer comes from understanding that a single SQL statement is, by itself, a transaction. So within the same transaction as […]
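The excerpt is cut off, but the pattern it is leading toward moves the existence check into the INSERT statement itself, so the check and the insert execute atomically. A sketch of one common arrangement (not necessarily the article's exact continuation):

[cc lang="sql"]
-- Update first; this affects zero rows if the customer is new
UPDATE customer_totals
SET order_amt = order_amt + @order_amt
WHERE cust_id = @cust_id

-- Insert only when no row was updated; the NOT EXISTS check is
-- part of the INSERT statement, so no separate IF..ELSE is needed
IF @@ROWCOUNT = 0
BEGIN
    INSERT INTO customer_totals (cust_id, order_amt)
    SELECT @cust_id, @order_amt
    WHERE NOT EXISTS
    (
        SELECT 1 FROM customer_totals WHERE cust_id = @cust_id
    )
END
[/cc]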

DROP INDEX SYNTAX

Here is the syntax needed in order to drop a single index:

[cc lang="sql"]
USE YourDatabaseName;
DROP INDEX IX_Product_1 ON dbo.Product;
[/cc]

You can also drop multiple indexes within a single statement:

[cc lang="sql"]
USE YourDatabaseName;
DROP INDEX
    IX_Product_1 ON dbo.Product,
    IX_Customer_1 ON dbo.Customer;
[/cc]

Permissions

Dropping an index requires ALTER permission on the table. This permission is held implicitly by members of the sysadmin fixed server role and the db_owner and db_ddladmin fixed database roles.

Locking / Blocking

When dropping an index, a schema modification (Sch-M) lock is acquired for the duration of the drop. This lock is not compatible with any other lock type, so the operation will cause blocking. It is required in order to notify all dependent execution plans that they need to recompile. Even using the ONLINE option will not avoid the schema modification lock. The only time the ONLINE option helps is when dropping a clustered index (moving the data to a heap). Continue reading ...
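For that clustered-index case, the ONLINE option keeps the table available while its rows are rebuilt as a heap. A hypothetical sketch (the index and table names are assumed; ONLINE index operations require Enterprise Edition, and this form applies only to clustered indexes not backing a constraint):

[cc lang="sql"]
-- Drop a clustered index online; the table's rows are rebuilt
-- as a heap while remaining available to other queries
DROP INDEX CIX_Product ON dbo.Product WITH (ONLINE = ON);
[/cc]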

SET TRANSACTION ISOLATION LEVEL

This statement is used to set the isolation level for either a connection or a stored procedure. The most typical use I've seen is at the top of a stored procedure in order to avoid locking and deadlocks. This is a cleaner alternative to using WITH (NOLOCK) hints on individual tables. If you set the isolation level within a procedure, the isolation level will revert to its previous level once the procedure finishes. The syntax is:

[cc lang="sql"]
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
[/cc]

The available options are:

READ UNCOMMITTED – Allows dirty reads, meaning data can be read from a transaction that has not yet committed.
READ COMMITTED – Only reads data from transactions that have already been committed. Current transactions must wait for transactions referencing common resources to finish. This is SQL Server's default.
REPEATABLE READ – Shared locks are held on all data read until the transaction completes, so rows that have been read cannot be changed by other transactions.
SNAPSHOT – Allows all data to be read without taking locks by maintaining row versions of modified data in a version store. This is the only level here that is not lock based; it resembles Oracle's default multi-version behavior.
SERIALIZABLE – Locks are held on all data read, and inserts into the key ranges read are prevented, until the transaction completes.

Example

This statement needs to be placed before the transactions whose isolation level you want to modify:

[cc lang="sql"]
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT firstname
    ,salary
FROM dbo.employee e
JOIN salary s
    ON e.employeeid = s.employeeid;
[/cc]

The comparable method using hints would […]
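The excerpt ends before showing the hint-based version, but since NOLOCK is the table-level counterpart of READ UNCOMMITTED, the equivalent query would need a hint on every table, along these lines:

[cc lang="sql"]
-- Same dirty-read behavior, but applied per table rather than
-- once for the whole connection or procedure
SELECT firstname
    ,salary
FROM dbo.employee e WITH (NOLOCK)
JOIN salary s WITH (NOLOCK)
    ON e.employeeid = s.employeeid;
[/cc]

This is why the SET statement is cleaner: one line at the top instead of a hint repeated on every table in every query.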

How to Create a Type2 SCD (Slowly Changing Dimension)

This article could just as well be called creating a historical snapshot table. This type of table is also referred to as a dimension, depending on what kind of data repository it's located in. Personally, I prefer to keep a historical snapshot table in a normalized data store that contains history. This normalized data store is typically the first stopping point from the source system. It is useful because it keeps historical snapshots of what the data looked like in the source system at any point in time.

To get started, let's create a history table we will use to store the historical values. From above, we see that we have 4 additional columns:

Person_HistoryID – a surrogate key specific to our new table.
ChkSum – contains a CHECKSUM of all the columns, used to compare data discrepancies.
DateTime_From – the beginning date on which this record is effective.
DateTime_To – the ending date after which this record is no longer effective.

First, let's create our sample source table and populate it with some data:

[cc lang="sql"]
CREATE TABLE Person(
    PersonID int IDENTITY(1,1) NOT NULL,
    Title nvarchar(8) NULL,
    FirstName nvarchar(50) NOT NULL,
    MiddleName nvarchar(50) NULL,
    LastName nvarchar(50) NOT NULL,
    EmailAddress nvarchar(50) NULL,
    Phone nvarchar(25) NULL,
    ModifiedDate datetime NOT NULL
)

SET IDENTITY_INSERT [dbo].[Person] ON

INSERT [dbo].[Person] ([PersonID], [Title], [FirstName], [MiddleName], [LastName], [EmailAddress], [Phone], [ModifiedDate])
VALUES (1, N'Mr.', N'Gustavo', NULL, N'Achong', N'[email protected]', N'398-555-0132', CAST(0x000096560110E30E AS DateTime))

INSERT [dbo].[Person] ([PersonID], [Title], [FirstName], [MiddleName], [LastName], [EmailAddress], [Phone], [ModifiedDate])
VALUES (2, N'Ms.', N'Catherine', N'R.', […] Continue reading ...
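The history table's definition is not shown in this excerpt, but given the source table and the four additional columns described, it might look something like this sketch (the table name and column types are assumptions based on the description above):

[cc lang="sql"]
CREATE TABLE Person_History(
    Person_HistoryID int IDENTITY(1,1) NOT NULL, -- surrogate key for the history table
    PersonID int NOT NULL,                       -- key from the source Person table
    Title nvarchar(8) NULL,
    FirstName nvarchar(50) NOT NULL,
    MiddleName nvarchar(50) NULL,
    LastName nvarchar(50) NOT NULL,
    EmailAddress nvarchar(50) NULL,
    Phone nvarchar(25) NULL,
    ModifiedDate datetime NOT NULL,
    ChkSum int NOT NULL,                -- CHECKSUM of the tracked columns
    DateTime_From datetime NOT NULL,    -- date the record became effective
    DateTime_To datetime NULL           -- NULL while this is the current version
)
[/cc]

On each load, a changed ChkSum would close out the current row (set DateTime_To) and insert a new row with a fresh DateTime_From, which is the essence of a Type 2 slowly changing dimension.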
