Avoiding IF.. ELSE by using WHERE EXISTS

When coming from a programming background, it's natural to want to frame conditional operations within the familiar IF..ELSE constructs. This happens in a lot of SQL code I have worked with (and I used to write it this way too). The technique below is based on a common scenario. Say you want to insert new records that do not exist, or update records that do exist. The familiar programming way of thinking typically leads to the following method:

[cc lang="sql"]
CREATE PROCEDURE p_customer_totals_ins_upd
    @cust_id   int,
    @order_amt numeric
AS
IF NOT EXISTS (SELECT 1 FROM customer_totals WHERE cust_id = @cust_id)
BEGIN
    INSERT INTO customer_totals
    (
        cust_id,
        order_amt
    )
    SELECT
         cust_id   = @cust_id
        ,order_amt = @order_amt
END
ELSE
BEGIN
    UPDATE customer_totals
    SET order_amt = order_amt + @order_amt
    WHERE cust_id = @cust_id
END
[/cc]

The problem with the above method is that it does not allow for concurrency. Meaning, if this procedure is called through two different connections at the exact same time, the chances are good the same cust_id will be inserted into the customer_totals table twice. This is because the IF NOT EXISTS statement operates in a separate transaction from the INSERT statement. So then the answer should be to wrap a BEGIN TRAN around the whole thing, right?... No, please don't. That will only increase the chance of deadlocks in your system, and there's a better way.

Using NOT EXISTS for Inserts

The answer comes by understanding that SQL statements by themselves are a transaction. So within the same transaction as […]
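The excerpt cuts off before the fix, but as a rough sketch of the single-statement direction it is heading in (reusing the customer_totals table and procedure parameters from above, and assuming a unique key on cust_id as the real safety net), the existence check can live inside the INSERT itself:

[cc lang="sql"]
-- Sketch only: the UPDATE runs first; if it touched nothing, the INSERT
-- performs its own existence check within the same statement.
UPDATE customer_totals
SET order_amt = order_amt + @order_amt
WHERE cust_id = @cust_id;

IF @@ROWCOUNT = 0
BEGIN
    INSERT INTO customer_totals (cust_id, order_amt)
    SELECT @cust_id, @order_amt
    WHERE NOT EXISTS
    (
        SELECT 1 FROM customer_totals WHERE cust_id = @cust_id
    );
END
[/cc]

Because the NOT EXISTS lives inside the INSERT, the check and the write happen in one statement rather than across two, which is the concurrency point the article goes on to make.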

Continue reading ...

SET TRANSACTION ISOLATION LEVEL

This statement is used to set the isolation level for either a connection or a stored procedure. The most typical use I've seen is at the top of a stored procedure in order to avoid locking and deadlocks. This is a cleaner alternative to using WITH (NOLOCK) hints on tables. If you set the isolation level within a procedure, the isolation level will revert to its previous level once the procedure finishes. The syntax is:

[cc lang="sql"]
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
[/cc]

The available options are:

READ UNCOMMITTED – Allows dirty reads, meaning data can be read from a transaction that is not yet complete.

READ COMMITTED – Only reads data from transactions that have already been committed. Current transactions must wait for transactions referencing common resources to finish. This is the SQL Server default.

REPEATABLE READ – Shared locks are held on all data that is read until the transaction completes, so other transactions cannot modify it.

SNAPSHOT – Comparable to Oracle's default behavior. This level allows all data to be read without using locks by maintaining a snapshot of the data being modified in "versioning" tables (the version store). This is the only isolation level that is not lock based.

SERIALIZABLE – Locks are held on the data that is read, and key-range locks prevent inserts into that range until the transaction completes.

Example

This statement needs to be placed before the transaction whose isolation you want to modify.

[cc lang="sql"]
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT
     firstname
    ,salary
FROM dbo.employee e
JOIN salary s
    ON s.employeeid = e.employeeid;
[/cc]

The comparable method using hints would […]
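The excerpt ends at the hint-based comparison, but as a quick sketch (reusing the employee and salary tables from the example above), the equivalent written with table hints would look something like this, with each table carrying its own NOLOCK hint instead of one session-level setting:

[cc lang="sql"]
-- Sketch of the hint-based equivalent of READ UNCOMMITTED
SELECT
     firstname
    ,salary
FROM dbo.employee e WITH (NOLOCK)
JOIN salary s WITH (NOLOCK)
    ON s.employeeid = e.employeeid;
[/cc]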

Continue reading ...

Auto Update Statistics & Auto Create Statistics

The ability to create and update statistics is a crucial liberty for the query optimizer. The optimizer needs to know how it is going to query the underlying tables, and when the statistics do not match the data, it is very probable that it will choose an inefficient method for querying. I've seen and heard a lot of debate from DBAs who think they should turn this off. Of all those DBAs, I think there is one who was correct, and he ended up convincing me of his scenario, so I'm not as closed minded as I was before. His server was so bottlenecked during peak time that he updated the stats during off hours instead. I'm very wary of those who blindly say "it's better to turn it off" without any factual statistics to back them up. Most of the DBAs who think they should turn it off are stuck in the SQL Server 7.0 days. At that time, turning this feature off was acceptable because the automatic updates interfered with the currently running queries.

Example

Case in point: we had a DBA who did turn this feature off in an environment that mirrored production. I overheard the developers complaining how their queries took twice as long on this box. Looking at perfmon, I saw no physical bottlenecks. Since this was the case, I turned to statistics and found both auto update and auto create turned off. After turning them back on, the box was just as fast as production. Long story short, these […]
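For reference, here is a minimal sketch of how to check and re-enable both options, assuming a database named MyDB (swap in your own database name):

[cc lang="sql"]
-- Check the current settings (1 = on, 0 = off)
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = 'MyDB';

-- Turn both options back on
ALTER DATABASE MyDB SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS ON;
[/cc]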

Continue reading ...

Using DBCC UPDATEUSAGE

When DBCC UPDATEUSAGE is run, page and row counts are updated. When this information is incorrect, it can produce incorrect reports of how much space is used by tables and indexes. The most likely cause of incorrect usage stats is when a database is upgraded to a new compatibility level of SQL Server (e.g. 2000 to 2005). Therefore it's important to run this command after upgrading a database. It may not be a bad idea to run it on a weekly or monthly schedule if your database undergoes frequent DDL modifications. To execute, you can run one of the following commands:

[cc lang="sql"]
USE MyDB;
GO
DBCC UPDATEUSAGE(0);      -- Execute for the current database
GO
DBCC UPDATEUSAGE(MyDB);   -- Execute using the database name
GO
DBCC UPDATEUSAGE(5);      -- Execute using the database ID (5 is only an example; use your database's ID)
GO
[/cc]

This may take some time, so make sure to run it in a non-production environment or during off hours, as it may cause blocking. You can also run it for a specific table or index. If you run it for a table, then the usage stats are updated for all indexes on that table.

[cc lang="sql"]
-- Update for a table (and its indexes)
DBCC UPDATEUSAGE(AdventureWorks, 'Sales.SalesOrderDetail');

-- Update usage for a single index
DBCC UPDATEUSAGE(AdventureWorks, 'Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID');
[/cc]

That's basically it. Now you may be running these commands and are not seeing any output. Well, that's because the usage stats are already correct. But if you want to spoof the stats in order to see the […]
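If you want to see the page and row counts that UPDATEUSAGE corrects, one quick sketch (using the AdventureWorks table from above) is to compare the output of sp_spaceused before and after; sp_spaceused also accepts an @updateusage flag that performs the correction as part of the report:

[cc lang="sql"]
USE AdventureWorks;
GO
-- Report the stored (possibly stale) space usage
EXEC sp_spaceused 'Sales.SalesOrderDetail';
GO
-- Correct the counts, then report them
EXEC sp_spaceused 'Sales.SalesOrderDetail', @updateusage = 'true';
GO
[/cc]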

Continue reading ...

Execution Plans

The execution plans SQL Server creates and uses are a huge topic with many intricacies. Now, I have never spoken to anyone on the SQL Server development team, however I would imagine that there are some extremely sharp people on the query optimization team. In a nutshell, this is where the magic happens in SQL Server.

Overview

An execution plan is a behind-the-scenes look at the road a query takes in order to deliver its final result. Plans are generated from the underlying data statistics combined with what the query and its parameters are trying to accomplish. When the initial query is read, the execution plan generation engine, or "Query Optimizer", searches for the best way to deliver the results of the query in the quickest way possible. To do this, it needs to know what the underlying data looks like. This is facilitated by the statistics that are stored for each table, column, and index. With these statistics in hand, the goal is to eliminate the largest number of records as quickly as possible, and to iterate through this process until the final result is delivered. That said, it is not an easy job. There are many variables that come into play when determining a query's path. A few of these include the selection of indexes, join algorithms, join order, and parallelism.

Displaying the Plan

As SQL developers, it is our job to understand how to read execution plans so we can see how the query is being executed. To view […]
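The excerpt ends just before the how-to, but as a quick sketch, plans can be displayed from T-SQL with the SHOWPLAN and STATISTICS options (or via "Include Actual Execution Plan" in Management Studio). The query below reuses the dbo.employee table from the isolation-level example above purely for illustration:

[cc lang="sql"]
-- Estimated plan: returns the plan XML without executing the query
SET SHOWPLAN_XML ON;
GO
SELECT firstname FROM dbo.employee WHERE employeeid = 1;
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: executes the query and returns the plan alongside the results
SET STATISTICS XML ON;
GO
SELECT firstname FROM dbo.employee WHERE employeeid = 1;
GO
SET STATISTICS XML OFF;
GO
[/cc]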

Continue reading ...
