Using IDENTITY_INSERT

The only way to insert an explicit value into a column defined as an IDENTITY (or autonumber) column is to set the IDENTITY_INSERT option to ON before inserting the data. To illustrate, let's create a table that has an identity column defined.

[cc lang="sql"]
-- Create table with identity column
CREATE TABLE MyOrders
(
    OrderID int IDENTITY(1,1),
    ProductName varchar(20)
);

-- Now try to insert into the table
INSERT INTO dbo.MyOrders
(
    OrderID,
    ProductName
)
VALUES
(
    1,
    'socks'
);
[/cc]

Executing the above code, we get the following error: Cannot insert explicit value for identity column in table 'MyOrders' when IDENTITY_INSERT is set to OFF. To fix this, we simply need to set IDENTITY_INSERT to ON. Keep in mind that to do this you must own the table or have ALTER permission on it.

[cc lang="sql"]
SET IDENTITY_INSERT dbo.MyOrders ON;

INSERT INTO dbo.MyOrders
(
    OrderID,
    ProductName
)
VALUES
(
    1,
    'socks'
);

SET IDENTITY_INSERT dbo.MyOrders OFF;
[/cc]

Make sure to turn this option off after you are done with it. Otherwise, you will not be able to turn it on for any other table from within the same session until it is off for this table. Continue reading ...
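As a quick illustration of that one-table-per-session rule, here is a minimal sketch (the second table, MyOrderLines, is hypothetical and not part of the original example):

[cc lang="sql"]
-- Hypothetical second table, used only to illustrate the session rule
CREATE TABLE MyOrderLines
(
    OrderLineID int IDENTITY(1,1),
    OrderID int
);

SET IDENTITY_INSERT dbo.MyOrders ON;

-- While MyOrders still has IDENTITY_INSERT ON, the next statement fails
-- with an error along the lines of: "IDENTITY_INSERT is already ON for
-- table 'dbo.MyOrders'. Cannot perform SET operation for table 'MyOrderLines'."
SET IDENTITY_INSERT dbo.MyOrderLines ON;

-- Turning it off for MyOrders first lets the second table succeed
SET IDENTITY_INSERT dbo.MyOrders OFF;
SET IDENTITY_INSERT dbo.MyOrderLines ON;
SET IDENTITY_INSERT dbo.MyOrderLines OFF;
[/cc]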

Monitor Queries against a Table

I recently had a need to monitor inserts against a particular table in order to determine what kind of locks they were acquiring. Because we could not run traces on the system, I had to resort to a roll-your-own approach to monitoring the table. The goal was to determine what kind of locking was occurring and, ideally, to associate each lock with the executing query. The following code uses DMVs to trap a sample of the queries running against the table. Please note that this will not trap all queries; however, it does capture a good number of samples. The script runs in an endless loop, so be sure to hit cancel at some point. Also, for some reason the SQL statement is not always trapped. That was not critical for me, as I mostly needed the locks, but if someone figures it out, please post.

[cc lang="sql"]
-- Capture query activity against a table using DMVs
DECLARE @TableName varchar(255);

-- Specify the table you want to monitor
SET @TableName = 'Sales.SalesOrderDetail';

DECLARE @ObjectID int;
SET @ObjectID = (SELECT OBJECT_ID(@TableName));

IF OBJECT_ID('tempdb..##Activity') IS NOT NULL
BEGIN
    DROP TABLE ##Activity;
END;

-- Create table
SELECT TOP 0 *
INTO ##Activity
FROM sys.dm_tran_locks WITH (NOLOCK);

-- Add additional columns
ALTER TABLE ##Activity ADD
    SQLStatement VARCHAR(MAX),
    SQLText VARCHAR(MAX),
    LoginName VARCHAR(200),
    HostName VARCHAR(50),
    Transaction_Isolation VARCHAR(100),
    DateTimeAdded DATETIME;

DECLARE @Rowcount INT = 0;

WHILE 1 = 1
BEGIN
    INSERT INTO ##Activity
    SELECT […]
[/cc]
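Since the excerpt cuts off mid-script, the following is a minimal sketch (my own, not the article's full listing) of the core DMV join the approach relies on: current locks on a single object, tied back to the requesting session's SQL text where it is still available.

[cc lang="sql"]
-- Sketch: locks currently held or requested on one table, with SQL text.
-- Uses the same AdventureWorks table name as the script above.
DECLARE @ObjectID int = OBJECT_ID('Sales.SalesOrderDetail');

SELECT l.request_session_id,
       l.resource_type,    -- OBJECT, PAGE, KEY, RID, ...
       l.request_mode,     -- S, X, IX, U, ...
       l.request_status,   -- GRANT, WAIT, CONVERT
       t.text AS SQLText   -- NULL if the request has already completed
FROM sys.dm_tran_locks AS l
LEFT JOIN sys.dm_exec_requests AS r
       ON r.session_id = l.request_session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE l.resource_database_id = DB_ID()
  AND l.resource_type = 'OBJECT'  -- for OBJECT locks, the entity id is the object id
  AND l.resource_associated_entity_id = @ObjectID;
[/cc]

This also hints at why the SQL statement is not always trapped: a lock row can outlive the request that produced it, in which case sys.dm_exec_requests no longer has a matching row and the text comes back NULL.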

Execution Plans

The execution plans SQL Server creates and uses are a huge topic with many intricacies. I have never spoken to anyone on the SQL Server development team, but I would imagine there are some extremely sharp people on the query optimization team. In a nutshell, this is where the magic happens in SQL Server.

Overview

An execution plan is a behind-the-scenes look at the road a query takes in order to deliver its final result. Plans are generated from the underlying data statistics combined with what the query and its parameters are trying to accomplish. When the initial query is read, the execution plan generation engine, or "Query Optimizer", searches for the best way to deliver the results of the query as quickly as possible. To do this, it needs to know what the underlying data looks like, which is facilitated by the statistics that are stored for each table, column, and index. With these statistics in hand, the goal is to eliminate the largest number of records as quickly as possible and to iterate through this process until the final result is delivered. That said, it is not an easy job. Many variables come into play when determining a query's path, including the selection of indexes, join algorithms, join order, and parallelism.

Displaying the Plan

As SQL developers, it is our job to understand how to read execution plans so we can see how a query is being executed. To view […] Continue reading ...
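The excerpt is truncated right at this point, so as a minimal sketch (standard T-SQL options, not the article's own listing), here are two common ways to return a plan from a query window:

[cc lang="sql"]
-- Estimated plan: returns the plan XML without executing the query
SET SHOWPLAN_XML ON;
GO
SELECT ProductName FROM dbo.MyOrders WHERE OrderID = 1;
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: executes the query and returns the plan with runtime details
SET STATISTICS XML ON;
GO
SELECT ProductName FROM dbo.MyOrders WHERE OrderID = 1;
GO
SET STATISTICS XML OFF;
GO
[/cc]

In Management Studio, the same two views are available graphically as "Display Estimated Execution Plan" and "Include Actual Execution Plan".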

Understanding Batch Requests / sec

SQL Server's Batch Requests/sec counter represents the number of SQL statements being executed per second. In my opinion, this is the one metric that gives the best baseline on how your server is performing. It cannot be looked at by itself, however; it needs to be correlated with other metrics (especially CPU usage) to get an overall understanding of how your server is performing. The overall goal of any SQL Server is to achieve the most batch requests/sec while utilizing the least amount of resources. On some busy machines I've worked with, this counter averaged around 25,000 batch requests/sec during peak time. This peak throughput is heavily dependent on the architectural design of the system, including procedure, table, and index design. One notable example of this was a system I worked with whose procedures were written using loops (ugh). The best batch requests/sec the system could ever achieve was only 200. I was very surprised to see this initially, but digging deeper into the code I became less and less shocked. Ultimately I found one piece of code that affected the throughput of the entire system: a scalar UDF defined as a computed column in the busiest table on the system (don't get me started). Anyway, after rewriting that one process, the system reached batch request peaks of over 3,500! That the system could now achieve that number was a big achievement. It meant the overall throughput of the […]
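As an aside, if you want to watch this counter from inside SQL Server rather than PerfMon, here is a minimal sketch. Batch Requests/sec is exposed in sys.dm_os_performance_counters as a cumulative value, so the rate is the difference between two readings divided by the sample interval:

[cc lang="sql"]
-- Sample Batch Requests/sec over a ten-second window
DECLARE @first bigint, @second bigint;

SELECT @first = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec'
  AND object_name LIKE '%SQL Statistics%';

WAITFOR DELAY '00:00:10';

SELECT @second = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec'
  AND object_name LIKE '%SQL Statistics%';

SELECT (@second - @first) / 10 AS BatchRequestsPerSec;
[/cc]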

Understanding SQL Server Deadlocks

In order to fix deadlocks, it's critical to understand why they occur. I've gotten some push back from readers when I write articles that do not give cookie-cutter solutions, but the "teach a man to fish" adage couldn't be more true for deadlocks. All of it revolves around locking, which is a central piece of SQL Server's default transaction isolation level of read committed. Now what is read committed? Some people are critical of this isolation level because with a different isolation level, like snapshot, deadlocks are much less likely to occur, since each transaction utilizes its own version of what a table looks like at the time the transaction begins. There are trade-offs with snapshot, however. Snapshot isolation is much more disk intensive because these "snapshots" need to be persisted in a temporary data store. So if you are ever considering enabling snapshot isolation as your default isolation level for SQL Server, you will want to make sure you have beefy RAID 10 tempdb storage. With read committed we have less tempdb data swapping, but more locking. There are many different kinds of locks and many different objects that can get locked, and each type of lock may or may not be compatible with other types. Locks are acquired so that users have a consistent view of what the data looks like at the time they are querying it. Imagine user 1 begins updating a table to inactivate all the records while […] Continue reading ...
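The excerpt cuts off mid-scenario, but the classic pattern it is building toward can be sketched as follows (table and column names here are hypothetical, purely for illustration): two sessions each hold a lock the other needs, acquired in opposite order.

[cc lang="sql"]
-- Session 1
BEGIN TRAN;
UPDATE dbo.TableA SET Active = 0 WHERE ID = 1;  -- X lock on a TableA row

-- ...meanwhile Session 2 runs:
--     BEGIN TRAN;
--     UPDATE dbo.TableB SET Active = 0 WHERE ID = 1;  -- X lock on a TableB row

UPDATE dbo.TableB SET Active = 0 WHERE ID = 1;  -- blocks, waiting on Session 2

-- ...Session 2 then runs:
--     UPDATE dbo.TableA SET Active = 0 WHERE ID = 1; -- blocks, waiting on Session 1

-- Neither session can proceed, so SQL Server picks one as the deadlock
-- victim and rolls it back with error 1205; the other continues.
COMMIT;
[/cc]

The usual cure follows from the picture: touch shared objects in a consistent order and keep transactions short, so the circular wait never forms.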
