I imagine you are just looking for the simple syntax to reseed the identity column of a table you just deleted from. Here is the quick version:

[cc lang="sql"]
DBCC CHECKIDENT('##reseed_example', RESEED, @max_seed)
[/cc]

And here is an extended example:

[cc lang="sql"]
-- populate a table with an identity column
SELECT ID = IDENTITY(int,1,1)
      ,name
INTO ##reseed_example
FROM dbo.sysobjects

-- delete some records
DELETE FROM ##reseed_example
WHERE ID > 5

-- find the current max identity
DECLARE @max_seed int = ISNULL((SELECT MAX(ID) FROM ##reseed_example), 0)

-- use the current max as the seed
DBCC CHECKIDENT('##reseed_example', RESEED, @max_seed)

-- let's test
INSERT INTO ##reseed_example ( name )
SELECT 'newobject'

-- done
SELECT * FROM ##reseed_example
[/cc]

It should be noted that in order to use DBCC CHECKIDENT you need to be dbo (db_owner). This sometimes presents an issue, because often the reason the user is performing a delete instead of a truncate is that they do not have dbo rights. To overcome this, you will need to create a procedure that uses WITH EXECUTE AS 'dbo' and reseed from there.
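A minimal sketch of such a wrapper, assuming a hypothetical procedure name and parameters (dynamic SQL is used only so the table name can be passed in as an argument):

[cc lang="sql"]
-- hypothetical wrapper; the procedure name and parameters are illustrative
CREATE PROCEDURE dbo.usp_reseed_table
    @table sysname,
    @new_seed int
WITH EXECUTE AS 'dbo'
AS
BEGIN
    -- build the DBCC call as a string so the caller can supply the table name
    DECLARE @sql nvarchar(max) =
        N'DBCC CHECKIDENT(''' + @table + N''', RESEED, '
        + CAST(@new_seed AS nvarchar(20)) + N');';
    EXEC (@sql);
END
GO

-- callers only need EXECUTE permission on the procedure, not db_owner
EXEC dbo.usp_reseed_table @table = 'dbo.my_table', @new_seed = 5;
[/cc]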
My first introduction to the APPLY operator was using the DMVs. For quite a while after first being introduced, I didn't understand it or see a use for it. While it is undeniable that it has some required uses when dealing with table-valued functions, its other uses evaded me for a while. Luckily, I started seeing some code that used it outside of table-valued functions. It finally struck me that it could be used as a replacement for correlated subqueries and derived tables. That's what we'll discuss today. I never liked correlated subqueries because it always seemed like adding full-blown queries to the select list was confusing and improper.

[cc lang="sql"]
SELECT SalesOrderID = soh.SalesOrderID
      ,OrderDate = soh.OrderDate
      ,MaxUnitPrice = (SELECT MAX(sod.UnitPrice)
                       FROM Sales.SalesOrderDetail sod
                       WHERE soh.SalesOrderID = sod.SalesOrderID)
FROM AdventureWorks.Sales.SalesOrderHeader AS soh
[/cc]

It always seemed to me that these operations should go below the FROM clause. So to get around this, I would typically create a derived table, which didn't completely feel right either, but it was still just a bit cleaner:

[cc lang="sql"]
SELECT soh.SalesOrderID
      ,soh.OrderDate
      ,sod.max_unit_price
FROM AdventureWorks.Sales.SalesOrderHeader AS soh
JOIN (
    SELECT max_unit_price = MAX(sod.UnitPrice), SalesOrderID
    FROM Sales.SalesOrderDetail AS sod
    GROUP BY sod.SalesOrderID
) sod ON sod.SalesOrderID = soh.SalesOrderID
[/cc]

What made this ugly was the need to use the GROUP BY clause because we could not correlate. Also, even though SQL almost always generates the same execution plan as a correlated subquery, there were times when the logic […]
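For reference, a rough sketch of the same query rewritten with CROSS APPLY (the full rewrite is in the post itself; the alias name here is an assumption) looks like this:

[cc lang="sql"]
-- the applied subquery can correlate to the outer row, so no GROUP BY is needed
SELECT soh.SalesOrderID
      ,soh.OrderDate
      ,ca.max_unit_price
FROM AdventureWorks.Sales.SalesOrderHeader AS soh
CROSS APPLY (
    SELECT max_unit_price = MAX(sod.UnitPrice)
    FROM Sales.SalesOrderDetail AS sod
    WHERE sod.SalesOrderID = soh.SalesOrderID
) ca
[/cc]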
To explain what an indexed view is, let's first look at what constitutes a view. A view may sound like a fancy, elaborate thing; however, all it is is a saved SELECT statement, nothing else. It is not explicitly compiled, nor does it contain any data. When you select from a view, it goes to the underlying tables and retrieves the data at the time it is called. An indexed view, on the other hand, is a normal view that takes a copy of the underlying data it points to and stores it locally. This way, the joins and aggregations that are processed at run-time in a normal view are already materialized, so when queried, it's as fast as querying a normal table. Therefore, another name for an indexed view is "materialized view", which is what it's called in Oracle.

Creating

To make a normal view an indexed view, you need to do two things. First you need to enable SCHEMABINDING for the view. Schemabinding essentially locks the underlying DDL schemas for the tables that the view references. This prevents any DDL changes from being made to the referenced tables; if you want to make a change to the tables, you need to drop the view first. Let's create this new view using the AdventureWorks database:

[cc lang="sql"]
CREATE VIEW Sales.OrderTotals
WITH SCHEMABINDING
AS
SELECT SalesOrderID = soh.SalesOrderID,
       OrderTotal = SUM(sod.UnitPrice),
       OrderCount = COUNT_BIG(*)
FROM Sales.SalesOrderHeader soh
JOIN Sales.SalesOrderDetail sod ON sod.SalesOrderID = soh.SalesOrderID
GROUP BY soh.SalesOrderID
[/cc]

Okay, we're half […]
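The second step, which the excerpt cuts off, is to create a unique clustered index on the view; that index is what actually materializes the result set. A minimal sketch, with an assumed index name:

[cc lang="sql"]
-- the index name is an assumption; the unique clustered index stores the
-- view's aggregated rows on disk so queries no longer hit the base tables
CREATE UNIQUE CLUSTERED INDEX IX_OrderTotals_SalesOrderID
ON Sales.OrderTotals (SalesOrderID);
[/cc]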
The ability to create and update statistics is a crucial liberty for the query optimizer. The optimizer needs to know how it is going to query the underlying tables, and when the statistics do not match the data, it is very probable that it will choose an inefficient method for querying. I've seen and heard a lot of debate from DBAs who think they should turn this off. Of all those DBAs, I think there is one who was correct, and he ended up convincing me of his scenario, so I'm not as closed-minded as I was before. His server was so bottlenecked during peak time that he updated the stats during off hours. I'm very wary of those who blindly say "it's better to turn it off" without any factual statistics to back them up. Most of the DBAs who think they should turn it off are stuck in the SQL Server 7.0 days. At that time, turning this feature off was acceptable because the automatic updates interfered with the currently running queries.

Example

Case in point: we had a DBA who did turn this feature off in an environment that mirrored production. I overheard the developers complaining how their queries took twice as long on this box. Looking at perfmon, I saw no physical bottlenecks. Since this was the case, I turned to statistics and found both auto update and auto create turned off. After turning it back on, the box was just as fast as production. Long story short, these […]
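As a quick sketch of that check-and-fix, assuming AdventureWorks purely as an example database name:

[cc lang="sql"]
-- check whether auto create / auto update statistics are enabled
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases
WHERE name = 'AdventureWorks';

-- turn both options back on
ALTER DATABASE AdventureWorks SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;
[/cc]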
When DBCC UPDATEUSAGE is run, page and row counts are corrected. When this information is incorrect, it can give inaccurate reports of how much space is used by tables and indexes. The most likely cause of incorrect usage stats is when a database is upgraded to a new compatibility level of SQL Server (e.g. 2000 to 2005). Therefore it's important to run this command after the database has been upgraded. It may not be a bad idea to run it on a weekly or monthly schedule if your database undergoes frequent DDL modifications. To execute, you can run one of the following commands:

[cc lang="sql"]
USE MyDB;
GO
DBCC UPDATEUSAGE(0);    -- Execute for the current database
GO
DBCC UPDATEUSAGE(MyDB); -- Execute using the database name
GO
DBCC UPDATEUSAGE(5);    -- Execute using the database ID (5 is just an example)
GO
[/cc]

This may take some time, so make sure to run it in a non-production environment or during off hours, as it may cause blocking. You can also run it for a specific table or index. If you run it for a table, then the usage stats are updated for all indexes on that table.

[cc lang="sql"]
-- Update for a table (and its indexes)
DBCC UPDATEUSAGE(AdventureWorks, 'Sales.SalesOrderDetail');

-- Update usage for a single index
DBCC UPDATEUSAGE(AdventureWorks, 'Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID');
[/cc]

That's basically it. Now you may be running these commands and not seeing any output. Well, that's because the usage stats are already correct. But if you want to spoof the stats in order to see the […]
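One quick way to see the counts these commands correct (using the same AdventureWorks table as above) is sp_spaceused, which reports from the same page and row count metadata, so it makes a handy before/after check:

[cc lang="sql"]
-- report the space and row counts recorded for the table
USE AdventureWorks;
GO
EXEC sp_spaceused 'Sales.SalesOrderDetail';
[/cc]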