Yet Another Temp Tables Vs Table Variables Article

The debate over whether to use temp tables or table variables is an old one, going back to when table variables were first introduced. The real answer lies in understanding what is going on under the hood and correlating those specifics to your situation. In this article we'll touch on (hopefully all of) the differences between the two.

Throughout the rest of this article, #temp tables are prefixed with the pound sign (#) and @table variables with the "at" symbol (@).

#Temp tables are just regular SQL tables that are defined and stored in TempDB. One notable restriction compared to a permanent table is that they are not allowed to have foreign key constraints. You can view the temp tables currently defined via SSMS by going to TempDB and expanding Temporary Tables.
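If you prefer a query over clicking through the SSMS tree, here's a minimal sketch that lists the temp tables currently defined. (The LIKE filter is my assumption; local temp table names are stored with a generated suffix, but they all start with #.)

[cc lang="sql"]
-- List the temp tables currently defined in TempDB
SELECT name, create_date
FROM tempdb.sys.tables
WHERE name LIKE '#%'
ORDER BY create_date DESC;
[/cc]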

Let’s look at a matrix of specific differences then we’ll touch on a few of the key differences below.

Item                                   #Temp Tables   @Table Variables
Can participate in a transaction       Yes            No
Writes to log file                     Yes            Yes
Writes only to memory (not disk)       No             No
Can qualify for parallelism            Yes            No
Allows creation of statistics          Yes            No
Does not affect recompilation          No             Yes
Allows nonclustered indexes            Yes            Yes (via constraint)
Allows clustered indexes               Yes            Yes (via constraint)
Can perform SELECT INTO                Yes            No
Can access in nested stored procedures Yes            No
Can define globally                    Yes            No
Can use in user defined functions      No             Yes
Can insert from EXEC                   Yes            Yes
Allows TRUNCATE                        Yes            No
Allows ALTER TABLE                     Yes            No
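To see the first row of the matrix in action, here's a minimal sketch (table and column names are mine). After a ROLLBACK, the #temp table loses its rows while the @table variable keeps them, because table variables do not participate in user transactions:

[cc lang="sql"]
-- #temp tables honor transactions; @table variables do not
CREATE TABLE #t (id INT);
DECLARE @t TABLE (id INT);

BEGIN TRAN;
INSERT INTO #t VALUES (1);
INSERT INTO @t VALUES (1);
ROLLBACK;

SELECT COUNT(*) AS TempRows FROM #t; -- 0: the insert was rolled back
SELECT COUNT(*) AS VarRows  FROM @t; -- 1: table variables ignore ROLLBACK

DROP TABLE #t;
[/cc]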

Scope

#Temp tables can be either #locally defined (within the scope of the session) or ##globally defined (visible to all sessions). When they are locally defined, the table can be used by whatever is executed within that session (or SPID). In other words, if you define a #temp table in procedure A, and procedure A then calls procedure B, procedure B will also be able to access that same temp table. This, of course, needs to happen within the same SPID. Once the session is terminated, the #temp tables assigned to that session are automatically deallocated.
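Here's a minimal sketch of that nesting behavior (the procedure names are hypothetical):

[cc lang="sql"]
-- Procedure B reads a #temp table created by its caller
CREATE PROCEDURE dbo.ProcB AS
    SELECT * FROM #Results; -- resolved at runtime, within the same SPID
GO

CREATE PROCEDURE dbo.ProcA AS
    CREATE TABLE #Results (id INT);
    INSERT INTO #Results VALUES (1);
    EXEC dbo.ProcB; -- sees #Results because it shares the session
GO

EXEC dbo.ProcA;
[/cc]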

Globally defined ##temp tables are created by prefixing the table name with two pound signs (##). As the name indicates, they are available to all sessions. Global ##temp tables are deallocated when the originating session terminates and all locks have been released.
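For example (the table name is mine):

[cc lang="sql"]
-- Session 1: create a global temp table
CREATE TABLE ##GlobalResults (id INT);
INSERT INTO ##GlobalResults VALUES (1);

-- Session 2 (a different SPID) can read it directly,
-- as long as session 1 is still alive
SELECT * FROM ##GlobalResults;
[/cc]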

@Table variables, on the other hand, can only be defined locally, and their scope is limited to the procedure or batch they are declared within. They cannot be referenced by other procedures from within the same session.
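A quick sketch of that batch-level scoping:

[cc lang="sql"]
DECLARE @Results TABLE (id INT);
INSERT INTO @Results VALUES (1);
SELECT * FROM @Results; -- fine: same batch
GO

SELECT * FROM @Results; -- fails: "Must declare the table variable @Results"
[/cc]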

SELECT..INTO Capability

This advantage #temp tables have over @table variables is not often spoken about, but in my experience it's a big one. Let me paint a picture. Say you need to create a @table variable to accept the filtered results of one of your permanent tables, your Employees table. You end up creating 10 procedures, and in each of them you explicitly define the columns to match the data types of the Employees table. All is going fine until one day HR says they need the firstname field to accept 50 characters instead of 25. Even worse, a new developer on your team doesn't know that you have the firstname field explicitly defined in @table variables, and goes ahead and changes the firstname column to 50 characters in Employees. Now all of your table variables throw errors because the longer firstnames won't fit. This is where the advantage of the #temp table comes in: since you can capture your results using SELECT..INTO, the data types are determined at runtime.

[cc lang="sql"]
-- Example of SELECT..INTO: #sod inherits its column
-- definitions from the source table at runtime
SELECT TOP 10
    [SalesOrderID],
    [SalesOrderDetailID]
INTO #sod
FROM [Sales].[SalesOrderDetail]
[/cc]

Performance

In my experience, taking an average across all the queries I've seen use #temp tables vs @table variables, the #temp tables have come out on top. The reason is that the query optimizer will sometimes generate poor plans for @table variables, mostly when there is a lot of data. For this reason, a general rule of thumb I've seen in many places in the SQL community is: if there are fewer than 1,000 rows, use a @table variable; otherwise, use a #temp table. While that may be a good rule of thumb, it certainly doesn't apply to every situation. I have also seen some very strange scanning happen on @table variables. The biggest point I can make is that @table variables are more likely to cause unpredictable execution plans than #temp tables. #Temp tables, on the other hand, will cause more recompilation.
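One workaround worth knowing: because a @table variable carries no statistics, the optimizer typically builds the plan around a very small row estimate regardless of what the variable actually holds. Adding OPTION (RECOMPILE) to the statement lets the optimizer see the real row count at execution time. A minimal sketch, assuming the AdventureWorks tables used elsewhere in this article:

[cc lang="sql"]
DECLARE @sod TABLE ([SalesOrderID] INT, [SalesOrderDetailID] INT);

INSERT INTO @sod ([SalesOrderID], [SalesOrderDetailID])
SELECT [SalesOrderID], [SalesOrderDetailID]
FROM [Sales].[SalesOrderDetail];

-- Without the hint, the join is costed around a tiny row estimate;
-- OPTION (RECOMPILE) compiles with the actual row count visible
SELECT s.[SalesOrderID]
FROM @sod s
JOIN [Sales].[SalesOrderHeader] h ON h.[SalesOrderID] = s.[SalesOrderID]
OPTION (RECOMPILE);
[/cc]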

@Table Variables Do Not Write to Disk – Myth

There's a common misconception that @table variables do not write to disk. I can dispel that myth with the following example. Below, we will watch the physical disk performance object using the counter "Disk Writes/sec". By watching this object, we can be sure that the results of this graph represent actual writes to disk. This is not memory IO; this counter can only represent actual disk writes.

[cc lang="sql"]
/********************************************
* TEST 1: @Table Variable insertion
*********************************************/
DECLARE @SalesOrderDetail TABLE (
    [SalesOrderID] [int],
    [SalesOrderDetailID] [int]
)

INSERT INTO @SalesOrderDetail
(
    [SalesOrderID],
    [SalesOrderDetailID]
)
SELECT TOP 10
    [SalesOrderID],
    [SalesOrderDetailID]
FROM [Sales].[SalesOrderDetail]

DELETE @SalesOrderDetail

GO 10000 -- loop the above batch 10k times

/********************************************
* TEST 2: #Temp table insertion
*********************************************/
SELECT TOP 10
    [SalesOrderID],
    [SalesOrderDetailID]
INTO #sod
FROM [Sales].[SalesOrderDetail]

DROP TABLE #sod

GO 10000 -- loop the above batch 10k times
[/cc]

This example used the AdventureWorks2012 database. The graph below shows the disk writes for both executions. Both executions took 10 seconds, and both wrote the same amount to disk. This also dispels the myth that @table variables only write to disk once the row count goes over a certain threshold.

Itzik Ben-Gan also gives an example of how @table variables write to disk in chapter 2 of his book T-SQL Programming.

Parallelism

The example below shows @table variables not qualifying for parallelism. This supports the argument that @table variables are better suited for small amounts of data. While parallelism can be a symptom of a poorly written query, overall it provides a performance gain and definitely has good uses in large data transfers. This is another +1 for the #temp table.

[cc lang="sql"]
/********************************************
* TEST 1: @Table Variable Parallelism
*********************************************/
DECLARE @SalesOrderDetail TABLE (
    [SalesOrderID] [int],
    [SalesOrderDetailID] [int]
)

INSERT INTO @SalesOrderDetail
(
    [SalesOrderID],
    [SalesOrderDetailID]
)
SELECT
    sod.[SalesOrderID],
    sod.[SalesOrderDetailID]
FROM [Sales].[SalesOrderDetail] sod
CROSS JOIN (SELECT TOP 50 * FROM [Sales].[SalesOrderHeader]) t
OPTION (MAXDOP 5)

GO

/********************************************
* TEST 2: #Temp Table Parallelism
*********************************************/
CREATE TABLE #SalesOrderDetail (
    [SalesOrderID] [int],
    [SalesOrderDetailID] [int]
)

INSERT INTO #SalesOrderDetail
(
    [SalesOrderID],
    [SalesOrderDetailID]
)
SELECT
    sod.[SalesOrderID],
    sod.[SalesOrderDetailID]
FROM [Sales].[SalesOrderDetail] sod
CROSS JOIN (SELECT TOP 50 * FROM [Sales].[SalesOrderHeader]) t
OPTION (MAXDOP 5)
[/cc]

And here's the execution plan: the #temp table insert qualifies for a parallel plan, while the @table variable insert stays serial.

Recompilation

This is a definite advantage for the @table variable. When coding for sub-second throughput, you can definitely be hindered by recompilation that occurs because the dataset size used to initially compile the statement has changed enough to warrant a recompile. I have already written a full article on this subject that walks you through an example of the recompilation.
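If you need the #temp table anyway but want to dampen those data-change recompiles, the OPTION (KEEP PLAN) hint relaxes the recompile threshold for temp tables. A minimal sketch, reusing the AdventureWorks table from the earlier examples:

[cc lang="sql"]
CREATE TABLE #sod ([SalesOrderID] INT, [SalesOrderDetailID] INT);

INSERT INTO #sod ([SalesOrderID], [SalesOrderDetailID])
SELECT [SalesOrderID], [SalesOrderDetailID]
FROM [Sales].[SalesOrderDetail];

-- KEEP PLAN raises the recompile threshold for #temp tables so
-- row-count churn is less likely to trigger a statement recompile
SELECT COUNT(*)
FROM #sod
OPTION (KEEP PLAN);

DROP TABLE #sod;
[/cc]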

Summary

Hopefully this provides enough information to make a decision on what to use. Long story short: when you know you're dealing with small result sets and you know your source data definitions are not going to change, use @table variables. Otherwise, I would go with #temp tables.

13 comments
Michael-Rainabba Richardson 03 Oct 2014 at 8:24 pm

"They cannot be referenced by other procedures from within the same session." Perhaps not in the strictest sense, but table variables can be passed like any other parameter, so ProcA can call ProcB, passing the variable, and then ProcB will have access.

Great article.

Ankush Jetly 11 May 2014 at 10:09 am

nice explanation!!!!!!

Ankur 18 Feb 2014 at 1:55 am

If a @table variable writes to the log file, why can't it be rolled back?

Diego García 06 Dec 2013 at 8:28 pm

Great stuff, thanks! This is pretty much what I was looking for.

Bijan 19 Apr 2013 at 9:40 am

Excellent article, clear and to the point!

Carie 16 Apr 2013 at 11:23 pm

Great article, thank you for taking the time to write it. For clarity, can you explain how you are defining a “small result set”? thanks.

Derek Dieter 12 May 2013 at 11:09 pm

Thanks! A small result set traditionally would be 1,000 rows, but it could probably be up to 10K rows.

Cary 18 Mar 2013 at 8:27 pm

Nice, clear, simple post. Well done.

Martin Smith 29 Dec 2012 at 11:52 am

And regarding the part "This also dispels the myth that @table vars only write to disk if the threshold goes over a certain number of records": I did some testing on that here http://dba.stackexchange.com/q/16385/3690 and found that in that specific test, 250 pages seems to be the cutoff point before the data file gets written to. This was the same for #temp tables. I'm not claiming this is always the case, but I've run that particular test on a couple of instances so far with the same result.

Derek Dieter 13 May 2013 at 8:33 pm

Great stuff Martin!

Martin Smith 29 Dec 2012 at 11:44 am

Table variables do write to the log file, and they can have nonclustered indexes if associated with a `NONCLUSTERED UNIQUE` or `NONCLUSTERED PRIMARY KEY` constraint.

Majid Amin 21 Feb 2016 at 6:16 am

I think table variables don't write to the log file; also, we can't roll back a transaction on a table variable.
