Temporary Tables in SQL Server

Temporary tables are used by every DB developer, but few are adventurous in their use, or exploit all their advantages. They can improve your code's performance and maintainability, but can be a source of grief to both developer and DBA if things go wrong and a process grinds away inexorably slowly. We asked Phil for advice, thinking that it would be a simple explanation.

Temporary tables are just that. They are used most often to provide workspace for the intermediate results when processing data within a batch or procedure. They are also used to pass a table from a table-valued function, to pass table-based data between stored procedures or, more recently in the form of table-valued parameters, to send whole read-only tables from applications to SQL Server routines, or to pass read-only temporary tables as parameters. Once you have finished with them, they are discarded automatically.

Temporary tables come in different flavours including, amongst others, local temporary tables (starting with #), global temporary tables (starting with ##), persistent temporary tables (prefixed by TempDB..), and table variables (starting with @).

Before we get too deep into the technology, I’d advise that you should use table variables where possible. They’re easy, and SQL Server does the work for you. They also tend to cause fewer problems to a hard-working OLTP system. Just occasionally, you may need to fine-tune them to get good performance from them in joins, but I’ll explain that in a moment. However, if you are doing more complex processing on temporary data, or are likely to put more than reasonably small amounts of data in them, then local temporary tables are likely to be a better choice.

Table Variables

Table variables are used within the scope of the routine or batch within which they are defined, and were originally created to make table-valued functions possible. However, they are good for many of the uses that the traditional temporary table was put to. They behave like other variables in their scoping rules. Once out of scope, they are disposed of. They are much easier to work with, and pretty secure, and they trigger fewer recompiles in the routines where they’re used than temporary tables do. Table variables require fewer locking resources as they are ‘private’ to the process that created them. Transaction rollbacks do not affect them, because table variables have limited scope and are not part of the persistent database, so they are handy for creating or storing data that ought to survive rollbacks, such as log entries. The downside of table variables is that they are often disposed of before you can investigate their contents for debugging, or use them to try out different SQL expressions interactively.
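This rollback behaviour is easy to demonstrate. Here is a minimal sketch (the table and column names are invented for illustration):

```sql
-- A table variable keeps its rows even when the enclosing
-- transaction is rolled back, making it useful for audit trails.
DECLARE @Log TABLE
  (InsertionDate datetime DEFAULT GETDATE(),
   Comment varchar(100));

BEGIN TRANSACTION
INSERT INTO @Log (Comment) VALUES ('About to attempt the update');
-- ...some work that goes wrong...
ROLLBACK TRANSACTION

SELECT Comment FROM @Log; -- the logged row survives the rollback
```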

If your application is conservative and your data volumes light, you’ll never want anything else. However, you can hit problems. One difficulty is that table variables can only be referenced in their local scope, so you cannot process them using dynamic SQL as you might with a temporary table or table-valued parameter. You can’t refer to an externally-defined table variable within dynamic SQL that you then execute via the EXEC statement or the sp_ExecuteSQL stored procedure, because the dynamic SQL is executed outside the scope of the table variable. You can, of course, create, and then use, the table variable inside the dynamic SQL, because the table variable would then be in scope. However, once the dynamic SQL has run, the table variable is gone.
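A quick sketch of the scoping problem (the names are invented):

```sql
DECLARE @MyTable TABLE (TheValue int);
INSERT INTO @MyTable (TheValue) VALUES (1);

-- This would fail with 'Must declare the table variable "@MyTable"',
-- because the dynamic batch runs outside the variable's scope:
-- EXEC ('SELECT TheValue FROM @MyTable');

-- This works, because the variable is declared and used
-- within the dynamic batch itself...
EXEC ('DECLARE @MyTable TABLE (TheValue int);
       INSERT INTO @MyTable (TheValue) VALUES (1);
       SELECT TheValue FROM @MyTable;');
-- ...but once the EXEC completes, that table variable is gone.
```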

There are a few anomalies to be aware of too. You can’t, for example, change the table definition after the initial DECLARE statement. A table variable can’t be the destination of a SELECT INTO statement and, in SQL Server 2000, couldn’t be the target of an INSERT EXEC either (now fixed). You can’t call user-defined functions from CHECK constraints, DEFAULT values, or computed columns in a table variable. The only constraints that you’re allowed, beyond CHECK constraints, are PRIMARY KEY, UNIQUE KEY, and NULL / NOT NULL.
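To summarise what is permitted, here is a sketch of a table variable declaration using the full range of allowed constraints (the column names are invented):

```sql
DECLARE @Prices TABLE
  (ProductID int NOT NULL PRIMARY KEY,       -- imposes a clustered index
   ProductCode varchar(10) NOT NULL UNIQUE,  -- unique constraints are allowed
   Price money NOT NULL CHECK (Price >= 0),  -- CHECKs are fine, but can't call UDFs
   Added datetime NOT NULL DEFAULT GETDATE() -- DEFAULTs likewise
  );
INSERT INTO @Prices (ProductID, ProductCode, Price)
  VALUES (1, 'WIDGET', 2.50);
```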

The trickiest problems, though, come with increasing size of the tables, because you can’t declare an index explicitly on a table variable, and distribution statistics aren’t maintained on them. The Query Optimiser assumes that there is only one row in the table. You also cannot generate parallel query plans for a SQL expression that is modifying the table’s contents. To partially get around the index restriction, you can use constraints to do the same thing. Most essential is the PRIMARY KEY constraint, which allows you to impose a clustered index, but UNIQUE constraints are useful for performance too; the Query Optimiser will happily use them if they are around.

The biggest problem with table variables is that statistics aren’t maintained on the columns. This means that the Query Optimiser has to guess at the size and distribution of the data and, if it gets it wrong, you’re going to see poor performance on joins. If this happens, there is little you can do other than revert to using classic local temporary tables. One thing you can try is to add OPTION (RECOMPILE) to the statement that joins the table variable to other tables. By doing this, SQL Server can detect the number of rows at recompile time, because the rows will already have been populated. This doesn’t entirely solve the problem, since the optimiser still has no distribution statistics and can, usually where the distribution is skewed, produce a bad plan. In one demonstration I ran, the join was reduced in time by three quarters simply by adding OPTION (RECOMPILE).
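As a sketch of the sort of statement that benefits from the hint (dbo.Customers and dbo.Orders are hypothetical tables):

```sql
DECLARE @CustomerIDs TABLE (CustomerID int PRIMARY KEY);
INSERT INTO @CustomerIDs (CustomerID)
  SELECT CustomerID FROM dbo.Customers WHERE Region = 'North';

-- Without the hint, the optimiser assumes @CustomerIDs holds one row;
-- OPTION (RECOMPILE) lets it see the real row count at execution time.
SELECT o.OrderID, o.OrderDate
  FROM dbo.Orders o
  INNER JOIN @CustomerIDs c ON o.CustomerID = c.CustomerID
  OPTION (RECOMPILE);
```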

Now if you can make what goes into the tables unique, you can then use a primary key constraint on them. This allows the optimiser to use a clustered index seek instead of a table scan, and in my test the execution time became too rapid to measure.

Start with table variables, but drop back to using local temporary tables if you hit performance problems. Some people are bold enough to give advice in terms of the number of rows in a table, and I’ve seen 100 or 1000 offered as a maximum; but I’ve seen far larger table variables perform perfectly satisfactorily over time, and far smaller ones give trouble. However, in smaller tables, the trouble is less detectable!

Table-Valued Parameters

The Table-Valued Parameter (TVP) is a special type of table variable that extends its use. When table variables are passed as parameters, the table is materialized in the TempDB system database as a table variable and passed by reference, as a pointer to the table in TempDB.

Table-valued parameters have been used since SQL Server 2008 to send several rows of data to a Transact-SQL routine or to a batch via sp_ExecuteSQL. Their particular value to the programmer is that they can be used within TSQL code as well as in the client application, so they are good for sending client tables to the server. From TSQL, you can declare table-valued variables, insert data into them, and pass these variables as table-valued parameters to stored procedures and functions. Their more general usefulness is limited by the fact that they are passed as read-only: you can’t do UPDATE, DELETE, or INSERT statements on a table-valued parameter in the body of a routine.

To use them, you need to create a User-Defined Table Type that defines the table structure. Here is a simple example of their use in TSQL.
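Something along these lines (the dbo.Names type, and the ChooseAName procedure discussed in the comments below, are reconstructed here; the details are my assumptions):

```sql
-- Define a table type; it persists until explicitly dropped
CREATE TYPE dbo.Names AS TABLE (Name varchar(50) NOT NULL);
GO
CREATE PROCEDURE dbo.ChooseAName
  @CandidateNames dbo.Names READONLY  -- TVPs must be declared READONLY
AS
  -- copy the read-only TVP into a local table variable,
  -- then pick one name at random
  DECLARE @Candidates TABLE (Name varchar(50) NOT NULL);
  INSERT INTO @Candidates (Name)
    SELECT Name FROM @CandidateNames;
  SELECT TOP 1 Name FROM @Candidates ORDER BY NEWID();
GO
-- Declare a variable of the type, fill it, and pass it in
DECLARE @SomeNames dbo.Names;
INSERT INTO @SomeNames (Name)
  VALUES ('Anne'), ('Bob'), ('Carol'), ('Dave');
EXECUTE dbo.ChooseAName @SomeNames;
```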

As with table variables, the table-valued parameter ceases to exist once it is out of scope, but the type definition remains until it is explicitly dropped. Like table variables, they do not acquire locks when the data is being populated from a client, and statistics aren’t maintained on the columns of table-valued parameters. You cannot use a table-valued parameter as the target of a SELECT INTO or INSERT EXEC statement. A table-valued parameter can, however, be in the FROM clause of a SELECT INTO, or in the INSERT EXEC string or stored procedure.

The TVP solves the common problem of wanting to pass a local table variable to dynamic SQL that is then executed by sp_ExecuteSQL. It is poorly documented by Microsoft, so I’ll show you a worked example to get you started.
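A sketch of the technique (the type name is invented):

```sql
CREATE TYPE dbo.IntegerList AS TABLE (TheValue int NOT NULL);
GO
DECLARE @List dbo.IntegerList;
INSERT INTO @List (TheValue) VALUES (1), (2), (3);

-- The TVP is declared in the parameter list and passed
-- into the dynamic SQL, neatly side-stepping the scoping problem
EXECUTE sp_executesql
  N'SELECT SUM(TheValue) AS Total FROM @TheList;',
  N'@TheList dbo.IntegerList READONLY',
  @TheList = @List;
```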

Before we move on to describe the more traditional temporary tables and their use, we’ll need to delve into the place where temporary tables are held: TempDB.


Temporary tables and table variables are created in the TempDB database, which is really just another database with simple recovery: with TempDB, only sufficient ‘minimal’ logging is done to allow rollback, and the other ACID niceties. The special difference of TempDB is that any objects such as tables are cleared out on startup. Because TempDB always uses the simple recovery model, completed transactions are cleared from the log on the next TempDB checkpoint, and only the live transactions are retained. This all means that temporary tables behave like any other sort of base table in that they are logged, and stored, just like them. In practice, temporary tables are likely to remain cached in memory, but only if they are frequently used: the same as with a base table. TempDB operates a system called temporary object reuse, which will cache a portion of the temporary objects with the plan, if there is sufficient memory. This may account for the legend that temporary objects exist only in memory. The truth, as ever, is ‘it depends…’.

A lot of other things go on in TempDB: the database engine can use it for work tables for DBCC checks, for creating or rebuilding indexes, and for cursors, for example. Intermediate tables in queries described as ‘hashes’, ‘sorts’ and ‘spools’ are materialized in TempDB, along with those required for several ‘physical’ operations in executing SQL statements. It is also used as a version store for snapshot isolation, Multiple Active Result Sets (MARS), triggers and online index builds.

Because temporary tables are stored just like base tables, there are one or two things you need to be wary of. You must, for example, have CREATE TABLE permission in TempDB in order to create a normal table there. To save you the trouble, this is assigned by default to the DBO (database owner) role, but you may need to assign it explicitly for users who aren’t in the DBO role. All users have permission to create local or global temporary tables in TempDB, because it is assigned to them via the GUEST user security context.

The classic temporary table comes in two flavors: the global, or shareable, temporary table, prefixed by ‘##’, and the local temporary table, whose name is prefixed with ‘#’. Local temporary tables are less like normal tables than global temporary tables are: you cannot create views on them, or associate triggers with them. It is a bit tricky to work out which process, session or procedure created them; we’ll give you a bit of help with that later. Most importantly, they are more secure than a global temporary table, as only the owning process can see them.

Another oddity of the local temporary table (and the local temporary stored procedure) is that it has a different name in the metadata from the one you give it in your routine or batch. If the same routine is executed simultaneously by several processes, the Database Engine needs to be able to distinguish between the identically-named local temporary tables created by the different processes. It does this by appending a numeric suffix, left-padded with underscore characters, to each local temporary table name. Although you specify a short name such as #MyTempTable, what is actually stored in TempDB is made up of the table name specified in the CREATE TABLE statement and that suffix. Because of the suffix, local temporary table names must be 116 characters or less.

If you’re interested in seeing what is going on, you can view the tables in TempDB just the same way you would any other table. You can even use sp_help on temporary tables, but only if you invoke it from TempDB.
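For example (a sketch):

```sql
CREATE TABLE #MyTempTable (TheValue int);
-- sp_help must run in the context of TempDB to find the table
EXECUTE TempDB..sp_help '#MyTempTable';
```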

Or you can find them in the system views of TempDB without switching databases.
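For instance, with a query along these lines:

```sql
-- The catalog view shows the real, suffix-padded name of the table
SELECT name, create_date
  FROM TempDB.sys.tables
  WHERE name LIKE '#%';
```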

Or the Information Schema
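A query on the INFORMATION_SCHEMA views does the same job:

```sql
SELECT TABLE_NAME
  FROM TempDB.INFORMATION_SCHEMA.TABLES
  WHERE TABLE_NAME LIKE '#%';
```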

Even better, you can find out which process, and which user, is holding on to enormous temporary tables in TempDB and refusing to give up the space.
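One way of doing this, sketched here with the session-space DMVs (it requires VIEW SERVER STATE permission):

```sql
-- Which sessions, and which logins, have allocated the most
-- user-object pages (temporary tables and table variables) in TempDB?
SELECT su.session_id,
       es.login_name,
       su.user_objects_alloc_page_count AS PagesAllocated
  FROM sys.dm_db_session_space_usage su
  INNER JOIN sys.dm_exec_sessions es
          ON su.session_id = es.session_id
  WHERE su.user_objects_alloc_page_count > 0
  ORDER BY su.user_objects_alloc_page_count DESC;
```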

You cannot use user-defined datatypes in temporary tables unless the datatypes exist in TempDB; that is, unless they have been explicitly created there.
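For example (a sketch; MyDatabase stands in for your own database):

```sql
USE TempDB;
GO
-- create the type in TempDB first...
CREATE TYPE dbo.PostCode FROM varchar(8) NOT NULL;
GO
USE MyDatabase;
GO
-- ...and only then can a temporary table use it
CREATE TABLE #Addresses
  (Address varchar(100),
   Location PostCode);
```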

User Tables in TempDB

In normal use, you will create temporary tables, or table variables, without thinking too deeply about it. It is interesting, though, that TempDB is there for any sort of sandbox activity. You can create ordinary base tables, views, or anything else you want. You can create schemas, stored procedures and so on. You’re unlikely to want to do this, but it is certainly possible, since TempDB is just another database. I’ve just had to restart my development SQL Server after proving this to myself by installing AdventureWorks onto it.

This means that it is possible to create a base table in TempDB: a sort of ..er… temporary permanent table. Unlike the global temporary table, you’d have to do all your own housekeeping on it: you’re on your own. The same is true of routines. The advantage of doing this is that any processing you do uses TempDB’s simple recovery, so that if you fail to mop up, SQL Server acts as mother on the next startup: though that could be a very long time away.

The next stage is to have what I call a ‘persistent temporary’ table, in which the data itself is volatile when the server restarts, but the table itself persists. Probably the most common way to create a persistent temporary table is to recreate, on startup, a global temporary table. This can be done automatically when all databases are recovered and the “Recovery is completed” message is logged. Even though this is a ‘global temporary’, it isn’t deleted when all the connections using it have disappeared, because the process that creates it never disappears. Arguably, it is better to create this kind of work table in the database that uses it, though, if you are using full recovery, the temporary work will remain in the log. You can, of course, just create an ordinary table in TempDB. You can create these ‘persistent’ tables on startup by defining, in master, a stored procedure that creates the global temporary table.
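A minimal sketch of such a startup procedure (all the names are invented):

```sql
USE master;
GO
CREATE PROCEDURE dbo.CreateWorkTable
AS
  -- recreate the shared work table every time the server starts
  CREATE TABLE ##WorkTable
    (WorkTableID int IDENTITY(1, 1) PRIMARY KEY,
     Payload varchar(100));
GO
-- mark the procedure for automatic execution at startup
EXECUTE sp_procoption 'dbo.CreateWorkTable', 'startup', 'on';
```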

Why use this sort of hybrid table? There are, for example, a number of techniques for passing tables between procedures via ‘persistent’ tables in a multiprocess-safe way, so as to apply a series of processes to the data. These are referred to as process-keyed tables (see ‘How to Share Data Between Stored Procedures: Process-Keyed Tables’ by Erland Sommarskog). They will initially raise the eyebrows of any seasoned DBA, but they are an effective and safe solution to a perennial problem, when done properly.

As well as temporary tables, there are also a number of  table types that aren’t directly derived from base tables, such as ‘fake’ tables and derived tables: some of these are so fleeting that they are best thought of as ephemeral rather than temporary. The CTE uses ephemeral tables that are ‘inline’ or ‘derived’ and aren’t materialised. BOL refers to them as ‘temporary named result sets’. They exist only within the scope of the expression. In a CTE, they have the advantage over derived tables in that they can be accessed more than once.

Local Temporary Table

With local temporary tables (names that begin with #), what goes on under the hood is surprisingly similar to table variables. As with table variables, local temporary tables are private to the process that created them. They cannot, therefore, be used in views, and you cannot associate triggers with them.

They are handier than table variables if you like using SELECT INTO to create them, though I’m slightly wary about using SELECT INTO in a system that is likely to require modification; I’d much rather create my temporary tables explicitly, along with all the constraints that are needed.

As explained earlier, you cannot easily tell which session or procedure created these tables, because the full name stored in the sys.objects view in TempDB is the name from the CREATE TABLE statement plus the system-generated numeric suffix, and the same short name can be in use by several processes at once.

You get housekeeping with local temporary tables; they are automatically dropped when they go out of scope, unless explicitly dropped by using DROP TABLE. Their scope is more generous than a table variable’s, so you don’t have problems referencing them within batches or in dynamic SQL. Local temporary tables are dropped automatically at the end of the current session or procedure. Dropping at the end of the procedure that created it can cause head-scratching: a local temporary table that is created within a stored procedure is dropped when the procedure finishes, so it cannot be referenced by the process that called the stored procedure that created it. It can, however, be referenced by any nested stored procedures executed by the stored procedure that created the table. If a nested procedure references a temporary table, and two temporary tables with the same name exist at that time, which table is the query resolved against?
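The nesting behaviour is sketched here (the procedure names are invented):

```sql
CREATE PROCEDURE dbo.InnerProc
AS
  -- this resolves against the #Work table created by the caller
  SELECT COUNT(*) AS RowsSeen FROM #Work;
GO
CREATE PROCEDURE dbo.OuterProc
AS
  CREATE TABLE #Work (TheValue int);
  INSERT INTO #Work (TheValue) VALUES (1), (2);
  EXECUTE dbo.InnerProc;  -- nested procedures can see #Work
GO
EXECUTE dbo.OuterProc;
-- back here, #Work no longer exists
```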

As a curiosity, you can also create Local Temporary Stored Procedures with the same scope and lifetime as a local temporary table. You can’t do the same for other routines. 

Global Temporary Tables.

Like local temporary tables, global temporary tables (they begin with ##) are automatically dropped when the session that created the table ends. However, because global tables aren’t private to the process that created them, they must persist until the last Transact-SQL statement that was actively referencing the table when the creating session ended has finished executing, and its locks are dropped. Anyone who has access to TempDB while these global temporary tables exist can directly query, modify or drop them.

You can associate rules, defaults, and indexes with temporary tables, but you cannot create views on temporary tables or associate triggers with them. You can use a user-defined datatype when creating a temporary table only if the datatype exists in TempDB.

Stored procedures can reference temporary tables that are created during the current session. Within a stored procedure, you cannot create a temporary table, drop it, and then create a new temporary table with the same name.

Although this works….

…this doesn’t
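The two snippets would run along these lines (my sketch):

```sql
-- This works: in a batch, a temporary table can be
-- dropped and then recreated with the same name
CREATE TABLE #MyTemp (TheValue int);
DROP TABLE #MyTemp;
GO
CREATE TABLE #MyTemp (TheOtherValue varchar(10));
DROP TABLE #MyTemp;
GO
-- This doesn't: within a single stored procedure, the second
-- CREATE is rejected with 'There is already an object named...'
CREATE PROCEDURE dbo.DoomedToFail
AS
  CREATE TABLE #MyTemp (TheValue int);
  DROP TABLE #MyTemp;
  CREATE TABLE #MyTemp (TheOtherValue varchar(10));
GO
```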

You can, by using local temporary tables, unintentionally  force recompilation on the stored procedure every time it is used. This isn’t good because the stored procedure is unlikely to perform well. To avoid recompilation, avoid referring to a temporary table created in a calling or called stored procedure: If you can’t do so, then put the reference in a string that is then executed using the EXECUTE statement or sp_ExecuteSQL stored procedure. Also, make sure that the temporary table is created in the stored procedure or trigger before it is referenced and dropped after these references. Don’t create a temporary table within a control-of-flow statement such as IF... ELSE or WHILE.
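For instance, a procedure that must refer to a temporary table created by its caller could hide the reference inside dynamic SQL (a sketch; the names are invented):

```sql
CREATE PROCEDURE dbo.SummariseWork
AS
  -- a direct reference to #Work, created in the calling procedure,
  -- would trigger a recompile on each execution; pushing the
  -- reference into dynamic SQL avoids that
  EXECUTE sp_executesql N'SELECT COUNT(*) AS RowsSeen FROM #Work;';
GO
```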

You are allowed to create Global Temporary Stored Procedures, but I’ve yet to find a use for them. Global temporary functions aren’t permitted.


In any shared playground, be very careful how you swing that bat. You’ll have realized, whilst reading this, that a lot of activity goes on in TempDB, and that you can cause havoc to the whole SQL Server by using long-running processes that fill temporary tables, of whatever type, with unnecessary quantities of data. In fact, I’ve given you clues in this article on how to really, really upset your DBA by inconsiderate use of that precious shared resource, the TempDB. (In the old days before SQL Server 2005, using SELECT INTO with a huge table was the great V-weapon (Vergeltungswaffe).)

I’m always wary of providing over-generalized advice, but I always prefer my databases to use table variables and TVPs wherever possible. They require fewer resources, and you’re less likely to hold onto them when you’re finished with them. I like to use them to the max, with column and table checks and constraints. You may find times when they run out of steam, especially when table sizes get larger. In cases like this, or where it isn’t practical to use table variables because of their restricted scope, I’ll use local temporary tables. It takes a lot of pursed lips and shaking of heads before I’ll agree to a global temporary table or persistent temporary table. They have a few valid and perfectly reasonable uses, but they place reliance on the programmer to do the necessary housekeeping.

Always bear in mind that misuse of temporary tables, such as making them unnecessarily large or keeping them alive too long, can have effects on other processes, even on other databases on the server. You are, after all, using a shared resource, and you wouldn’t treat your bathroom that way, would you?



  • GrumpyOldDBA

    nice article
    Table variables can be a real performance bottleneck though when used in joins. I have a blog post coming up – probably next week illustrating exactly why a #temp table replacing a table variable can work wonders! The execution time for the query reduced from 10 secs to 1.3 secs after this change, and it was a simple query too!

  • Phil Factor

    Re: performance of table variables.
True, the optimizer assumes a cardinality of one for the number of rows, so the query plan can be all over the place. In this article, I explain some tricks for speeding them up. I agree that a recompile hint may not produce lightning-fast performance, due to the overhead of the recompile itself, but it can help greatly. My own experience is that a judicious UNIQUE constraint on the column used to do the join can work wonders, and seems to work in conjunction with the recompile hint.

  • Paul McMillan

    Be very careful when using Table Value Parameters and @Tables
They MAY be OK for small volumes but, because there are no statistics on them, your execution plan can become a bit dicey. We replaced all our @tables with physical tables or #tables because we found the query would ‘hang’ because the query plan engine thought there were zero rows in the table

  • Paul McMillan

    You can refer to temp tables in DYNAMIC SQL
    The following code will insert a row into #Temp..

    IF OBJECT_ID('tempdb.dbo.#Temp') IS NOT NULL
        DROP TABLE #Temp

    CREATE TABLE #Temp (ID INT)

    SELECT * FROM #Temp

    DECLARE @Message NVARCHAR(100)
    SET @Message = 'INSERT INTO #Temp VALUES (1)'
    EXEC (@Message)

    SELECT * FROM #Temp

  • Phil Factor

    Re: You can refer to temp tables in DYNAMIC SQL
    Indeed you can.
    In the article, I mention a different problem that can hit the unwary who are maybe doing recursive routines, and I should perhaps have given an example.

    CREATE TABLE #Temp (ID INT)
    EXEC ('CREATE TABLE #Temp (ID INT)
    Exec (''INSERT INTO #Temp VALUES (1)'')')
    SELECT * FROM #Temp

    or even

    CREATE TABLE #Temp (ID INT)
    EXEC ('CREATE TABLE #Temp (ID INT)
    INSERT INTO #Temp VALUES (1)')
    SELECT * FROM #Temp

    The answer is, of course, that any reference to a table, when two or more versions of the temporary table exists, refers to the innermost one.

  • vikrant.a.more

    Perfect and nice artical
    Thanks! Phil Factor a very knowledgeable.
    I have 1 Question
    Which is faster in execution and to Improve the Performance of a Query/API.
    1] Local Temporary Table (#),
    2] Global Temporary Table (##),
    3] Table Variable (@) or
    4] CTE ?

    • Phil Factor

      Re: Perfect and nice artical
      I usually do the development work on a local table (#) and get a set of timings using a good hunk of data. I then change to table variables (@), being careful to put in constraints on the rows I use to join.
      I then run the tests again to make sure performance is the same. It usually is. If I hit a performance problem, I’ll try option (recompile), on the assumption that it can’t generate a good query plan. If I still can’t get equivalent performance then I’ll go back to using a local temporary table (#).  Global tables have a special use, as do CTEs. Generally, I only use CTEs where the routine is inherently recursive or maybe when there are special needs to do more than one query on derived table(s). They tend to be overused, probably because they’re quicker to type in. You may be lucky and get a decent execution plan but then Database Devs don’t like to rely on luck.

  • Robert Matthew Cook

    What an amazingly great temporary tables post. Thank you sir!

  • Paul Hickey

    Avoid Named Constraints in Local Temp Tables
    If your multi-user app creates temp tables, make sure those table definitions don’t include named constraints, such as column defaults. While each process’s table has a unique name, the named constraints don’t, with the result that process B can’t create its temp table until process A has dropped its temp table, EVEN IF PROCESS A AND B ARE USING DIFFERENT DATABASES (on the same server).

  • vikrant.a.more

    Re: Perfect and nice artical
    Thank You Very Much!! Phil Factor 🙂

  • GrumpyOldDBA

    my blog post on table vars
    OK Phil -finally put the post up which explains my earlier comments.

  • byo

    Brilliant article!
    That is one huge informative article, sir. I can’t thank you enough for sharing so much information with us as you do, mr. Phil.

    One note only: in your ChooseAName procedure, you don’t need that intermediate @Candidates table variable. You can get what you want from your table-valued parameter:

    SELECT TOP(1) Name FROM @CandidateNames
    ORDER BY NewId();

    This is an article to be bookmarked for generations to come. Thank you. 🙂

    Best regards,

    Andre Guerreiro Neto (codebyo)
    MCITP/MCTS SQL Server 2008

  • meklembl2

    Excellent. Temp tables in 2005?
    Great article. And with any great article, the comments are excellent.

    Can you help with “Find out who created the temporary table,and when; the culprit and SPId” using a SQL2005 database?

    Larry Meklemburg

  • Phil Factor

    Re:. Temp tables in 2005?
    I tested this on 2005 and 2008, and I thought it worked fine. Have you hit a problem? It won’t work in 2000 (I should have pointed that out)

  • TheSQLGuru

    Absolutely Incorrect Advice
    Sorry Phil, but your advice to use table vars over real temp tables is completely contrary to proper guidance. There are optimization limitations (especially the lack of statistics) that very frequently lead to horrible query plans. A SINGLE ROW table variable can get you a suboptimal plan compared to the same query using a temp table, so the very common “use table vars if you have less than NNN rows” advice is flawed. I also don’t understand your statement about resource differences between the two.

    There are 2 reasons where table vars are appropriate and they are VERY infrequently found in the wild: 1) extremely high-call-frequency code where recompilations from temp table activity are a problem and 2) audit scenarios where you need to keep information after a transaction rollback has occurred.

    I will also take exception to the advice to use constraints, checks, indexes, etc. In 15+ years of relational engine consulting I have eliminated those things (especially indexes) from temp tables to improve performance at least an order of magnitude more frequently than I have added them.

    • Phil Factor

      Re: Absolutely Incorrect Advice

      As I say in the conclusion …

      ‘You may find times when they run out of steam, especially when table sizes get larger. In cases like this, or where it isn’t practical to use table variables because of their restricted scope, then I’ll use local temporary tables’

      I think that where we may disagree is that I advise that one should try table variables first, whereas you advise that they are so rarely useful that one should use them only for exceptional purposes.

  • linc

    Can anyone please explain below? Some examples will be greatly appreciated.

    “To avoid recompilation, avoid referring to a temporary table created in a calling or called stored procedure: If you can’t do so, then put the reference in a string that is then executed using the EXECUTE statement or sp_ExecuteSQL stored procedure. Also, make sure that the temporary table is created in the stored procedure or trigger before it is referenced and dropped after these references.”

  • GilaMonster

    Table variable performance
    It’s not just when the table variables get large. It’s any time that you’re depending on accurate costing to get a good execution plan. Basically any time you’re doing something more complex than

    SELECT * FROM @TableVariable

    Especially when joining to other tables.

    Also, while recompile will get a non-zero estimate, the optimiser still has no distribution statistics. All it knows is the total number of rows in the table variable (from the storage engine’s metadata). Unless the data is perfectly evenly distributed, the optimiser is still very likely to get the estimations wrong.

    I fear this article is just going to spawn more performance tuning work for me and other consultants and more performance-related questions on various forums.

  • Phil Factor

    Re: Question
    The best account that I know of is given here.

  • Peso

    As Gail is writing, table variables don’t support parallelism, except for a SELECT query.
    INSERT, DELETE and UPDATE statements for a table variable can’t use parallelism.

    Also, you can very well write

    INSERT @Sample


    INSERT @Sample
    EXEC dbo.uspMyFirstProcedure

  • GilaMonster

    Insert Exec with a table variable
    @Peso iirc that wasn’t allowed in SQL 2000. Another of those things that once were true but are no longer (but are still stated)

  • Phil Factor

    Re: Parallelism
    Thanks for those two corrections. If that’s Okay with you I’ll update the article. Gail is right in that it was a problem with SQL Server 2000, but was fixed in 2005 onwards. I’ve just tested it.

  • WayneSheffield

    Two same-named temp tables
    You stated “The answer is, of course, that any reference to a table, when two or more versions of the temporary table exists, refers to the innermost one.”

    At BOL (http://msdn.microsoft.com/en-us/library/ms174979.aspx) (about 1/2 way down, under the “Temporary Tables” topic), it states:
    “A local temporary table created within a stored procedure or trigger can have the same name as a temporary table that was created before the stored procedure or trigger is called. However, if a query references a temporary table and two temporary tables with the same name exist at that time, it is not defined which table the query is resolved against.”

    So, in practice you may have always seen it being applied to the innermost temp table, but it’s not guaranteed to be that way and you should not make that assumption.

  • TheSQLGuru

    2 Quick Reasons to Avoid Table Vars
    Several twitter users have asked for reasons to avoid table vars. Here are 2 quickies:

    a) they void the use of parallelism in modification queries. Reference: http://msdn.microsoft.com/en-us/library/ms175010(SQL.90).aspx

    b) as I said earlier lack of stats can cause HORRIBLE query plans – and it isn’t just based on total number of rows in query. Here is a reference from CSS (http://blogs.msdn.com/b/psssql/archive/2010/08/24/query-performance-and-table-variables.aspx) about lots-of-rows issues and here is a simple example of a single row plan:

    --proof that table variable can lead to bad plan with single row
    create table bigtest (a int not null, b char(500) not null)

    insert bigtest
    select 1, replicate(' ', 500)
    from kgbtools.dbo.bignumbers --100000 rows here

    insert bigtest
    select number, replicate(' ', 500)
    from kgbtools.dbo.bignumbers --9 sequential rows here
    where number between 2 and 10

    create index idx on bigtest (a)

    dbcc freeproccache
    set statistics IO on
    --show actual query execution plan
    --be sure to examine actual and estimated rows at various points in the plans
    declare @a table (a int not null)
    insert @a values (2)

    select *
    from bigtest b inner join @a a on b.a = a.a
    --table scan hash join plan, 6668 IO BAD PLAN!!!

    create table #a (a int not null)
    insert #a values (2)

    select *
    from bigtest b inner join #a a on b.a = a.a
    --index seek nested loop join plan, 3 IO GOOD PLAN!!!

  • Phil Factor

    Re: Two same-named temp tables
    Thanks for that. Yes, this phrase has always been in the CREATE TABLE documentation in BOL, which is why I mentioned the issue in the article. It’s always intrigued me. In the article, I left it as a question, though I rashly gave my opinion in the comment as to which it should be! It is a bug if it can’t determine the innermost reference, and I rather assumed that they’d finally fixed it, especially as recursion is now more commonly used. Under what circumstances does it still go wrong?

  • Phil Factor

    Re: 2 Quick Reasons to Avoid Table Vars

    I agree with you, and said in this article, that table variables don’t allow parallelism. I also agree with you, and said in the article, that one can get dreadful plans. I’m interested in what you’re saying, particularly as it is pretty radical, and I’m happy to be corrected if you are really right in saying that one should almost never use table variables, and to get the article amended accordingly. However, I’m not really sure that your example represents a real-life use of table variables.

    I suggest, therefore, that we take this discussion off these comment pages, as I don’t want to run the risk of bewildering or boring many readers of this article, and it really isn’t possible to paste code into comments to illustrate a point. Instead, let’s move to ASK.SQLServerCentral.com, where I’ve asked a question, Are Table Variables a Bad Thing, to continue the discussion.

  • Bryan Effect

    The article is really good; however, I am newish to T-SQL. After reading it, I wanted to learn from and analyze the code more closely. Some of the code that you used to write the WHILE loop is unfamiliar to me. Do you happen to know of a good website/article that explains WHILE loops in more depth? I have searched the internet but I can only find very basic, entry-level code for WHILE loops.

    SET NOCOUNT ON
    DECLARE @FirstTable TABLE (RandomInteger INT)
    DECLARE @SecondTable TABLE (RandomInteger INT)
    DECLARE @ii INT, @WhenWeStarted DATETIME

    SET @ii = 0
    WHILE @ii < 100000
      BEGIN
        INSERT INTO @FirstTable
        VALUES (RAND() * 10000)
        SET @ii = @ii + 1
      END

    SET @ii = 0
    WHILE @ii < 100000
      BEGIN
        INSERT INTO @SecondTable
        VALUES (RAND() * 10000)
        SET @ii = @ii + 1
      END

    SELECT @WhenWeStarted = GETDATE()
    SELECT COUNT(*)
    FROM @FirstTable first
    INNER JOIN @SecondTable second
    ON first.RandomInteger = second.RandomInteger OPTION (RECOMPILE)
    -- 153Ms as opposed to 653Ms without the hint
    SELECT 'That took '
    + CONVERT(VARCHAR(8), DATEDIFF(ms, @WhenWeStarted, GETDATE()))
    + ' ms'

  • Phil Factor

    question about While loops
    Please ask this question on http://www.ASK.SQLServerCentral.com and there will be plenty of people who will help

  • Navodita

    Question Regarding explicitly dropping temp tables
    Temp tables get dropped automatically at the end of the session. But if the temp tables are being dropped explicitly in the stored procedure, which of the two approaches below is recommended, and why?

    1: CREATE PROCEDURE dbo.Demo
    AS
    CREATE TABLE #Temp (col1 int NULL);
    SELECT ObjectID = OBJECT_ID(N'tempdb..#Temp');

    DROP TABLE #Temp;


    2: CREATE PROCEDURE dbo.Demo
    AS
    IF OBJECT_ID('tempdb..#Temp') IS NOT NULL
    DROP TABLE #Temp;

    CREATE TABLE #Temp (col1 int NULL);

    SELECT ObjectID = OBJECT_ID(N'tempdb..#Temp');

  • kevindockerty

    I have rarely found a lot of difference in performance between temporary tables and table variables. I always prefer temporary tables when dealing with larger tables, as I am always conscious of the memory requirements of table variables. That said, SQL Server will always try to use memory over disk.

    Interestingly, I have never had a noticeable performance gain by adding indexes to temporary tables. Any feedback on this appreciated.

  • Alexander Shelemin

    kevindockerty, if you have a rather big temp table that you later join to some other big (not necessarily temp) table, having an index on the join key in the temp table may well give you a big performance gain, just like an index on a regular table would in a similar situation.
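
    Alexander's suggestion can be sketched as follows; the table and column names are illustrative, not taken from the article:

    ```sql
    -- Sketch: index the join key of a large temp table before joining it
    -- to another big table. #Orders and dbo.Customers are hypothetical.
    CREATE TABLE #Orders (OrderID INT NOT NULL, CustomerID INT NOT NULL);
    -- ... populate #Orders with many rows ...

    -- With an index on the join key, the optimizer can use an index seek
    -- or merge strategy instead of scanning and hashing the whole table.
    CREATE CLUSTERED INDEX IX_Orders_CustomerID ON #Orders (CustomerID);

    SELECT c.CustomerID, o.OrderID
    FROM dbo.Customers c
    INNER JOIN #Orders o
    ON o.CustomerID = c.CustomerID;

    DROP TABLE #Orders;
    ```

    Whether the index pays off depends on the row counts involved; for small temp tables the scan is often cheap enough that the index build costs more than it saves, which may explain kevindockerty's experience.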