Dave Green

Ten Things I Wish I’d Known When I Started Using tSQLt and SQL Test

11 September 2012

The open-source unit test framework tSQLt is a great way of writing unit tests in the same language as the one being tested. In retrospect, after using tSQLt for a while, what are the 'gotchas': those things that you'd have been better off knowing about before you got started? David Green lists a few tips he wishes he'd read beforehand.

It’s been about a year now since I started using the tSQLt framework, together with SQL Test, the Red Gate add-in which integrates it into SQL Server Management Studio. With that in mind, it seemed an opportune moment to share what I’ve learnt, both in terms of writing tSQLt unit tests and integrating them into the development workflow.

When I first approached the tSQLt framework, it was through SQL Test, with a view to getting some tests working so that (predominantly database) developers could write unit tests without requiring a second language to code them in, and so that we could automate the tests to run as part of database continuous integration. The method used for the latter has changed very little, bar the introduction of templates and Red Gate’s TeamCity plugin, and I’ll touch shortly on the ways our use of the tests has evolved. First, however, some lessons I’ve learned on writing the tests themselves.

Writing Tests

Creating a new test class will remove an existing test class of the same name

Sometimes, you may want to script all the tests and setup procedures in a class into one file – in my case, this is to fit a database build script for one particular piece of software. By creating the tests with a call to tSQLt.NewTestClass at the top of the script, you know that any existing tests will be deleted and so all subsequent tests in that script can be ‘create’ rather than ‘alter’ statements. This means that the script can be executed whether or not the tests already exist. It also helps to concentrate the mind when saving tests out of your development database if you know that the next create will destroy your test class!

Note – tSQLt is clever enough to know which schemas are tSQLt test classes, and which aren’t. This means your non-test objects are not in danger of being deleted.

I’ve found this a real time saver when doing automated testing, if your tests are not stored in source control.
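As a sketch, the pattern looks like this (the class name [MyTests] and the test body are illustrative, not from a real project):

```sql
/* Re-creating the class drops any previous [MyTests] tests,
   so everything below can be a plain CREATE */
EXEC tSQLt.NewTestClass 'MyTests';
GO

CREATE PROCEDURE MyTests.[test something trivially true]
AS
BEGIN
    EXEC tSQLt.AssertEquals @Expected = 1, @Actual = 1;
END;
GO
```

Running the whole file is then idempotent: it can be executed against a database whether or not the tests already exist.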

Getting subsequent result sets

I found that occasionally I needed to write tests which interpreted not only the first result set in a series, but subsequent result sets too. Erland Sommarskog wrote a detailed text on the various ways of sharing data between stored procedures (tSQLt tests are themselves stored procedures like any other), but even with the help of that article I hadn’t managed to read subsequent result sets in tests in an elegant way. Recently, however, I discovered the tSQLt.ResultSetFilter procedure, which returns a specified result set from a stored procedure – at the cost of executing it once for each result set retrieved – and thus enables such stored procedures to be properly tested. When I need to test multiple result sets from one stored procedure, I usually write a number of different tests, each of which tests one of the result sets. With stored procedures that cause schema changes, though, this can be fraught, and you need to think through the implications of your test strategy.
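As a sketch (dbo.MyMultiResultProc and the column list are hypothetical), a test can capture just the second result set like this:

```sql
/* ResultSetFilter re-executes the command and returns only the
   numbered result set, which INSERT...EXEC can then capture */
CREATE TABLE #SecondResultSet (id INT, MyText VARCHAR(50));

INSERT #SecondResultSet (id, MyText)
EXEC tSQLt.ResultSetFilter 2, 'EXEC dbo.MyMultiResultProc';
```

Remember the cost mentioned above: the procedure under test is executed once for each result set retrieved this way.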

As an aside – I should have known about this, as it was detailed in the manual. But, like thousands of other people, I’d dived into the product with only a passing reference to the manual!

Checking the resultant table structure

In a similar vein of not reading the documentation properly, I had initially been under the impression that AssertEqualsTable compared the structure as well as the data between two result sets. Actually, in tSQLt this is split into two procedures. The data is compared by AssertEqualsTable, but if you just want to compare the schema you should use AssertResultSetsHaveSameMetaData. This second procedure is ideal for checking that the returned data is in the expected format (column names, datatypes, etc.). It lets you isolate data changes from format changes, and gives a more descriptive failure message for your tests, which helps diagnosis and resolution.
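A minimal sketch of such a format-only check (dbo.MyProc and the column list are illustrative):

```sql
/* Fails if the column names, order or datatypes differ,
   regardless of what data is actually returned */
EXEC tSQLt.AssertResultSetsHaveSameMetaData
    @expectedCommand = N'SELECT CAST(1 AS INT) AS id, CAST(''a'' AS VARCHAR(50)) AS MyText',
    @actualCommand   = N'EXEC dbo.MyProc';
```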

Faking Tables creates columns as nullable, even when they were non-nullable

Consider for a moment the below SQL which creates a table with a non-null column, and tries to populate it with two records, one containing a null:

IF OBJECT_ID('[Example].[MyTable]','Table') IS NOT NULL
    DROP TABLE [Example].[MyTable]
/* Create sample table (column definitions assumed: an id plus a non-nullable text column) */
CREATE TABLE [Example].[MyTable]
    ( id INT, MyText VARCHAR(50) NOT NULL )
/* Insert sample values */
INSERT  [Example].[MyTable]
        ( id, MyText )
VALUES  ( 1, 'Fred' ),
        ( 2, NULL )

As you would expect when you run this, you get an error:

Msg 515, Level 16, State 2, Line 3
Cannot insert the value NULL into column 'MyText', table 'tSQLt_Example.Example.MyTable'; column does not allow nulls. INSERT fails.

The statement has been terminated.

That’s as you would expect – null records can’t be inserted into a not null column.

However, what about if we do that as part of a tSQLt test?

/* Dave Green Aug 2012 */

/* Create test class */
EXEC tSQLt.NewTestClass @ClassName = N'Example'
GO

IF OBJECT_ID('[Example].[MyTable]','Table') IS NOT NULL
    DROP TABLE [Example].[MyTable]
/* Create sample table (column definitions assumed, as above) */
CREATE TABLE [Example].[MyTable]
    ( id INT, MyText VARCHAR(50) NOT NULL )
GO

/* Test to illustrate how null data can be put into non-nullable columns once the table has been faked */
CREATE PROCEDURE [Example].[test show nullability of columns in a faked table]
AS
BEGIN
    /* Take a copy of the (empty) table as the expected result */
    SELECT  *
    INTO    [Example].[Expected]
    FROM    [Example].[MyTable]

    EXEC tSQLt.FakeTable @TableName = N'[Example].[MyTable]', -- nvarchar(max)
        @Identity = 1, -- bit
        @ComputedColumns = 1, -- bit
        @Defaults = 1 -- bit

    INSERT  [Example].[MyTable]
            ( id, MyText )
    VALUES  ( 1, 'Fred' ),
            ( 2, NULL )

    EXEC tSQLt.AssertEqualsTable @Expected = N'[Example].[Expected]', -- nvarchar(max)
        @Actual = N'[Example].[MyTable]', -- nvarchar(max)
        @FailMsg = N'Tables do not match' -- nvarchar(max)
END
GO

EXEC tSQLt.RunTestClass @TestClassName = N'[Example]' -- nvarchar(max)

Well, we get the output:

[Example].[test show nullability of columns in a faked table] failed: Tables do not match
|>  |1 |Fred |
|>  |2 |!NULL!|

|Test Execution Summary|

|No|Test Case Name                                               |Result |Duration|
|1 |[Example].[test show nullability of columns in a faked table]|Failure|160     |
Msg 50000, Level 16, State 10, Line 1
Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored.

Now, the test execution summary aside (we expected the test to fail), it’s interesting that the null data was in fact inserted into the table Example.MyTable (hence the test reporting a row not present in the empty expected table). This is certainly something to watch out for when writing tests with faked tables: you can miss cases where a null should be generating an error, because the nullability check is suppressed on a faked table.

Although it is possible to use tSQLt.ApplyConstraint to reinstate named constraints on a faked table, this cannot be done for a column’s NOT NULL setting.

Expected and Actual tables can be created in the schema of the test class

This is somewhat obvious in hindsight, but since most test examples I came across when I was starting to use tSQLt used temporary tables for the expected and actual tables in comparisons with tSQLt.AssertEqualsTable, I followed that convention too. It was only once I read an article by Greg M Lucas, which pointed out that you could create the tables in any existing schema, including test schemas, and that they would be rolled back once the test ran anyway, that I realised this has three advantages:

  1. It keeps everything in a nice neat place: the test schema.
  2. It avoids using tempdb, with any load / performance implications of that.
  3. It avoids any potential collation issues if your tempdb is a different collation to the database under test.
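A minimal sketch of the pattern (the MyTests class, dbo.MyTable and the columns are all hypothetical): both comparison tables live in the test class schema and disappear when the test’s transaction rolls back.

```sql
CREATE PROCEDURE MyTests.[test procedure returns expected rows]
AS
BEGIN
    /* Both tables are created in the MyTests schema, not tempdb,
       and are rolled back along with the rest of the test */
    SELECT 1 AS id, 'Fred' AS MyText INTO MyTests.Expected;

    SELECT id, MyText
    INTO   MyTests.Actual
    FROM   dbo.MyTable;  -- hypothetical table under test

    EXEC tSQLt.AssertEqualsTable 'MyTests.Expected', 'MyTests.Actual';
END;
```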

SetUp procedures

Sometimes, when you are writing a number of tests for a specific stored procedure or class of tests, you need to perform the same setup work for each test. The answer to this repetitive typing is the SetUp procedure. This is a special stored procedure which tSQLt looks for each time it runs a test; if it exists, it is run immediately before the test. Once the test is complete, the effects of both the test and its SetUp procedure are rolled back, so the setup can contain DML or other statements that you would not want to persist between runs. No corresponding destroy procedure is necessary because of this rollback, which makes the feature easier to use.

This also gives you a clean starting point for each test, so that you can be sure you haven’t got any hangovers from previous tests.
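A sketch of a SetUp procedure for a hypothetical MyTests class, arranging a faked table and a seed row that every test in the class can rely on:

```sql
CREATE PROCEDURE MyTests.SetUp
AS
BEGIN
    /* Runs automatically before each test in the MyTests class;
       its effects are rolled back together with the test */
    EXEC tSQLt.FakeTable 'dbo.MyTable';
    INSERT dbo.MyTable (id, MyText) VALUES (1, 'Fred');
END;
```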

Note: those on case sensitive collations beware – the procedure must be called MyClass.SetUp (assuming you are in the test class MyClass), otherwise it won’t get picked up by tSQLt.

You can fake views, too!

The tSQLt.FakeTable procedure is useful for more than just tables – you can also use it to fake views. This is very helpful if you have a complex series of views that the stored procedure under test uses, and you want to isolate the SP from the view – or indeed you don’t want to create all the required data for that test.
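As a sketch (dbo.MyComplexView and dbo.MyProcThatUsesTheView are hypothetical names), faking a view works just like faking a table:

```sql
/* Replace the view with an updatable stub, then insert
   exactly the rows the test needs */
EXEC tSQLt.FakeTable 'dbo.MyComplexView';
INSERT dbo.MyComplexView (id, MyText) VALUES (1, 'Fred');

/* The procedure under test now reads from the faked view */
EXEC dbo.MyProcThatUsesTheView;
```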

I would certainly recommend that if you are using FakeTable to isolate the dependency on a view, you remember to write tests that actually test the view – and I would also suggest that you check the output format of the view as well as the data returned.

Note: for more complex views, SQL Server can get a little upset when parsing your test if it deems the view you are faking to be non-updatable (at parse time, SQL Server doesn’t recognise the effect that FakeTable will have at runtime). There is a pair of procedures (SetFakeViewOn and SetFakeViewOff) which, according to this blog post by Sebastian Meine (one of the authors of tSQLt), you can use when creating your test to temporarily modify the schema of the database with an INSTEAD OF trigger, allowing the test to compile. Unfortunately, other than that blog post, these are very poorly documented, so I would suggest that if you encounter issues like this you simply use dynamic SQL to do the necessary data insertions.

Using tSQLt tests in the development environment

Database Owner permissions issues

Occasionally, when putting the tSQLt framework onto new databases, particularly ones owned by a user on an AD domain, I’ve had an error message stating that SQL Server “Could not obtain information about Windows NT Group/User ‘<Domain\Account Name>’”. This has invariably turned out to be because the database owner is not ‘sa’, and SQL Server has issues trying to determine whether the untrusted CLR assembly which tSQLt uses can be allowed to be created in the database. There’s more information on the cause of this error here and here, but I’ve always found the simplest solution is to change the database owner to ‘sa’.


USE tSQLt_Example
ALTER AUTHORIZATION ON DATABASE::tSQLt_Example TO sa

Or (for SQL 2005):

USE tSQLt_Example
EXEC sp_changedbowner 'sa'


Best practice tests

Best practice is something that I try to encourage among my colleagues, both as a means of education and as a way of enhancing the product so that it works as well as it can. So it was with some delight that I found that George Mastros had written some of the SQLCop tests as tSQLt tests, and that Red Gate had released these as a sample with SQL Test. I’ve since found that George wrote a good deal more of them, and has released these via this Stack Overflow post. They represent a great starting point for writing best practice tests and, although I’ve found it necessary to pick and choose which I run, they give a good basis for further discussion within our team, and early warning of potentially sub-optimal practices.

Time measurement

tSQLt doesn’t natively measure how long each test takes to complete. This is something that is helpful when you’re doing continuous integration, but Dennis Lloyd (one of the main contributors to the tSQLt project) has put a script in this post on the support group that implements the feature, and I’ve added a second which allows you to export the results to a CI process.

It’s worth noting, however, that time measurement in tSQLt relies on the datetime data type (tSQLt is compatible with SQL 2005 SP2 and above, and it was only in SQL 2008 that date and time datatypes with better precision became available). You therefore need to be aware of the inherent imprecision of datetime when looking at test execution times in the millisecond range, as the error percentage can be quite high on tests which execute quickly.


I have found that the tSQLt framework integrates well with the development environment, in terms of both process flow and the speed of writing tests. I have also found that the framework lends itself to database continuous integration and, ultimately, test-driven development.

Whilst there are always things that you don’t come across in the manual (especially if you haven’t read it properly!), I hope my experience of these ten things will help save someone else a steep learning curve.

Dave Green

Author profile:

Dave has worked with Microsoft SQL Server since version 7, and is currently employed as a Database Developer. His background includes working as a systems integrator, ETL specialist and DBA. Dave is an active member of SQL South West User Group, and occasionally presents or blogs about topics which interest him. When not facing a computer screen, he can be found sailing, or walking by the sea.


Subject: Nice write up
Posted by: gregmlucas (view profile)
Posted on: Friday, September 14, 2012 at 4:30 AM
Message: Dave, a nice summary for those (like us) who just start coding and don't always RTFM.

I thought the issue with case sensitive SetUp had been fixed as of v1.0.4357.27914 but I may be wrong.

One other advantage of using the test schema for the actual and expected tables (as opposed to tempdb) is when working with Geography data types.

The ability to export timings for CI purposes seems like a nice enhancement and hopefully Dennis and Sebastian will include it in the next release. I know they've been busy lately but we must be due another release soon.

Subject: SetUp Case Sensitivity
Posted by: Dave_Green (view profile)
Posted on: Saturday, September 15, 2012 at 12:26 PM
Message: Hi Greg,

Thanks for your kind comments.

You're right that in that version 1.0.4357.27914 did solve the case sensitivity of SetUp to a large extent, however I've still had some problems with version 1.0.4504.21220.

I'm using the test as follows (in a database with collation Latin1_General_CS_AS):

ALTER PROC [Example].[setup] AS
CREATE TABLE Example.MyTestTable (a varchar(100), dt datetime2)
GO
ALTER PROCEDURE [Example].[test2] AS
IF OBJECT_ID('Example.MyTestTable') IS NULL
    EXEC tSQLt.Fail 'TODO: Implement this test.'
GO

If I run
EXEC tSQLt.RunAll
to run all the tests, I get a successful test (as expected)

If I run
EXEC tSQLt.Run 'Example'
to run just the test class in question, I get a successful test (as expected)

However, if I run the specific test
EXEC tSQLt.Run 'Example.[test2]'
I find that the test fails. This is an unfortunate shortcoming.

Unfortunately, I also notice this behaviour when running the test via SQL Test, at database, test class or test level.

If you rename the stored proc [Example].[setup] to [Example].[SetUp], all the methods of calling the test listed above work as you would expect.

I believe this is because the stored proc tSQLt.Private_Run on line 44 is case sensitive when it looks for SetUp. This part of the stored procedure isn't called when running all tests, or a specific class.

I've logged this behaviour as a bug on SourceForge, together with a comment as to how to resolve the issue.

I hope that helps to clarify why I made the note in the article.



Subject: SetUp Fixed
Posted by: Dave_Green (view profile)
Posted on: Sunday, February 17, 2013 at 3:00 PM
Message: Hi Greg,

Just to let you (and anyone else reading my comment above) know, the issue with the case sensitivity of the SetUp routine was fixed in tSQLt version V1.0.4721.29450 in December 2012.


