Ten Things I Wish I’d Known When I Started Using tSQLt and SQL Test

11 September 2012
by Dave Green

The open-source unit test framework tSQLt is a great way of writing unit tests in the same language as the one being tested. In retrospect, after using tSQLt for a while, what are the 'gotchas' – those things that you'd have been better off knowing about before you got started? Dave Green lists a few tips he wishes he'd read beforehand.

It’s been about a year now since I started using the tSQLt framework, and SQL Test, the Red Gate add-in which integrates it into SQL Server Management Studio. With that in mind, it seemed an opportune moment to share what I’ve learnt, both in terms of writing tSQLt unit tests and integrating them within the development workflow.

When I first approached the tSQLt framework it was through SQL Test, with a view to getting some tests working, so that (predominantly database) developers could write unit tests without requiring a second language to code them in, and so that we could automate the tests and run them as part of Database Continuous Integration. The method used for the latter has changed very little, bar the introduction of templates and Red Gate’s TeamCity plugin, and I’ll touch shortly on the ways our use of the tests has evolved. First, however, some lessons I’ve learned on writing the tests themselves.

Writing Tests

Creating a new test class will remove an existing test class of the same name

Sometimes, you may want to script all the tests and setup procedures in a class into one file – in my case, to fit in with the database build script for one particular piece of software. By creating the tests with a call to tSQLt.NewTestClass at the top of the script, you know that any existing tests will be deleted, and so all subsequent tests in that script can be ‘create’ rather than ‘alter’ statements. This means that the script can be executed whether or not the tests already exist. It also helps to concentrate the mind when saving tests out of your development database if you know that the next create will destroy your test class!
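As an illustration, a single-file test-class script laid out this way might look like the following sketch (the class and test names are hypothetical):

```sql
/* Recreating the class drops any previous copy of it,
   so plain CREATE statements are safe below */
EXEC tSQLt.NewTestClass @ClassName = N'OrderTests';
GO

CREATE PROCEDURE OrderTests.[test order total is never negative]
AS
BEGIN
    /* ... arrange, act, assert ... */
    EXEC tSQLt.Fail 'Test not yet implemented';
END;
GO
```

Because tSQLt.NewTestClass at the top removes the old class, this one file can be run against the database any number of times.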

Note – tSQLt is clever enough to know which schemas are tSQLt test classes, and which aren’t. This means your non-test objects are not in danger of being deleted.

I’ve found this a real time saver when doing automated testing, if your tests are not stored in source control.

Getting subsequent result sets

I found that occasionally I needed to write tests which interpreted not only the first result set in a series, but subsequent result sets too. Erland Sommarskog wrote a detailed text on the various ways of sharing data between stored procedures (tSQLt tests are themselves stored procedures like any other), but even with the help of that article I’d not managed to read subsequent result sets in tests in an elegant way. Recently however, I discovered the tSQLt.ResultSetFilter method, which gets a specified result set from a stored procedure – at the cost of executing it once for each result set retrieved – and thus enables stored procedures to be properly tested. When I need to test multiple result sets from one stored procedure, I usually write a number of different tests, each of which tests one of the result sets. With stored procedures that cause schema changes though, this can be fraught and you need to think through the implications of your test strategy.
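For example, a test that needs only the second result set of a procedure can capture it into a table shaped to match – something like this sketch, where the procedure and column names are illustrative:

```sql
/* A table shaped like the second result set of the procedure under test */
CREATE TABLE #Actual ( id INT, MyText VARCHAR(50) );

/* ResultSetFilter re-executes the command and returns only result set 2,
   which can then be compared with an expected table as usual */
INSERT  #Actual
        EXEC tSQLt.ResultSetFilter 2, 'EXEC dbo.MyMultiResultSetProc';
```

Remember the cost noted above: the procedure is executed once per result set you retrieve this way.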

As an aside – I should have known about this, as it was detailed in the manual. But, like thousands of other people, I’d dived into the product with only a passing reference to the manual!

Checking the resultant table structure

In a similar vein of not reading the documentation properly, I had been initially under the impression that AssertEqualsTable compared the structure as well as the data between two result sets. Actually in tSQLt this is split into two functions. The data is compared by AssertEqualsTable, but if you just want to compare the schema you should use AssertResultSetsHaveSameMetaData. This second function is ideal for checking that the returned data is in the expected format (column names, datatypes, etc.). This is helpful so that you can isolate data changes from format changes, and have a more descriptive failure message for your tests which helps diagnosis and resolution.
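The split between the two assertions can be sketched like this (the object names are hypothetical):

```sql
/* Compares only the data in the two tables */
EXEC tSQLt.AssertEqualsTable 'Example.Expected', 'Example.Actual';

/* Compares only the metadata of the two result sets:
   column names, datatypes and so on */
EXEC tSQLt.AssertResultSetsHaveSameMetaData
    'SELECT id, MyText FROM Example.Expected',
    'EXEC dbo.MyReportProc';
```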

Faking Tables creates columns as nullable, even when they were non-nullable

Consider for a moment the below SQL which creates a table with a non-null column, and tries to populate it with two records, one containing a null:

IF OBJECT_ID('[Example].[MyTable]','Table') IS NOT NULL
    DROP TABLE [Example].[MyTable]
/* Create sample table */
CREATE TABLE [Example].[MyTable]
    (
      id INT NOT NULL,
      MyText VARCHAR(50) NOT NULL
    )
/* Insert sample values */
INSERT  [Example].[MyTable]
        ( id, MyText )
VALUES  ( 1, 'Fred' ),
        ( 2, NULL )

As you would expect when you run this, you get an error:

Msg 515, Level 16, State 2, Line 3
Cannot insert the value NULL into column 'MyText', table 'tSQLt_Example.Example.MyTable'; column does not allow nulls. INSERT fails.

The statement has been terminated.

That’s as you would expect – null values can’t be inserted into a NOT NULL column.

However, what about if we do that as part of a tSQLt test?

/* Dave Green Aug 2012 */

/* Create test class */
EXEC tSQLt.NewTestClass @ClassName = N'Example'
GO

IF OBJECT_ID('[Example].[MyTable]','Table') IS NOT NULL
    DROP TABLE [Example].[MyTable]
/* Create sample table */
CREATE TABLE [Example].[MyTable]
    (
      id INT NOT NULL,
      MyText VARCHAR(50) NOT NULL
    )
GO

/* Test to illustrate how null data can be put into non-nullable columns once the table has been faked */
CREATE PROCEDURE [Example].[test show nullability of columns in a faked table]
AS
    BEGIN
        /* The table is empty at this point, so this creates an empty 'expected' copy */
        SELECT  *
        INTO    [Example].[Expected]
        FROM    [Example].[MyTable]

        EXEC tSQLt.FakeTable @TableName = N'[Example].[MyTable]', -- nvarchar(max)
            @Identity = 1, -- bit
            @ComputedColumns = 1, -- bit
            @Defaults = 1 -- bit

        INSERT  [Example].[MyTable]
                ( id, MyText )
        VALUES  ( 1, 'Fred' ),
                ( 2, NULL )

        EXEC tSQLt.AssertEqualsTable @Expected = N'[Example].[Expected]', -- nvarchar(max)
            @Actual = N'[Example].[MyTable]', -- nvarchar(max)
            @FailMsg = N'Tables do not match' -- nvarchar(max)
    END
GO

EXEC tSQLt.RunTestClass @TestClassName = N'[Example]' -- nvarchar(max)

Well, we get the output:

[Example].[test show nullability of columns in a faked table] failed: Tables do not match
|>  |1 |Fred |
|>  |2 |!NULL!|

|Test Execution Summary|

|No|Test Case Name                                               |Result |Duration|
|1 |[Example].[test show nullability of columns in a faked table]|Failure|160     |
Msg 50000, Level 16, State 10, Line 1
Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored.

Now, the test execution summary aside (we expected the test to fail), it’s interesting that the null data was in fact inserted into the table Example.MyTable (hence the test reporting rows which are not in the empty expected table). This is certainly something to watch out for when writing tests with faked tables: you can miss instances where null data should generate an error, because the check is suppressed on the faked table(s).

Although it is possible to use tSQLt.ApplyConstraint to reinstate named constraints on a faked table, this is not possible for column nullability, which is not a named constraint.
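For named constraints, the pattern looks like this sketch (the CHECK constraint name here is hypothetical):

```sql
EXEC tSQLt.FakeTable @TableName = 'Example.MyTable';

/* Re-applies just this one named constraint to the faked table,
   so the test exercises it in isolation */
EXEC tSQLt.ApplyConstraint @TableName = 'Example.MyTable',
    @ConstraintName = 'CK_MyTable_IdPositive';
```

Column nullability, however, cannot be brought back this way, so tests which rely on NOT NULL being enforced need the real table.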

Expected and Actual tables can be created in the schema of the test class

This is somewhat obvious in hindsight but, since most test examples I came across when I was starting to use tSQLt used temporary tables for the expected and actual tables in comparisons using tSQLt.AssertEqualsTable, I followed this convention too. It was only once I read an article by Greg M Lucas, which pointed out that you could create these tables in any existing schema, including the test schema, and that they would be rolled back once the test had run anyway, that I realised this approach has three advantages:

  1. It keeps everything in a nice neat place, the test schema
  2. It avoids using tempdb, with any load / performance implications of that
  3. It avoids any potential collation issues if your tempdb is a different collation to the database under test.
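In practice, this just means qualifying the expected and actual tables with the test class schema (the names below are illustrative); the transaction wrapping each test removes them again afterwards:

```sql
/* Both tables live in the test schema, and are rolled back with the test */
SELECT  1 AS id, 'Fred' AS MyText
INTO    Example.Expected;

SELECT  id, MyText
INTO    Example.Actual
FROM    Example.MyTable;

EXEC tSQLt.AssertEqualsTable 'Example.Expected', 'Example.Actual';
```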

SetUp procedures

Sometimes when you are writing a number of tests for a specific stored procedure, or class of tests, you need to perform the same setup work for each test. The answer to this repetitive typing is the SetUp procedure. This is a special stored procedure which tSQLt looks for each time it runs a test; if it exists, it is run immediately before the test. Once the test is complete, the test and its setup are rolled back, so the SetUp procedure can contain DML or other statements that you would not want to persist between runs. No teardown procedure is necessary because of this rollback, which makes it easier to use.

This also gives you a clean starting point for each test, so that you can be sure you haven’t got any hangovers from previous tests.

Note: Those on case sensitive collations beware – the procedure must be called MyClass.SetUp – assuming you are in the test class MyClass - otherwise it won’t get picked up by tSQLt.
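A minimal SetUp procedure might look like this (the table and data are, of course, illustrative):

```sql
CREATE PROCEDURE MyClass.SetUp
AS
BEGIN
    /* Runs automatically before every test in MyClass */
    EXEC tSQLt.FakeTable @TableName = 'dbo.Customers';

    /* Seed data shared by all the tests in the class */
    INSERT dbo.Customers ( id, name )
    VALUES ( 1, 'Test customer' );
END;
```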

You can fake views, too!

The tSQLt.FakeTable procedure is useful for more than just tables – you can also use it to fake views. This is very helpful if you have a complex series of views that the stored procedure under test uses, and you want to isolate the SP from the view – or indeed you don’t want to create all the required data for that test.
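Faking a view uses exactly the same call as faking a table – something like this sketch, with a hypothetical view name:

```sql
/* The view is replaced for the duration of the test by an
   updatable table of the same shape */
EXEC tSQLt.FakeTable @TableName = 'dbo.vw_ActiveCustomers';

/* The test can now insert exactly the rows it needs */
INSERT dbo.vw_ActiveCustomers ( id, name )
VALUES ( 1, 'Alice' );
```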

I would certainly recommend that, if you use FakeTable to isolate a dependency on a view, you remember to write tests which actually test the view itself – and I would also suggest that you check the output format of the view as well as the data returned.

Note: for more complex views, SQL Server can get a little upset when parsing your test if it deems the view you are faking to be non-updatable (this is because, at parse time, SQL Server doesn’t recognise the effect that FakeTable will have at runtime). There is a pair of procedures (SetFakeViewOn and SetFakeViewOff) which, according to this blog post by Sebastian Meine (one of the authors of tSQLt), you can use when creating your test to temporarily modify the schema of the database with an INSTEAD OF trigger, allowing the test to compile. Unfortunately, other than that blog post these are very poorly documented, so I would suggest that if you encounter issues like this you simply use dynamic SQL to do the necessary data insertions.
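The dynamic SQL workaround simply defers the INSERT until runtime, by which point FakeTable has already swapped the view out – for example (the view name is hypothetical):

```sql
EXEC tSQLt.FakeTable @TableName = 'dbo.vw_ComplexView';

/* The string is parsed only at execution time, so the
   'non-updatable view' check at procedure creation never fires */
EXEC ('INSERT dbo.vw_ComplexView ( id ) VALUES ( 1 );');
```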

Using tSQLt tests in the development environment

Database Owner permissions issues

Occasionally when putting the tSQLt framework onto new databases, particularly ones owned by a user on an AD domain, I’ve had an error message stating that SQL Server “Could not obtain information about Windows NT group/user ‘<Domain\Account Name>’”. This has invariably turned out to be because the database owner is not ‘sa’, and SQL Server has issues trying to determine whether the untrusted CLR assembly which tSQLt uses can be allowed to be created in the database. There’s more information on the cause of this error here and here, but I’ve always found the simplest solution is to change the database owner to ‘sa’:


USE tSQLt_Example
ALTER AUTHORIZATION ON DATABASE::tSQLt_Example TO sa

Or (for SQL 2005):

USE tSQLt_Example
EXEC sp_changedbowner 'sa'


Best practice tests

Best practice is something that I try to push with my colleagues, both as a means of education and as a way of enhancing the product so it works as well as it can. So it was with some delight that I found that George Mastros had written some of the SQLCop tests as tSQLt tests, and that Red Gate had released these as a sample with SQL Test. I’ve also found that George wrote a good deal more of them, and has released these via this Stack Overflow post. These represent a great starting point for writing best practice tests, and although I’ve found it necessary to pick and choose which I run, they give a good basis for further discussion within our team, and early warning of potentially sub-optimal practices.

Time measurement

tSQLt doesn’t natively measure how long each test takes to complete. This is helpful information when you’re doing Continuous Integration, but Dennis Lloyd (one of the main contributors to the tSQLt project) has put a script in this post on the support group that implements this feature, and I’ve added a second which allows you to export the results to a CI process.

It’s worth noting, however, that time measurement in tSQLt relies on the datetime data type (tSQLt is compatible with SQL 2005 SP2 and above, and it was only in SQL 2008 that date and time datatypes with better precision became available). You therefore need to be aware of the inherent imprecision of the datetime data type when looking at test execution times in the millisecond range, as the error percentage can be quite high on tests which execute quickly.
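As a rough illustration of the problem, datetime is only accurate to about 3.33ms, so a timing taken like this sketch can easily be out by several milliseconds on a fast test:

```sql
DECLARE @Start DATETIME;
SET @Start = GETDATE();

EXEC tSQLt.Run 'Example';

/* datetime values round to increments of .000, .003 or .007 seconds,
   so the measured duration of a sub-10ms test is very approximate */
SELECT DATEDIFF(ms, @Start, GETDATE()) AS ApproxDurationMs;
```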


I have found the tSQLt framework to integrate well with both the development environment in terms of process flow and the speed of writing tests. I have also found that the framework lends itself to Database Continuous Integration and ultimately Test Driven Development.

Whilst there are always things that you don’t come across in the manual (especially if you haven’t read it properly!), I hope my experience of these ten things will help save someone else a steep learning curve.

© Simple-Talk.com