Allen White

Let PowerShell do an Inventory of your Servers

13 February 2009

If you run a regular and comprehensive inventory of all the servers you manage, you can solve problems more quickly and answer most questions from management. If you then repeat these reports over a period of time, it helps you to track trends such as disk space usage, so you can spot trouble before it becomes a problem. Allen describes a PowerShell script to do the inventory.

Whether you're a consultant or an in-house DBA, you need to have a reliable inventory of the servers you manage. This inventory can take any number of forms but, ideally, will allow some aggregation of information.  For example, your database data and log files will grow in size over time.  By aggregating the space used by your databases, and comparing it to the available space on the physical drives, you can predict when you will need to add additional storage. When server hardware or software changes are taking place, it's easy to forget that the inventory has to be updated as well. For this reason, I like to have a script that creates the inventory, rather than have to keep a manual log of changes as they occur. PowerShell allows me to do just that.

PowerShell makes it easy to issue WMI (Windows Management Instrumentation) queries, which we can use to collect important non-SQL Server information about our servers. For example, we can keep track of the Name, Model, Manufacturer and Domain names, the operating system name, version, free physical memory, service pack information, physical memory configuration, the sizes of the locally attached disk, and the free space on each disk.

We can also use PowerShell, via a Server Management Objects (SMO) connection, to gather SQL Server information such as edition, version, logins, databases, configuration and errors. As such, PowerShell offers a simple way of gathering a complete inventory of your physical servers, your SQL Server instances, and the databases they house.

Win32_PingStatus: Is the Server Online?

The first WMI command we'll use is Win32_PingStatus, which 'pings' each of the designated servers and returns a set of results, including a StatusCode for each server. If the value of this property is zero, then the server responded successfully, and we know that we can communicate with it.

The first thing to do is create a file, called servers.txt, containing the names of the servers for which you want to produce an inventory. This is a simple text file, with one server name per line. Here's an example:

   SQLTBWS
   SQLPR01
   SQLDEV02
As a quick demonstration of how the Win32_PingStatus query works, here's the basic PowerShell script:

$servers = Get-Content 'servers.txt'

foreach ($server in $servers) {

   # Ping the machine to see if it's on the network
   $results = Get-WMIObject -query "select StatusCode from
      Win32_PingStatus where Address = '$server'"

   $responds = $false
   foreach ($result in $results) {
      # If the machine responds, break out of the result loop and indicate success
      if ($result.StatusCode -eq 0) {
         $responds = $true
         break
      }
   }

   if ($responds) {
      # Gather info from the server because it responds
      Write-Output "$server responds"
   } else {
      # Let the user know we couldn't connect to the server
      Write-Output "$server does not respond"
   }
}

The first line of the script reads the contents of the servers.txt file into a variable called $servers. This is a collection of the lines in the text file, and we can iterate through the collection using the foreach statement, as shown above. Once for each server, we issue the following WMI query, using the Get-WMIObject cmdlet:

   $results = Get-WMIObject -query "select StatusCode from
      Win32_PingStatus where Address = '$server'"

The result set is sent to a variable called $results. After that, we set the status of a variable called $responds to the built-in value of $false. We then iterate through the result set and set the $responds variable to $true for each server where the StatusCode property is set to 0.

Once we've broken out of the foreach loop, we test the value of the $responds variable. If it is set to $true, the script prints out the name of the server and indicates that it is responsive. Otherwise, it names the server and indicates that it isn't responding. The results of this script (given that the SQLTBWS server is the only one on the network) are:

   SQLTBWS responds
   SQLPR01 does not respond
   SQLDEV02 does not respond

GetWMIInfo: Gathering System Information

Having established a server's responsiveness, the next thing we want to do is gather some system information from each server, for our inventory. PowerShell functions are a powerful way to control the work your script is doing, so rather than use in-line commands in the script, we'll create a function called GetWMIInfo. This function will query the following WMI classes to gather various bits of system information:

  • Win32_ComputerSystem – provides hardware information, such as computer name and model, number of processors, and so on
  • Win32_OperatingSystem – provides OS information, such as the OS type, service pack installed etc.
  • Win32_PhysicalMemory – provides details of the physical memory device, such as capacity etc.
  • Win32_LogicalDisk – provides details of local storage devices, including size, amount of free space etc.

First, in order to lay the groundwork, we create a directory with the name of the server, under our current directory. This is where we'll store the results we get from our inventory queries. Then, we simply call the GetWMIInfo function, passing it the name of the server to which we are connected:

   # Check to see if a directory exists for this machine, if not create one
   if (!(Test-Path -path .\$server)) {
      New-Item .\$server\ -type directory
   }

   # Get the server info
   getwmiinfo $server

Next, here is the definition of the GetWMIInfo function (PowerShell is an interpreted language, so any functions must be defined before they're called in your script):

function getwmiinfo ($svr) {
   # Get ComputerSystem info and write it to a CSV file
   gwmi -query "select * from Win32_ComputerSystem" -computername $svr |
      select Name, Model, Manufacturer, Description, DNSHostName,
         Domain, DomainRole, PartOfDomain, NumberOfProcessors,
         SystemType, TotalPhysicalMemory, UserName, Workgroup |
      export-csv -path .\$svr\BOX_ComputerSystem.csv -noType

   # Get OperatingSystem info and write it to a CSV file
   gwmi -query "select * from Win32_OperatingSystem" -computername $svr |
      select Name, Version, FreePhysicalMemory, OSLanguage, OSProductSuite,
         OSType, ServicePackMajorVersion, ServicePackMinorVersion |
      export-csv -path .\$svr\BOX_OperatingSystem.csv -noType

   # Get PhysicalMemory info and write it to a CSV file
   gwmi -query "select * from Win32_PhysicalMemory" -computername $svr |
      select Name, Capacity, DeviceLocator, Tag |
      export-csv -path .\$svr\BOX_PhysicalMemory.csv -noType

   # Get LogicalDisk info and write it to a CSV file
   gwmi -query "select * from Win32_LogicalDisk where DriveType=3" -computername $svr |
      select Name, FreeSpace, Size |
      export-csv -path .\$svr\BOX_LogicalDisk.csv -noType
}


Each call to Get-WMIObject (aliased here as gwmi) selects all properties from each WMI class, and then the results are piped to a select (technically the select-object cmdlet, here aliased as "select"), listing the properties I want.

Finally, each result set is piped to the Export-CSV cmdlet, which sends the results to a comma-separated file, the name and location of which is specified by the value after the –path parameter. I use CSV files because it means that I can run this script at a client site without needing to create a database somewhere on their system to store this information. I can use Excel to look at the individual CSV files, or I can build an Integration Services package to build a database on my own system, for later aggregation.
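To illustrate that aggregation step, here's a minimal sketch of rolling the per-server disk files up into one report. The ConvertTo-DiskSummary helper name is my own, not part of the inventory script; it assumes only the Name, FreeSpace and Size columns written to BOX_LogicalDisk.csv:

```powershell
function ConvertTo-DiskSummary ($server, $rows) {
   # Turn raw Win32_LogicalDisk rows into a per-drive report with percent free
   foreach ($row in $rows) {
      New-Object PSObject -Property @{
         Server      = $server
         Drive       = $row.Name
         SizeGB      = [math]::Round($row.Size / 1GB, 2)
         FreeGB      = [math]::Round($row.FreeSpace / 1GB, 2)
         PercentFree = [math]::Round(100 * $row.FreeSpace / $row.Size, 1)
      }
   }
}

# Roll every server's BOX_LogicalDisk.csv up into a single combined report,
# sorted so the drives closest to full appear first
Get-ChildItem -Recurse -Filter 'BOX_LogicalDisk.csv' | ForEach-Object {
   ConvertTo-DiskSummary $_.Directory.Name (Import-Csv $_.FullName)
} | Sort-Object PercentFree | Export-Csv -Path AllServers_Disk.csv -noType
```

Note that Import-Csv reads every column back as a string, but PowerShell's arithmetic operators convert the values to numbers for the division.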

Note that for the last class, Win32_LogicalDisk, I only select the results where DriveType=3, which means that I only collect information for locally attached disk drives; network drives (DriveType=4) and CD/DVD drives (DriveType=5) are excluded. However, the query does return information on SAN-attached drives, which is important.

SMO and PowerShell: Gathering SQL Server Information

Now we're ready to start gathering SQL Server information. We'll do this by creating a PowerShell function called GetSQLInfo, which will gather the inventory information about our SQL Server instances, such as version, edition, build number, configuration, session and lock information, and logins, as well as information about each database, including the physical file names and locations, and error log contents.

The first thing we need to do, at the very top of our script, is to load the SMO (Server Management Object) assembly.

 [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null

For SQL Server 2005 and later, we can use the ManagedComputer object in SMO to return the set of instances on the server. However, if you have any SQL Server 2000 instances in your environment, as I do, this will not work. Instead, we'll modify the servers.txt file to list the server and instance names, like this:

   SQLTBWS,SQLTBWS\INST01
   SQLTBWS,SQLTBWS\INST02
   SQLTBWS,SQLTBWS

The first value (before the comma) is the computer name, and the value after that is the name of the SQL Server instance from which we want to gather information. The third line indicates we're requesting information from the default instance of SQL Server. As we iterate through each server, using the foreach statement, we separate out the server and instance names as follows:

foreach ($sv in $servers) {

   # Separate the server and instance names
   $srvr = $sv.Split(",")
   $server = $srvr[0]
   $instance = $srvr[1]

Then, we call our GetSQLInfo function, passing in the server and instance details:

   getsqlinfo $server $instance

Let's take a look at the definition of the GetSQLInfo function. Because we have two parameters, the server name and the instance name, we use the param block method of defining our parameters, as follows:

function getsqlinfo {
   param (
      [string]$svr,
      [string]$inst
   )
We need an SMO connection to the instance to gather management information from the instance, but we also need an ADO.Net connection to run queries to get configuration and session information. We make those connections as follows:

# Create an ADO.Net connection to the instance
$cn = new-object System.Data.SqlClient.SqlConnection("Data Source=$inst;Integrated Security=SSPI;Initial Catalog=master")

# Create an SMO connection to the instance
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') $inst

Next, we need to grab the instance name, without including the machine name, so that we can assign it to the name of the output CSV files. If the instance is the default instance we'll set it, as SQL Server does, to MSSQLSERVER:

# Extract the specific instance name, and set it to MSSQLSERVER if it's the default instance
$nm = $inst.Split("\")
if ($nm.Length -eq 1) {
   $instnm = "MSSQLSERVER"
} else {
   $instnm = $nm[1]
}

We're now ready to start gathering information from each server. SMO provides an Information class, which contains valuable information about a given SQL Server instance such as the edition, the SQL Server version, as well as the path to the error log and the master database and log files. Therefore, the first thing we do is extract this data from the Information class, and pipe it directly to its own CSV file:

# Set the CSV output file name and pipe the instance's Information collection to it
$outnm = ".\" + $svr + "\" + $instnm + "_GEN_Information.csv"
$s.Information | export-csv -path $outnm -noType

Next, we want to get the configuration settings, the current session information, and any locks present at this time. The easiest way to do this is to execute the sp_configure, sp_who and sp_lock system procedures, via ADO.Net. Remember, we're running the same script whether we're gathering information from SQL Server 2000, SQL Server 2005 or SQL Server 2008. We could use SMO methods to gather this information, but that approach is quite verbose, and each new version of SQL Server adds configuration options. By using sp_configure to return a result set, we can use the same query no matter which version of SQL Server we're collecting data from.

First, we need to enable the ShowAdvancedOptions option of the SMO Configuration object so we get all the configuration options, not just the minimal set.  We do this as follows:

# Set ShowAdvancedOptions ON for the query
$s.Configuration.ShowAdvancedOptions.ConfigValue = 1
$s.Configuration.Alter()


We use an ADO.Net DataSet object because it allows us to execute all three system procedures at one time. So, first we need to create the DataSet object, then build the three queries into a text string, then use a SqlDataAdapter object to fill the DataSet:

# Create a DataSet for our configuration information
$ds = new-object "System.Data.DataSet" "dsConfigData"

# Build our query to get configuration, session and lock info, and execute it
$q = "exec sp_configure;"
$q = $q + "exec sp_who;"
$q = $q + "exec sp_lock;"
$da = new-object "System.Data.SqlClient.SqlDataAdapter" ($q, $cn)
$da.Fill($ds) | out-null

The query contains three result sets, so the DataSet has three DataTables within it. By defining three DataTable objects, and setting each of those objects to each of the three result sets, we can then export the results of each set of inventory queries to the appropriate CSV files:

# Build datatables for the config data, load them from the query results, and write them to CSV files
$dtConfig = new-object "System.Data.DataTable" "dtConfigData"
$dtWho = new-object "System.Data.DataTable" "dtWhoData"
$dtLock = new-object "System.Data.DataTable" "dtLockData"
$dtConfig = $ds.Tables[0]
$dtWho = $ds.Tables[1]
$dtLock = $ds.Tables[2]

$outnm = ".\" + $svr + "\" + $instnm + "_GEN_Configure.csv"
$dtConfig | select name, minimum, maximum, config_value, run_value | export-csv -path $outnm -noType

$outnm = ".\" + $svr + "\" + $instnm + "_GEN_Who.csv"
$dtWho | select spid, ecid, status, loginame, hostname, blk, dbname, cmd, request_id | export-csv -path $outnm -noType

$outnm = ".\" + $svr + "\" + $instnm + "_GEN_Lock.csv"
$dtLock | select spid, dbid, ObjId, IndId, Type, Resource, Mode, Status | export-csv -path $outnm -noType

Being good citizens, when we're done gathering the configuration information we turn Show Advanced Options back off.

# Set ShowAdvancedOptions OFF now that we're done with Config
$s.Configuration.ShowAdvancedOptions.ConfigValue = 0
$s.Configuration.Alter()


We also want to know the logins on the server. We can retrieve this information from the Logins collection of the SMO Server object:

# Write the login name and default database for Logins to a CSV file
$outnm = ".\" + $svr + "\" + $instnm + "_GEN_Logins.csv"
$s.Logins | select Name, DefaultDatabase | export-csv -path $outnm -noType

Finally, we can gather information about each database on the instance. First we create a CSV file in which to store the general information for all the databases on the instance:

# Write information about the databases to a CSV file
$outnm = ".\" + $svr + "\" + $instnm + "_GEN_Databases.csv"
$dbs = $s.Databases
$dbs | select Name, Collation, CompatibilityLevel, AutoShrink,
   RecoveryModel, Size, SpaceAvailable | export-csv -path $outnm -noType

Then, we iterate through each database, gathering the inventory data. If the IsSystemObject property is set (as it will be for master, model, msdb and tempdb), we indicate that the database is a system database by using SDB in the file name; otherwise, we indicate that it's a user database by using UDB:

foreach ($db in $dbs) {

   # Write the information about the physical files used by the database to CSV files for each database
   $dbname = $db.Name
   if ($db.IsSystemObject) {
      $dbtype = "_SDB"
   } else {
      $dbtype = "_UDB"
   }

Now to the specific inventory data. We first get the list of users in this database:

   # Write the user information to a CSV file
   $users = $db.Users
   $outnm = ".\" + $svr + "\" + $instnm + $dbtype + "_" + $dbname + "_Users.csv"
   $users | select $dbname, Name, Login, LoginType, UserType, CreateDate |
      export-csv -path $outnm -noType

We need to iterate through the filegroups to get the physical data file information, and then use the Logs collection for the database to get the physical log file information:

   $fgs = $db.FileGroups
   foreach ($fg in $fgs) {
      $files = $fg.Files
      $outnm = ".\" + $svr + "\" + $instnm + $dbtype + "_" + $dbname + "_DataFiles.csv"
      $files | select $db.Name, Name, FileName, Size, UsedSpace |
         export-csv -path $outnm -noType
   }

   $logs = $db.LogFiles
   $outnm = ".\" + $svr + "\" + $instnm + $dbtype + "_" + $dbname + "_LogFiles.csv"
   $logs | select $db.Name, Name, FileName, Size, UsedSpace |
      export-csv -path $outnm -noType
}


Finally, we export the current error log and the last six error logs out to CSV files so we can check to see if anything is amiss:

# Create CSV files for each ErrorLog file
$outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog.csv"
$s.ReadErrorLog() | export-csv -path $outnm -noType

$outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_1.csv"
$s.ReadErrorLog(1) | export-csv -path $outnm -noType

$outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_2.csv"
$s.ReadErrorLog(2) | export-csv -path $outnm -noType

$outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_3.csv"
$s.ReadErrorLog(3) | export-csv -path $outnm -noType

$outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_4.csv"
$s.ReadErrorLog(4) | export-csv -path $outnm -noType

$outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_5.csv"
$s.ReadErrorLog(5) | export-csv -path $outnm -noType

$outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_6.csv"
$s.ReadErrorLog(6) | export-csv -path $outnm -noType
}
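As a quick example of putting those error-log exports to use, the sketch below filters exported log rows for suspicious entries. The Select-LogProblems helper name is my own, not part of the article's script; it assumes the Text column that ReadErrorLog produces:

```powershell
function Select-LogProblems ($rows) {
   # Keep only log rows whose text suggests a problem
   $rows | Where-Object { $_.Text -match 'error|fail|corrupt' }
}

# Scan every exported error-log CSV under the current directory
Get-ChildItem -Recurse -Filter '*_ERL_ErrorLog*.csv' | ForEach-Object {
   Select-LogProblems (Import-Csv $_.FullName)
}
```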

This collection of CSV files can then be imported into a database for analysis and reporting. It also serves as a basis for starting the process of gathering performance baseline information. In any event, it's useful information to have, whether you're an in-house DBA or a consultant.


Microsoft provides a tool called SQLDiag to gather all this information, but it needs to be run once for each server for which you want the detailed information, and it places its results in a text file in the instance's LOG directory (along with the ERRORLOG files).

The PowerShell script provided in this article needs to be run only once to gather inventory information from all of your servers. Furthermore, it not only gathers most of the same information in that one execution, but it also puts the information in a single directory (well, one sub-directory for each physical server), so that loading the information into a database for aggregation and analysis is much easier.

From a management perspective, it's important to have a comprehensive inventory of the servers you manage, so that if problems or questions come up from users or management, you can answer them quickly and with confidence. In addition, it helps to be able to track trends, such as disk space usage, so that problems can be anticipated. This script will help you do that.

Allen White

Author profile:

Allen is a Practice Leader for Upsearch in Northeast Ohio. He's been both a developer and an administrator, so he understands both perspectives on database technology, and loves sharing his experiences and helping people learn how to use SQL Server. Allen has spent over 35 years in IT, has been using SQL Server since 1992, and is certified MCITP in SQL Server and MCT. He has been awarded Microsoft's MVP Award for the last six years. He's President of the Ohio North SQL Server User's Group and maintains a blog at

Search for other articles by Allen White






Have Your Say

Subject: Awesome, as always
Posted by: Buck Woody (not signed in)
Posted on: Monday, February 16, 2009 at 10:48 AM
Message: Allen, you're a wonder. Keep up the great work. I've bookmarked this one and will digest it slowly.

Subject: Nice Article, Could I offer a suggestion?
Posted by: Steven.Murawski (view profile)
Posted on: Wednesday, February 18, 2009 at 9:15 AM
Message: It's great to see PowerShell content applied to SQL Server management.

I would like to offer one critique. One of the great things about PowerShell is the ability to express things concisely, in a very readable syntax. The way you check for live servers could be streamlined a bit and made more readable. The options below don't report a status like yours does (though they could, with an easy modification), but they return the server names that were pingable, which can be used as input for one of your next inventory scripts.

Option 1: A one-liner (as the PowerShell fans like to say)
Get-Content servers.txt | ForEach-Object {Get-WmiObject -Query "SELECT StatusCode, Address FROM Win32_PingStatus WHERE Address='$_' AND StatusCode = 0"} | Select-Object -ExpandProperty Address

Option 2: A one-liner shortened with aliases for easy entry
gc servers.txt | % {gwmi -Q "SELECT StatusCode, Address FROM Win32_PingStatus WHERE Address='$_' AND StatusCode = 0"} | Select -exp Address

Option 3: A more concise foreach loop

foreach ($server in Get-Content servers.txt) {
    Get-WmiObject -Query "SELECT Address FROM Win32_PingStatus WHERE Address='$server' AND StatusCode = 0" | Select-Object -ExpandProperty Address
}

Kudos on dipping in to PowerShell!
Steve Murawski
Community Co-Director - PowerShellCommunity.Org

Subject: Suggestions are how we learn!
Posted by: Allen White (view profile)
Posted on: Wednesday, February 18, 2009 at 9:44 AM
Message: Thanks, Steve. I just recently learned about the Win32_PingStatus class, so more information is always great to have.

BTW, in my explanations I used the ForEach-Object cmdlet in a way that's only appropriate to the foreach language construct. The downloadable script is correct but the examples above aren't.

For gathering the servers, the language construct is used, as such:

foreach ($server in $servers) {scriptblock}

The cmdlet is used in the pipeline this way:

$dbs | ForEach-Object {scriptblock}

Hopefully that'll clear up any confusion.


Subject: SQL Server 2000
Posted by: Dustin Jones (not signed in)
Posted on: Wednesday, February 18, 2009 at 9:51 AM
Message: Can you run the powershell script from a sql server 2005 machine running windows server 2008 against servers running windows server 2000 / sql server 2000?

Subject: Platform support
Posted by: Allen White (view profile)
Posted on: Wednesday, February 18, 2009 at 10:45 AM
Message: This script can be run from any server where PowerShell 1.0 and the SQL 2005 or SQL 2008 object libraries are installed.

It will collect data from SQL 2000, 2005 and 2008 servers.


Subject: ForEach-Object
Posted by: jerryhung (view profile)
Posted on: Wednesday, February 18, 2009 at 1:14 PM
Message: Weird, I had to remove the -Object from ForEach-Object for my pingServers.ps1 to work

Otherwise it errors out with
Unexpected token 'in' in expression or statement.
At C:\PowerShell\pingServers.ps1:3 char:27
+ ForEach-Object ($server in <<<< $servers) {

Subject: SQL Server 2000
Posted by: Dustin Jones (not signed in)
Posted on: Wednesday, February 18, 2009 at 2:20 PM
Message: Can you run the powershell script from a sql server 2005 machine running windows server 2008 against servers running windows server 2000 / sql server 2000?

Subject: Re: SQL Server 2000
Posted by: Allen White (view profile)
Posted on: Wednesday, February 18, 2009 at 7:25 PM
Message: Dustin,

If you install PowerShell 1.0 (I'm not sure if it's automatically installed on Windows Server 2008) you can run the script against SQL Server 2000 servers and SQL Server 2005 servers.


Subject: ForEach-Object
Posted by: Allen White (view profile)
Posted on: Wednesday, February 18, 2009 at 7:30 PM
Message: Jerry,

The foreach ($server in $servers) works because it's a language construct, not an alias for the ForEach-Object cmdlet. If I'd written the code as

$servers | foreach-object {
   $server = $_
   # rest of the code here
}

This would work using the cmdlet.

Make sense?


Subject: foreach vs foreach-object
Posted by: Anonymous (not signed in)
Posted on: Thursday, February 19, 2009 at 11:37 AM
Message: It sort of makes sense. I still am not getting the distinction. Was it truly an error in the script, or can the script as written be invoked some other way without errors?

Also, why bother w/ ForEach-Object at all if a) foreach is a built-in language construct and b) writing it using foreach-object is (IMHO) less intuitive?

I am not that experienced with Powershell at all, I am just seeking clarification, please.

Subject: D'oh!
Posted by: Anonymous (not signed in)
Posted on: Thursday, February 19, 2009 at 11:38 AM
Message: Ha ha, never mind, you already answered my question in the comments. One day I'll learn not to skim so much.

Subject: PowerShell Primer
Posted by: Wayne West (not signed in)
Posted on: Friday, February 20, 2009 at 1:10 PM
Message: There's a nice little "get your feet wet" primer at Apparently it will be a series, it will be interesting to see how it develops.

If you attended PASS last November in Seattle, there were two or three good PowerShell sessions that you can view online or on the DVD set.

Subject: PowerGUI free PowerShell tool
Posted by: jerryhung (view profile)
Posted on: Monday, February 23, 2009 at 11:04 AM
Message: Found this today via Brent Ozard at Quest

It provides GUI for PowerShell! woo hoo!

Subject: Inventoring Servers in parallel.
Posted by: yup (view profile)
Posted on: Monday, March 23, 2009 at 2:23 AM
Message: Looping through the list will take up some time which we can cut if there is an option to multithread. Is there a way to do that easily?

Subject: errors
Posted by: wheatshocker (view profile)
Posted on: Monday, July 13, 2009 at 5:32 PM
Message: I tried to run the first script by copy-pasting to my PS command prompt after placing servers.txt file in directory where prompt starts and got this error message:
"unexpected token 'in' in expression or statement" and "unexpected token '}' in expression or statement".

Cannot find path 'C:\1_dat\Download\Tools_SQL\PowerShell_SQLInv\servers.txt' because it does not exist.
At :line:145 char:22
+ $servers = get-content <<<< 'servers.txt'

What am I doing wrong?


Subject: access denied
Posted by: wheatshocker (view profile)
Posted on: Tuesday, July 14, 2009 at 2:39 PM
Message: Doesn't the script require that the user be a member of administrator group on remote servers in order for it to be able to gain access and gather information?

I got an 'access denied' error on a server that I do not have admin rights on.


Subject: Insert to sql database
Posted by: SqlNoob (view profile)
Posted on: Saturday, June 5, 2010 at 5:51 PM
Message: Now I've got these .csv's I'd like to dump them to a database for future reference. Is there a way to extend the PowerShell script to do so or any other suggestions?

Subject: SQL Inventory
Posted by: crazydba (view profile)
Posted on: Friday, April 11, 2014 at 2:16 PM
Message: Hi Allen,

I worked on a similar solution to collect Windows and SQL Server inventory by gathering scripts from your posts and other SQL Family and released it as a free tool at The tool uses powershell to grab data from all servers and stores it in SQL tables and can be viewed using SSRS reports. Can you please check it and give me any suggestions to improve it?

