Gathering Perfmon Data with Powershell

When you have to routinely collect data from Performance Monitor Counters, it soon becomes easier and more convenient to use PowerShell. SQL Server MVP Laerte Junior was inspired to create a script, and guides us through its useful functions.

I was reading an excellent article by Allen White (Twitter|Blog) on Performance Data Gathering (to which I give all the credit for inspiring this module), and that’s when a PowerShell apple fell on my head: “Why not make a function that facilitates this?” As a DBA, one of my almost-daily tasks is to capture and analyze Perfmon counters. You may wonder to yourself, “But can’t you do that with the Get-Counter cmdlet in PowerShell 2.0?”

Yes you can, and it is very helpful. However, I need multiple counters, with the results displayed on a single line and all values separated by commas (a format that makes insertion into SQL Server easier), and that’s where Get-Counter gets a little trickier to use. I tried to think of an easy way to choose which counters you want, save that configuration for later use, and then insert the output data into a SQL Server table for further analysis. As a result of my tinkering, I believe I’ve got something pretty useful to share with you.

Before I go on, I want to give a shout out to the people who helped me solve some issues I encountered; as always, my friend MVP Shay Levy (Twitter|Blog), and also MVP JVierra, Trevor Sullivan (Twitter|Blog), and Lauri Kotilainen (Twitter|Blog). A special thank-you goes to MVP Pinal Dave (Twitter|Blog), who invited me to be a guest blogger, and MVP Grant Fritchey (Twitter|Blog) for the kind words on his blog.

This article will cover how to use this new module in a variety of situations, with some clear and every-day examples to hopefully make everything obvious. To start with, I’ll walk you through how to find out more information on the various Perfmon counters, so that you can choose which ones you want to use, as well as how to save that configuration so that you can reuse it later (and on different servers). Once we’ve covered that, we’ll take a look at how to get the data collected, and two ways to store it once we’ve got it.

Performance Counters

A polished version of this whole module will be available in the next release of SQLPSX – SQL Server PowerShell Extensions. For now, you can download it from the top of this article. I should point out that this module is a V1, so it might (and probably will) have some issues, and you can contact me anytime if you need a hand. Alternatively, you can use the built-in help to see all of the parameters and some examples:
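The original listing was lost from this copy of the article, but browsing the help would look something like this (the module name PerfCounters is my assumption):

```powershell
# Import the module, then read the built-in help and examples for a function
Import-Module PerfCounters
Get-Help Set-CollectPerfCounter -Full
Get-Help Set-CollectPerfCounter -Examples
```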

All tests were done on two Hyper-V 64-bit virtual machines: a Windows Server 2008 R2 domain controller with SQL Server 2008 R2 (Obiwan), and a Windows 7 machine with SQL Server 2008 (Chewie). Before you get started, if you want to work with multiple servers, you need to enable the RemoteRegistry service on the remote machines, as you can see:


Figure 1. Ensuring the RemoteRegistry Service is active on the remote machines.

For further reading on why this is necessary, I suggest you take a look at “Why run the RemoteRegistry Service?” by Brian Groth.

Top Tip:
If you want to know which information (properties) is returned by a given function, type:
FunctionName | Get-Member -MemberType NoteProperty

For example:
Get-PerfCounterCategory | Get-Member -MemberType NoteProperty

Finding the Information

The thing that most bothered me at the start of this project was that, for some counters in Perfmon, I knew their names but could not remember exactly which category they were in. I’ll give you a few examples. Let’s say you remember that Buffer Cache Hit Ratio is in one of the SQL Server categories, but which one? We know that it’s painful to hunt for the answer in the Perfmon interface, so now it’s much easier; if you want to discover all the registered categories, you can use this command:
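The listing itself hasn’t survived in this copy, but given the Get-PerfCounterCategory function named in the tip above, it would simply be:

```powershell
# List every registered Performance Counter category on the machine
Get-PerfCounterCategory
```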

Alternatively, if you want to be more fine-grained and only discover the categories starting with “SQLServer”, ordered by category name, then use this command:
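Something along these lines, assuming Get-PerfCounterCategory accepts a wildcard category filter (the parameter name -CategoryName is my guess):

```powershell
# Only the SQL Server categories, ordered by name
Get-PerfCounterCategory -CategoryName "SQLServer*" | Sort-Object CategoryName
```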


Figure 2. Results for the Get-PerfCounterCategory cmdlet.

Each Performance Counter category can have a number of instances, or just a single one. For example, the Processor category has one instance for each processor, while Buffer Manager has only a single instance. You can see the information for, as an example, all instances in the Processor category by typing:
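A sketch of that call; Get-PerfCounterInstance is my reconstruction of the instance-level function’s name:

```powershell
# Show every instance in the Processor category (0, 1, ..., _Total)
Get-PerfCounterCategory -CategoryName "Processor" | Get-PerfCounterInstance
```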

What’s that I hear you ask? Can you get all counters in all instances and categories? Yes, of course you can… and with some help information about each counter, too:
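Chaining all three levels together would look roughly like this (Get-PerfCounterInstance and Get-PerfCounterCounters are assumed names):

```powershell
# Every counter in every instance of every category, with its help text
Get-PerfCounterCategory | Get-PerfCounterInstance | Get-PerfCounterCounters
```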


Figure 3. All counters in all instances and categories, with information.

How about if you want to see all the counters from Buffer Manager category? Just use:
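Presumably something like the following; the exact category string "SQLServer:Buffer Manager" is an assumption:

```powershell
# All counters from the Buffer Manager category
Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" |
    Get-PerfCounterInstance | Get-PerfCounterCounters
```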

And if I want to work with multiple servers? Simple, just pipe the server names into the cmdlet:

… Or use a flat text file with the servers’ names inside it:
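Both variations would look something like this, assuming the functions take server names from the pipeline:

```powershell
# Pipe the server names in directly...
"Obiwan", "Chewie" | Get-PerfCounterCategory -CategoryName "SQLServer*"

# ...or keep them in a flat text file, one name per line
Get-Content "C:\temp\Servers.txt" | Get-PerfCounterCategory -CategoryName "SQLServer*"
```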

As you can see, we can perform the search for the missing information in various ways, making it easier to select the appropriate counters.

Setting an XML Configure File

When I started writing this module, my big goal was that I should somehow be able to be mobile with the counters that I chose. That is, I should be able to save and then later reuse the same configurations, as easily on the original server as on any other. After thinking about it, I decided that the gathering of data should start with reading an XML configuration file containing all the counters I’m interested in, and that I should have one file for memory counters, one for processor counters, and so on, to segment the data gathering.

It seems like it might be a really complicated solution, especially when XML is involved, but thankfully it’s not too bad, as you’ll see in a moment. To start with, creating an XML file with all the counters from the Processor category and _Total instance is as simple as running:
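Set-ConfigPerfCounter is my best guess at the configuration function’s name, as the listing didn’t survive here; the call would be roughly:

```powershell
# Save all Processor counters for the _Total instance into a new XML template
Get-PerfCounterCategory -CategoryName "Processor" |
    Get-PerfCounterInstance -InstanceName "_Total" |
    Get-PerfCounterCounters |
    Set-ConfigPerfCounter -PathConfigFile "C:\temp\TemplateProcessor.xml" -NewFile
```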

If you then look in your C:\temp folder, an XML file called TemplateProcessor_MACHINENAME.XML will be there, ready to be used.

Top Tip
To facilitate the ability to identify and use multiple servers in your data-gathering process, the name of the target machine is added to the name of both the output file and the XML config file.


Figure 4. Generating an XML file containing the Perfmon configuration

As you can see, in this example I used the -NewFile switch parameter which, naturally, creates a new .XML file. Perhaps you’re wondering whether there are situations where I would not use this parameter? Let’s say we have a file specifying just the Buffer Cache Hit Ratio counter from the Buffer Manager category:

However, now we need to add the Page Life Expectancy counter to the already-existing file, so we just omit the -NewFile parameter, and the selected counters will be added to the XML file rather than overwriting it:
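The append step would be the same pipeline minus the switch; filtering to a single counter with a -CounterName parameter is an assumption on my part, as is the Set-ConfigPerfCounter name:

```powershell
# No -NewFile, so Page Life Expectancy is appended to the existing template
Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" |
    Get-PerfCounterInstance |
    Get-PerfCounterCounters -CounterName "Page Life Expectancy" |
    Set-ConfigPerfCounter -PathConfigFile "C:\temp\TemplateBufferManager.xml"
```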

To create the configuration file in such a way that multiple servers can use it, we just need to specify the desired machines…
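In sketch form (again with Set-ConfigPerfCounter as an assumed name), piping in the machines produces one file per server:

```powershell
# Creates TemplateBufferManager_Obiwan.xml and TemplateBufferManager_Chewie.xml
"Obiwan", "Chewie" |
    Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" |
    Get-PerfCounterInstance | Get-PerfCounterCounters |
    Set-ConfigPerfCounter -PathConfigFile "C:\temp\TemplateBufferManager.xml" -NewFile
```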

… and a separate file will be created for each server, using the name passed in the -PathConfigFile parameter and adding the server name:


Figure 5. Creating configuration files for multiple servers.

Gathering Data

With all that set up, the next step, starting to actually gather information, is a bit more complicated. Let’s take a look:

To start with, the command below allows us to gather data using the counters defined in C:\temp\TemplateBufferManager_Obiwan.XML, starting the collection job at 08:00:00 AM on 05/24/2010, ending it at 10:00:00 PM on 05/30/2010, with an interval of 10 seconds between each data collection, and outputting the values to C:\temp\TemplateBufferManager.txt:
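The -DateTimeEnd and -Interval parameters are quoted verbatim in a reader’s comment below, but -DateTimeStart is my guess at the matching start parameter:

```powershell
Set-CollectPerfCounter -PathConfigFile "C:\temp\TemplateBufferManager_Obiwan.XML" `
                       -PathOutputFile "C:\temp\TemplateBufferManager.txt" `
                       -DateTimeStart "05/24/2010 08:00:00" `
                       -DateTimeEnd "05/30/2010 22:00:00" `
                       -Interval 10
```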

When you run this command, you will notice that the PowerShell session is locked; the function runs a loop to gather the data, and while this loop is running the session is blocked. Thankfully, we can resolve this by simply adding the -RunAsJob parameter, which tells PowerShell to perform this procedure asynchronously:
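The asynchronous version is the same call with the switch added (parameter names other than -RunAsJob, -DateTimeEnd and -Interval are my reconstruction):

```powershell
# Collection now runs in a background job, so the session stays free
Set-CollectPerfCounter -PathConfigFile "C:\temp\TemplateBufferManager_Obiwan.XML" `
                       -PathOutputFile "C:\temp\TemplateBufferManager.txt" `
                       -DateTimeStart "05/24/2010 08:00:00" `
                       -DateTimeEnd "05/30/2010 22:00:00" `
                       -Interval 10 -RunAsJob
```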

As you may have guessed, this parameter creates a Job, and when you’re working with Jobs there are some things you have to take into consideration:  
The job created will be called “PERFCOUNTERS_” plus the name of the XML file and the current time (YYYYMMDDHHMMSS). In the case of my example, the name will be: PERFCOUNTERS_TemplateBufferManager_OBIWAN_20100306193300.

Of course, if you’re working with Jobs, then you’ll want to see which jobs are running:
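The standard job cmdlets apply here, so listing them is just:

```powershell
# Show all background jobs in the current session, with Id, Name and State
Get-Job
```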

To call the specific data-gathering Job, use ID or Name (which you have discovered using the command above):

To see if the job is running without errors, run the Receive-Job cmdlet, and heed Marco Shaw’s (Twitter|Blog) excellent advice:

… when using receive-job, one may want to use the switch parameter -keep. Otherwise, any associated output is lost if receive-job is run again.

With the -Keep parameter, the job’s output is retained, so it is still available the next time you run Receive-Job. So, with that in mind, our investigative command is now:
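For example, using the job name from earlier:

```powershell
# Read the job's output without discarding it, so it can be read again later
Receive-Job -Name "PERFCOUNTERS_TemplateBufferManager_OBIWAN_20100306193300" -Keep
```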

Alternatively, if we want to see all jobs used by the PerfCounters Module, we just need to return all jobs starting with “PERFCOUNTERS…“, so we can use where-object to find what we need:
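A filter along these lines does it:

```powershell
# Every job created by the PerfCounters module
Get-Job | Where-Object { $_.Name -like "PERFCOUNTERS*" }
```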

And finally, if I want to stop the job before the date set in the Set-CollectPerfCounter command, I just need to type:
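Again with the example job name (a reader’s comment below confirms stop-job -id works just as well):

```powershell
# Stop the collection before its scheduled -DateTimeEnd
Stop-Job -Name "PERFCOUNTERS_TemplateBufferManager_OBIWAN_20100306193300"
```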


Figure 6. Finding out what state the PerfCounter jobs are in.

Uploading Data to a SQL Server Table

We can do this in one of two ways: the first method is to bulk insert the .txt file, and the other is to save the data directly into a SQL Server table as it is being gathered. Let’s take a closer look:

Bulk Inserting

After you’ve run your data-gathering job, you will see that the .txt file is ready to be inserted into SQL Server using a simple T-SQL bulk insert, and the Save-PerfCounterSQLTable function will help you do that. The command below will upload the output .txt file, and create a new table to receive the data (using the -NewTable switch parameter):
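Roughly like this; only -PathConfigFile, -PathOutputFile and -NewTable are confirmed by the text, while the repository parameters (-ServerName, -Database) are placeholders of mine:

```powershell
# Bulk insert the gathered .txt file, creating a new table to hold it
Save-PerfCounterSQLTable -PathConfigFile "C:\temp\TemplateBufferManager_Obiwan.XML" `
                         -PathOutputFile "C:\temp\TemplateBufferManager_Obiwan.txt" `
                         -ServerName "Obiwan" -Database "PerfCounters" -NewTable
```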

Alternatively, if you want to upload the output .txt file into an existing table, simply omit the -NewTable switch parameter and pass the target table name in the -TableName parameter:
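With a pre-existing table, something like (the table name here is just an example of mine):

```powershell
# Append the .txt file's rows to an existing table
Save-PerfCounterSQLTable -PathConfigFile "C:\temp\TemplateBufferManager_Obiwan.XML" `
                         -PathOutputFile "C:\temp\TemplateBufferManager_Obiwan.txt" `
                         -TableName "BufferManagerHistory"
```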

If you prefer to have a bit more control over your tables, it’s a simple matter to combine the two previously mentioned parameters to upload the .txt file and create a new table to receive it, with a name chosen by you:

As you may have noticed, the -PathConfigFile and -PathOutputFile parameters, which contain the full paths of the XML configuration file and the output file respectively, are required. If you’d like to learn more about the Save-PerfCounterSQLTable command, use:

Saving Directly into a SQL Server Table

To best demonstrate this, let’s create a complete example. First, we choose the counters that we want to use; in this case, the whole SQL Server Buffer Manager category on the Chewie and Obiwan machines, as well as the Processor category on Chewie. We save this configuration into C:\temp\BufferManager.XML and C:\temp\Processor.XML:


Figure 7. The demo configuration files for Obiwan and Chewie.

Now, with the XML configured, we can start gathering data using background jobs and saving the output directly into a SQL Server table. We do not pass a target SQL Server table name as a parameter, so one table will be created for each server, using the naming format PERFCOUNTERS_XMLFileName_YYYYmmDDhhMMss. Even though you’re using a SQL table as the data repository, you must still pass the path to an output file into your command, because the output file will always be created. Why, you ask? Let’s say you lose your connection to the SQL Server repository; this way, you don’t also lose the data, because it will also be stored in the .txt file. In this case, as we’re using several different XML files, we’ll only pass the target path, without a file name, and the output files will be created in this location using the XMLNAME_MACHINENAME.TXT naming convention.
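A sketch of one such job; the repository parameters (-ServerName, -Database) are assumed names, and in practice one of these would be started per XML file:

```powershell
# Gather in a background job, writing to SQL Server and to an output .txt file;
# no table name is passed, so a PERFCOUNTERS_XMLFileName_YYYYmmDDhhMMss table is created
Set-CollectPerfCounter -PathConfigFile "C:\temp\BufferManager_Chewie.xml" `
                       -PathOutputFile "C:\temp\" `
                       -DateTimeStart "05/24/2010 08:00:00" `
                       -DateTimeEnd "05/30/2010 22:00:00" -Interval 10 `
                       -ServerName "Obiwan" -Database "PerfCounters" -RunAsJob
```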


Figure 8. Creating the jobs to gather data, and sending the collected data directly to a SQL Server table.

As you can see from figure 8, three jobs are created, and their names all start with PERFCOUNTER, so you can use where-object to quickly and easily find all jobs used by the PerfCounters Module:

Now, if we look in ObiWan’s SQL Server, we can see that there are three tables with the collected data: one for each machine we gathered data from (i.e. one for each XML file):


Figure 9. Investigating the gathered data in SQL Server.

… And the .txt files containing the same gathered output are created as well:


Figure 10. The gathered data, stored in .txt files.

When you’re setting this up, you can use your own table name to make things easier to find; I am passing the -TableName parameter with “BufferManager”, so for each machine a table called BufferManager_MACHINENAME will be created, in this case BufferManager_Chewie and BufferManager_ObiWan:
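That would change the call to something like the following (again, parameter names beyond -TableName, -NewTable, -RunAsJob, -PathConfigFile, -PathOutputFile, -DateTimeEnd and -Interval are my guesses):

```powershell
# Creates and fills BufferManager_Chewie rather than a timestamped table name
Set-CollectPerfCounter -PathConfigFile "C:\temp\BufferManager_Chewie.xml" `
                       -PathOutputFile "C:\temp\" `
                       -DateTimeEnd "05/30/2010 22:00:00" -Interval 10 `
                       -TableName "BufferManager" -NewTable -RunAsJob
```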


Figure 11. Investigating the data stored in our custom-named tables.

Perhaps you’re wondering why it’s so useful to be able to specify the target table name? Let’s say you stop the gathering and want to continue on a completely different day, outputting the data to the same table. Just pass the -TableName parameter with the name of the original table, and don’t pass the -NewTable switch parameter:
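Continuing the Chewie example, the resumed collection would look roughly like:

```powershell
# Same table, no -NewTable: new readings are appended to BufferManager_Chewie
Set-CollectPerfCounter -PathConfigFile "C:\temp\BufferManager_Chewie.xml" `
                       -PathOutputFile "C:\temp\" `
                       -DateTimeEnd "06/02/2010 22:00:00" -Interval 10 `
                       -TableName "BufferManager_Chewie" -RunAsJob
```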

Remember, in this case you have to explicitly declare the XML file; in the case of the code snippet above, I restart the data gathering for Chewie, with C:\temp\BufferManager_Chewie.xml as the configuration file and the BufferManager_Chewie table as the SQL Server repository. The data is saved without creating a new table, and the output .txt file is always created as a data backup.

In these examples I used Windows Authentication, but you can pass the username and password as parameters. If you get stuck at any point and want some more information, just type get-help <FunctionName> -examples.

Well folks, I hope you get as much good use out of this module as I do. Once again, PowerShell wins.

Top Tip

When I was writing this article, I accidentally created 857 tables in SQL Server, and I had to drop them. Is that complicated? Not at all – here’s a hint:

dir | % { $_.Drop() }

…where % is an alias for ForEach-Object.


  • Math

    Congrats Laerte
    I’d like to say a few things. I downloaded this module the first time Laerte posted it on his blog. We found some problems and he promptly fixed them, answering all my emails quickly. Today I use this module on my 5 SQL Server servers and I can say it works very well. When Laerte says to contact him if you have any doubts, he tells the truth. In addition to extensive knowledge, he has a striking simplicity and humility. Congratulations Laerte, including for your MVP. It was more than deserved.

  • laerte

    Sweet, Math. Actually, none of this would mean anything if it were not for people and comments like yours. Thank you my friend; I think that after all the conversation we have had via email I can consider you one. I am honored 🙂

  • Phil Factor

    Another Good Article.
    Lerte, my friend,
    Your enthusiasm is infectious. Keep these glorious articles coming.
    One day you must tell me how it is possible to accidentally create 857 tables in a database. It is one thing I haven’t yet managed to do!

  • laerte

    Thanks a lot, Phil. A comment from you is always a great pleasure. Hahah, about the tables, I just used this script. It was actually supposed to be 57 (for my tests); I typed 857 by mistake.

    "Compras" | % {
        $TableName = $_
        0..857 | % {
            $SQL = "Create Table tbl_$($TableName)_$_ (codigo int, nome varchar(50))"
            Invoke-Sqlcmd -ServerInstance Vader -Database MundoNet -Query $SQL -SuppressProviderContextWarning
        }
    }


  • Phil Factor

    857 tables
    Wow! The power of PowerShell!

  • PH

    That’s the guy.

    My friend.

  • pinaldave

    Great Article
    I have been always admirer of the articles of Laerte.

    This article is the proof why I appreciate his knowledge. A well known community contributor.

    Kind Regards,

  • laerte

    Pinal, such kind words coming from you, my friend, a SQL Server legend, make me more proud and honored. Thanks a lot for all the support you have given, and still give, to me. 🙂

  • rreid740

    big thanks
    Great article!

    This has been on my mind for a while, never enough quiet time to think it out – thanks for sharing it!

    roger reid
    (who loves a good gui as much as the next guy, but prefers to leave eyes and fingers out of cron-type jobs…)

  • Felipe Santana

    Very good
    Laertes always showing the power of Powershell integrated with SQL Server.
    Great article, congratulations once again.

  • Barbosa

    Powershell Is Great
    I Love PowerShell.

    Thanxxx for this article.



    RE: Gathering Perfmon Data with Powershell
    Looks great! Is it still available? I’m getting a file not found from the “top of this article” link.



  • laerte

    DBA_DUDE, my friend, the link is OK (in PerfCounters at the top of the article, on the right). But if you cannot get it, ping me at . It will also be available in the next SQLPSX release (which should be ready in the coming weeks).

  • abdul samad

    Gathering perfmon data with powershell
    Hi Laerte,

    This article is just amazing and I want to congratulate you for writing such awesome script and great explanation.

    It really really helped me a lot, Thank you and great work Laerte.

    I am facing one problem while executing that script: When I am executing that script as job and I am executing it for 5 minutes with an Interval of 10 seconds,start time= 4 NOV 2010 at 6 PM(IST) and end time: 4 NOV 2010 at 6:05 PM (IST), below is the time schedule from the script in 24 hour clock.

    “11/04/2010 18:00:00” -DateTimeEnd “11/04/2010 18:05:00” -Interval 10

    The job is started successfully and running but it is NOT ENDING(stopping) at the specified -DateTimeEnd parameter, ie as per my settings it should stop after 5 minutes,but it is not stopping.

    I have to manually stop the job by executing the below script suggested by you:

    stop-job -id <job id>

    apart from that there are no issues in the script as of now.

    I will be very much thankful to you if you can help me in this regard.

    Abdul Samad

  • laerte

    Hi Abdul, thanks for the kind words my friend. I am very happy that you enjoyed it. I’ll do some testing on the samples that you sent, but please feel free to add me on MSN so we can talk better.

  • MortenD

    Thank you
    This is a really cool and powerful script.
    Thank you so much !!

    Unfortunately I’m not much of a developer myself, and I ran into an issue gathering process counters (CPU and Private Bytes) on all processes to create a top-x of the most consuming processes.
    The problem is that a process can exist at the time of creating the XML, but once I start gathering, that process might no longer be running.

    [$Increment].nextvalue() | Out-Null
    Set-CollectPerfCounter Error Detail :Exception calling “NextValue” with “0” argument(s): “Instance ‘splwow64’ does not exist in the specified Category.”[0]

    I bet some of you hardcore PS guys might have a workaround on that. I would really appreciate your help.

  • laerte

    Hi MortenD, thank you for reporting this issue. As we discussed by email, I will create a new release in SQLPSX with the correction that we made.

    Cheers 🙂

  • MortenD

    Problem solved
    Thanks to Laerte
    I had my problem solved.

    Now the module can handle monitoring of, for instance, system processes, where a process that was running when the trace profile was created may not be running at gathering time.

    Thank you for your time and help.
    I’m looking forward to future improvements of this GREAT tool

  • Zach.Skinner

    A few issues
    I maintain several instances which contain hyphens in their host names. Not all SQL statements that are built within the module encapsulate the database name with brackets.

    Additionally, there is a bug in the Get-ProcessPerfcounter function. When writing to a SQL table, the function does not update the current time to stop the loop. Specifically, the $now variable should be updated before calling continue on line 96.


  • laerte

    a few issues
    Hi Zach, can you email me?

    Thanks 🙂