
By Laila, Simple-Talk Editor and Product Marketing Manager

Analysing and measuring the performance of a .NET application (survey results)

Published 8 March 2010 7:52 am

Back in December last year, I asked myself: could it be that .NET developers think that profiling the performance of their code takes three days and a PhD?

What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance?

So, a few weeks ago, I decided to get a 1-minute survey up and running in the hopes that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions:

[Images of the three survey questions]

Amazingly, 533 developers took the time to respond – which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you.

First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, “Have a little faith Laila!”)

[Graph: how much respondents care about application performance]

When asked what tools they use to measure and analyse application performance, nearly half of the respondents said they use logging statements, a third use performance counters, and 70% use a profiler of some sort (a third-party performance profiler, the CLR Profiler, or the Visual Studio profiler).

[Graph: methods used to measure and analyse performance]
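For anyone who has not tried the performance-counter option from the graph above, here is a minimal sketch of what reading a couple of standard Windows counters from .NET looks like. The category and counter names are the built-in ones, but the one-second sampling interval and console output are just illustrative choices, not a recommendation.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterSnapshot
    {
        static void Main()
        {
            // Machine-wide CPU usage and CLR garbage-collection overhead.
            // The first NextValue() call only establishes a baseline (it
            // returns 0), so sample again after a short delay.
            using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
            using (var gc = new PerformanceCounter(".NET CLR Memory", "% Time in GC", "_Global_"))
            {
                cpu.NextValue();
                gc.NextValue();
                Thread.Sleep(1000);

                Console.WriteLine("CPU:        {0:F1} %", cpu.NextValue());
                Console.WriteLine("Time in GC: {0:F1} %", gc.NextValue());
            }
        }
    }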

The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you already have logging in place throughout your code, you still face a fair amount of potentially error-prone calculation when sifting through the results; in addition, you’ll only get method-level rather than line-level timings, and you won’t get timings for any framework or library methods you don’t have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place.
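To make the comparison concrete, here is a minimal sketch of the kind of hand-rolled instrumentation I have in mind, using Stopwatch and Trace.WriteLine; the OrderProcessor class and the work it pretends to do are purely hypothetical.

    using System.Diagnostics;
    using System.Threading;

    public class OrderProcessor
    {
        // A hypothetical method, timed by hand.
        public void ProcessOrders()
        {
            var sw = Stopwatch.StartNew();
            try
            {
                Thread.Sleep(250); // stand-in for the real work
            }
            finally
            {
                sw.Stop();
                // This gives method-level timing only; the log output still
                // has to be collected, parsed and aggregated by hand, and the
                // call removed (or gated behind a switch) before release.
                Trace.WriteLine(string.Format(
                    "ProcessOrders took {0} ms", sw.ElapsedMilliseconds));
            }
        }
    }

Multiply that try/finally boilerplate across every method you care about and the appeal of a tool that does it for you becomes obvious.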

On the other hand, profilers do all the work for you: they automatically collect CPU and wall-clock timings, and present the results at every level, from whole methods all the way down to individual lines of code. Maybe I’m missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements.

Finally, while a third of the respondents didn’t have a strong opinion about code performance profilers, those who did have an opinion mainly found them complex to use and time-consuming. Three respondents in particular summarised this perfectly:

“sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem”.

“they are simple to use, but the results are hard to understand”

“Complex to find the more advanced things, easy to find some low hanging fruit”.

These results confirmed my suspicions: profilers are perceived as tools designed for more advanced users who can use them effectively and make sense of the results.

[Graph: respondents’ perceptions of code performance profilers]

I found yet more interesting information when I started comparing the samples of developers “for whom performance is an important part of the dev cycle”, those “for whom performance is only looked at in times of crisis”, and those “for whom performance is not important, as long as the app works”. See the three graphs below.

Sample of developers for whom performance is an important part of the dev cycle:

[Graph: performance measurement methods used by this sample]

Sample of developers for whom performance is important only in times of crisis:

[Graph: performance measurement methods used by this sample]

Sample of developers for whom performance is not important, as long as the app works:

[Graph: performance measurement methods used by this sample]

As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers for whom performance is an important part of the dev cycle tend to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are.

So all in all, to come back to my original questions:

.NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third-party performance profilers are the performance measurement methods of choice for most developers.

Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.

3 Responses to “Analysing and measuring the performance of a .NET application (survey results)”

  1. Jay Cincotta GIBRALTAR Software says:

    Thank you, Laila, for conducting this survey and for sharing your results.

    I’m one of the respondents for whom performance is an important part of the development process. And, like your results show, my three favorite tools are logging statements, performance counters and performance profilers (I’m an ANTS customer, btw!).

    I find all three useful because they provide complementary information. Performance counters are great for measuring performance at a macroscopic level. Logging statements provide important context about what the application is actually doing and capture essential non-performance-related information such as exception details. And performance profilers drill down into the details of exactly where the time is going.

    As you point out, the challenges with logging statements are the effort to instrument applications and the tedium of analyzing all that data and extracting important information. I found this to be such a recurrent annoyance that I have spent the last few years building a company and product around an elegant solution called GIBRALTAR. It uses aspect-oriented programming to simplify instrumentation, automatically collects performance counters, integrates with multiple logging frameworks, and provides powerful analysis and visualization tools to make sense of the collected data. Best of all, the data can be continuously monitored from fielded production applications, not just in test.

    Professionals who care about the quality and performance of their software use a wide variety of tools. Performance profilers like ANTS are an essential part of the tool belt, but the best results accrue from integrating data from a combination of tools that complement each other and provide a fuller understanding of application performance from multiple perspectives.

  2. Anonymous says:

    Interesting Finds: March 9, 2010

  3. randyvol says:

    Awesome analysis; quite thorough. You are to be commended!

    Having read the results, I’m left scratching my head, then, as to why so many commercial applications tend toward sluggish/bad performance?

    I’m also struck by the fact that, at least here, when applications go ‘sluggish’ invariably we find them exhibiting the infamous “Not Responding” title bar – making it impossible to figure out what is going on.

    At this point in the age of the Windows O/S, one expects more granularity in explaining the status of a sluggish application than a universal “Not Responding” title bar.

    Perhaps .Net developers care more than other types of Windows developers?
