What is wrong with benchmarking software? Not much, you’d have thought. A while back, Laila Lotfi wrote an editorial on the need for a standard benchmark for Object-Relational mappers, such as Entity Framework and NHibernate. By how much do they really slow down database applications?
When the developers over at x-tensive.com, creators of the DataObjects.Net ORM tool, produced a series of benchmarks much as Laila suggested, they published the results on the ORMbattle.NET website. Nobody could have imagined the resulting brouhaha.
Derision has been heaped on the stated intent of the site: to provide an honest, objective comparison of the performance of various ORM tools for .NET. Instead, it has been portrayed as a “sleazy”, “classless”, underhand marketing trick, using biased, inaccurate and unrealistic benchmark tests deliberately designed to show competing tools in the worst possible light.
There were many complaints to the effect that the benchmark tests are useless because the tool “should never be used in that way”. Yet if a tool can be used in a certain way, it almost certainly will be, and it is up to the tool’s creator to make sure it stands up as well as its competitors do. Although some interesting points emerged about the “bulk processing” nature of the benchmarks, and the manner in which transactions were used, any “knowledge shrapnel arising from the explosion of ORM minds” was largely drowned in a sea of acrimony, personal affront and mud-slinging. I imagine the sight of such a brawl sent a chill down the spine of any manager who had been planning to adopt ORM technology.
This comes at an unfortunate time. The IT industry increasingly suspects that the performance and scalability problems that come with ORMs outweigh the benefits of ease and speed of development. DBAs will point unfailingly to the poorly optimised SQL that these ORM tools often produce; ORM supporters, in turn, accuse developers of not understanding their tool and its features. If the latter is true, then such benchmarks are doubly important. It is not easy to produce realistic, fair benchmarks, especially for complex ORM tools, but if the community engages and perseveres, meaningful comparisons can be achieved, and one can learn a great deal from the process. We are crying out for objective benchmarks, and if the ORM industry itself cannot agree on how to produce them, then perhaps benchmarks will have to be imposed on it.
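The “poorly optimised SQL” that DBAs complain about usually means patterns like the classic N+1 SELECT, where naive lazy loading fires one query per parent row instead of a single join. A minimal sketch of the effect, using Python and SQLite purely as a stand-in for a .NET ORM and its database (the schema and names here are invented for illustration):

```python
import sqlite3

# Contrast the "N+1 SELECT" pattern that naive ORM lazy loading can
# generate with the single JOIN a DBA would write by hand.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cal');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0);
""")

def lazy_load_totals(conn):
    """One query for the parent rows, then one more per parent: N+1 queries."""
    queries, totals = 0, {}
    customers = conn.execute("SELECT id, name FROM customers").fetchall()
    queries += 1
    for cid, name in customers:
        row = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
            (cid,)).fetchone()
        queries += 1
        totals[name] = row[0]
    return totals, queries

def joined_totals(conn):
    """The same result from a single hand-written JOIN."""
    rows = conn.execute("""
        SELECT c.name, COALESCE(SUM(o.total), 0)
        FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name
    """).fetchall()
    return dict(rows), 1

lazy, n_lazy = lazy_load_totals(conn)
joined, n_joined = joined_totals(conn)
assert lazy == joined           # identical results...
print(n_lazy, n_joined)         # prints: 4 1  ...from 4 queries versus 1
```

With three parent rows the gap is trivial; with thousands, each of those extra round trips carries its own latency, which is exactly the kind of behaviour a fair benchmark ought to expose rather than hide.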
As always, we’d love to hear what you think.