Benchmarks Don’t Tell the Full Story. Here’s the Real Deal.

I am just going to say it: good benchmarking is difficult. Proprietary solutions exist, but the confidence they offer about how you stack up against peers deserves cautious skepticism. Let me give you an example: workplace safety.

Safety Data 

Both OSHA and BLS track workplace safety. Without boring you with details, there is much to be learned from comparing their two approaches to fatality and DART (Days Away, Restricted, or Transferred) rates (the rate calculation itself is sketched just after the list below).

  • Both stand up to scientific rigor. 
  • Both are comprehensive, covering a statistically meaningful breadth of industries and regions.
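
For the curious, the DART rate itself is standard arithmetic: DART cases normalized to 200,000 hours worked, the equivalent of 100 full-time employees for a year. A minimal sketch, with figures invented for illustration:

```python
# DART rate per the standard OSHA-style incidence formula:
# cases normalized to 200,000 hours worked
# (100 full-time employees x 40 hours/week x 50 weeks/year).
def dart_rate(dart_cases: int, hours_worked: float) -> float:
    return dart_cases * 200_000 / hours_worked

# Example (invented figures): 3 DART cases across 450,000 hours worked.
print(dart_rate(3, 450_000))  # ~1.33 cases per 100 full-time workers
```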


But which one you compare yourself to matters, whether or not you are in an industry that OSHA requires to report.

  • One is self-reported via survey; the other is based on third-party inspections.
  • If you want comparisons by company size or occupation, only one of the two sources can provide them.


Across your portfolio, you may have different portcos with different DART comparison needs. One portco uses BLS, another uses OSHA, and yet another uses both. How you aggregate that at the portfolio level is no trivial matter.
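
To make that aggregation headache concrete, here is a sketch with invented portco numbers. Naively averaging each portco's rate lets a small company with one bad incident dominate the portfolio figure; a more defensible roll-up pools cases and hours and recomputes a single rate. And weighting is only one facet of the problem; it says nothing about which benchmark each number should then be compared against.

```python
# Hypothetical portfolio: (DART cases, hours worked) per portco.
portcos = {
    "PortcoA": (3, 450_000),
    "PortcoB": (1, 80_000),      # small portco, one bad incident
    "PortcoC": (12, 2_100_000),
}

# Naive roll-up: average each portco's rate. The small portco's
# single incident drags the portfolio number upward.
naive = sum(c * 200_000 / h for c, h in portcos.values()) / len(portcos)

# Hours-weighted roll-up: pool cases and hours, compute one rate.
total_cases = sum(c for c, _ in portcos.values())
total_hours = sum(h for _, h in portcos.values())
pooled = total_cases * 200_000 / total_hours

print(f"naive average: {naive:.2f}")  # ~1.66
print(f"pooled rate:   {pooled:.2f}")  # ~1.22
```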

Hoo boy. That’s a lot, I know. It’s no wonder PE firms are attracted to the promise of simplification. But is it oversimplifying when literal lives are at stake? 

When it comes to benchmarking, the promise most IMM vendors make simply cannot be trusted.

Why you can’t trust that proprietary benchmark

First, let's be clear about something. Proprietary benchmarking is the problem here, not the practice of benchmarking. The example described above is not proprietary: anyone is free to look up injury rates in the BLS and OSHA data sources, and you can definitely find meaningful comparisons there. The data might be hard to interpret and mind-numbing to compute, but that is why you would use Tablecloth. The effort is what can be simplified, not the benchmark itself.

Proprietary benchmarking is where the trouble starts, and it comes in two branches:

  • Algorithmic, big-data solutions
  • Reportive, small-data solutions

Do you trust the algorithm?

Algorithmic solutions require massive amounts of proprietary data (think: register receipts, aggregated HR and payroll records, cell phone records, supply chain records, leasing and land use records; the list is endless). They use that data to build algorithms that interpret it and generate analyses that can help identify, for example, the likely carbon footprint of a company's supply chain. Look, it's fascinating work, and it's done by a lot of smart people trying to solve a very real need. But at the end of the day, it's a black box. Even when firms share their methodologies, and some do, the proprietary nature of the algorithm itself means we have to trust their math without being able to verify it. You're on shaky ground there.

Do you trust your peers?

Reportive solutions rely on small data. Not only are the datasets restricted in size, but the scope of inquiries that can be made from them is also limited. What goes into reportive data? 

  • It’s self-reported. These are surveys, and they’re subject to the human element. 
  • The sample sizes are small. 
  • The time series are too short to judge trends.
  • Data points arrive pre-aggregated. Because they are not calculated from operational data, they can't be segmented for cross-cut analysis.


On top of all that, there is little incentive to report honestly. Firms are asked to be truthful, and there is no reason to presume bad faith, but these unregulated reports are often aimed at an audience more interested in checking boxes than in conducting a thorough longitudinal study.

So what?

We often focus on the risks and shortcomings of different approaches, but the real issue is why we feel the need to benchmark in the first place. Simply highlighting the limitations of benchmarks doesn't address the fundamental question of their purpose. 

Consider this: even if you have a benchmark that you trust - whether it's proprietary or based on public data - and you believe it's the right one for you, why is that the case? What are your LPs asking for when they request benchmarks? 

LPs receive metrics from multiple GPs, each with their own reasons for choosing their benchmarks. Are they primarily interested in knowing whether you're in the top quartile for your chosen benchmark, or do they want to see that you're making progress in the right direction? 

If you're just taking the test to make the grade, what have you truly proven? The real benchmark should be your own performance over time. If you're consistently improving, learning from your experiences, and delivering strong results for your LPs, the only benchmark you need is yourself. 
