If you don't think curling up with the latest issue of the Journal of Econometrics is the recipe for a fun weekend, you may not have heard of the Sokal hoax. In the 1990s, a mischievous physics professor at NYU got an outlandish, jargon-laden article published in a legitimate academic journal, and a later group of hoaxers inspired by him repeated the trick at scale, placing papers that included one on whether "dogs suffer oppression based on their perceived gender." (For a quick dive into the controversy, check out this great piece from the Atlantic.)
Because the articles were filled with jargon and mimicked politically correct themes, they were accepted without anyone applying common sense. My goal in sharing this is not to litigate another round of the culture wars or to bash academics (my father holds a PhD and I have taught at several universities). The recent rise in conspiracy theories and anti-science sentiment is sad, but that's a whole other article for somewhere else. Still, when something in our own IT industry doesn't pass muster, we need to call it out, no matter who says it.
Measuring success
As both a technology consultant and a business school professor, I encounter needless confusion and obfuscation that hampers efforts to truly improve organizations. I routinely talk to senior executives about metrics, and not a week goes by without my being perplexed at how some of them evaluate IT or measure its success. Every industry has its own rules of thumb and metrics that veterans can recite. But how rigorously, and how often, are those concepts ever tested?
For example, I recently asked a CFO about her IT setup. She told me that many of her employees use personal computers at home for their work. She then confessed that she didn't have a cybersecurity training program to help staff ward off the latest tricks hackers use to infiltrate networks. She wasn't sure how many help desk tickets her team logged on average each month, how long they took to resolve, or what impact they had on employee productivity. She had an annual rather than quarterly IT roadmap. Yet when I asked what rating, from one to ten, she would give her IT department, she modestly said "9.5 out of 10." I managed to hide my surprise.
This case study reinforces why the right metrics matter and why they need to be assiduously evaluated. A "5.8 out of 10" from me could actually reflect a much better-run IT department than her 9.5, depending on which metrics I use and how I measure them.
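To make that concrete, here is a minimal sketch in Python of how the same department can score very differently depending on which measurements you fold into the rating. The numbers, weights, and metric names are all invented for illustration; they are not drawn from the CFO's actual environment or from any standard scoring model.

```python
# Hypothetical sketch: one IT department, scored two ways.
# All measurements, weights, and thresholds below are made up for illustration.

measurements = {
    "avg_ticket_resolution_hours": 4,    # the help desk is quick
    "security_training_coverage": 0.0,   # no cybersecurity training program
    "personal_devices_share": 0.6,       # many staff working on home PCs
    "roadmap_reviews_per_year": 1,       # annual rather than quarterly roadmap
}

def score_speed_only(m):
    """A rating built almost entirely on help desk speed."""
    return 10 - min(m["avg_ticket_resolution_hours"], 10) * 0.5

def score_balanced(m):
    """A rating that also weighs security posture and planning cadence."""
    speed = 10 - min(m["avg_ticket_resolution_hours"], 10) * 0.5
    security = 10 * m["security_training_coverage"] * (1 - m["personal_devices_share"])
    planning = min(m["roadmap_reviews_per_year"], 4) / 4 * 10
    return 0.4 * speed + 0.4 * security + 0.2 * planning

print(f"Speed-only rating: {score_speed_only(measurements):.1f} / 10")  # looks impressive
print(f"Balanced rating:   {score_balanced(measurements):.1f} / 10")    # tells another story
```

Neither number is "the truth." The point is that the rating is only as honest as the metrics behind it.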
Finding the right metrics
Finding the right metrics affects everyone. In the retail industry, for example, same-store sales are a key barometer of success: they compare sales at stores that have been open for more than a year against the same period a year earlier, which makes a lot of sense. In advertising, marketers worry a lot about reach and frequency, i.e., how many people in a target market see a message and how often they see it. In baseball, true aficionados talk about WAR (wins above replacement), a proxy for the value each player adds to a team. Each of these has flaws, however, if viewed in isolation.
When I sold international advertising to large corporate clients for the Washington Post, the ultimate goal for some of them was not whether people saw the advertising and bought anything, but whether readers took any action as a result of it, such as contacting a member of Congress or writing a letter to the editor. Airbus didn't advertise with the Post to sell airplanes but to educate Congress on how many American jobs it provided. And out of the Post's million-plus audience, Airbus was really only interested in the 10,000 or so readers who had any influence on Congress. In baseball, a good WAR still can't identify the guy who always seems to hit that home run in the playoffs when the team needs it, even if he went hitless for the prior gazillion games. (Howie Kendrick: your grand slam against the Dodgers is still fresh in my mind.)
The interplay between metrics is often just as important as the metrics themselves. No set is perfect, and metrics need to be continuously revisited to ensure they truly reflect what your organization cares about.
Metrics that give you confidence
What are the metrics you use to judge the success of your IT department or IT partner (if you outsource)? Do those metrics holistically integrate with each other? For example, if you make the mistake of judging success primarily by how quickly your help desk resolves problems, could it be that the root cause of those repeated issues goes untouched, so the incidents keep sapping productivity no matter how low you drive ticket resolution time? More troubling, are your biggest IT problems never even making it into the ticket queue because your employees are spending hours trying to fix things themselves? And if you measure success by how long you have gone without a cyberattack, does that reflect genuine confidence in your security, or have you simply been lucky that no one has targeted you yet?
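As a rough illustration of that first question, here is a minimal sketch in Python showing how an enviable average resolution time can coexist with a root cause that never gets fixed. The ticket data and field names are invented for illustration, not pulled from any real help desk system.

```python
# Hypothetical sketch: fast ticket resolution can hide a recurring root cause.
# The tickets, categories, and timings below are invented for illustration.
from collections import Counter
from statistics import mean

tickets = [
    {"id": 1, "category": "vpn_drop",       "hours_to_resolve": 0.5},
    {"id": 2, "category": "vpn_drop",       "hours_to_resolve": 0.4},
    {"id": 3, "category": "printer_jam",    "hours_to_resolve": 1.0},
    {"id": 4, "category": "vpn_drop",       "hours_to_resolve": 0.6},
    {"id": 5, "category": "password_reset", "hours_to_resolve": 0.2},
    {"id": 6, "category": "vpn_drop",       "hours_to_resolve": 0.5},
]

# The number most dashboards celebrate:
avg_resolution = mean(t["hours_to_resolve"] for t in tickets)
print(f"Average resolution time: {avg_resolution:.1f} hours")  # looks great on its own

# A second cut of the same data that surfaces the untouched root cause:
repeat_counts = Counter(t["category"] for t in tickets)
for category, count in repeat_counts.most_common():
    if count > 1:
        print(f"Recurring issue: {category} ({count} tickets this month)")
```

The headline number says the help desk is fast; the repeat count says the same problem keeps coming back, and someone keeps paying for it in lost hours.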
Organizations today are often too busy just keeping up with the latest technology, adapting to remote work, or simply sourcing computers for new employees amidst the global supply chain crunch. Is it any surprise they don't spend enough time figuring out which metrics to track and how those metrics impact their particular goals?
Before you start drafting your next IT strategic plan (you do have one, right?), spend more time vigorously debating which metrics you want to track and how they will shape your year. If you need help getting started on this exercise, or refining what you have already done, schedule a time with me for a free consultation.
About the Author
Ray Steen is the Chief Financial Officer & Chief Strategy Officer for MainSpring and has been with the firm since 2014. With over 25 years of experience in strategy, consulting and communications, his expertise arms clients with the strategies, tools and resources to meet their mission. Ray is a proud dad and coach to his 5 kids, a fantasy sports nut, and a die-hard Chicago Bears and Boston Celtics fan.