Contributed by Steven Spear, Senior Lecturer at the MIT Sloan School of Management and at the Engineering Systems Division at MIT. Spear teaches the MIT Sloan Executive Education program Creating High Velocity Organizations. He is the author of The High Velocity Edge: How Market Leaders Leverage Operational Excellence to Beat the Competition.
The suppression of bad news, when consequential, is easy to decry. Yet it is more common than we care to admit. Understanding its source and overcoming it are key to enterprise dynamics that support agility, resilience, and reliability and, ultimately, great performance.
On March 29th, The New York Times reported, “China Created a Fail-Safe System to Track Contagions. It Failed.” After getting ambushed by the SARS contagion in 2002, China resolved never to be blindsided by a pandemic again. The country built a reporting system to quickly and easily pull reports about disease from localities and make them visible and actionable at regional and national levels. Sounds great in theory — see and solve small local problems while the cost to fix is low and lead times are generous.
So, what happened? Not wanting to be bearers of bad news, local officials suppressed reports until the deluge was uncontainable. It's a tragedy. Given everything the public has learned about “flattening the curve” and exponential growth and spread rates, one can assume that even a little more lead time would have had a significant effect on the disease’s spread in China and beyond.
Before taking chip shots at China for being peculiarly unwilling to surface bad tidings, let’s recognize that such an aversion is all too typical. Harvard Business School doctoral graduate Marcelo Pancotto did a study across plants that have what's known as andon cords—simple devices for shop floor associates to call out problems that make work difficult. In one plant, associates were pulling the cord regularly, more than once an hour, all day, every day. In another, hardly a cord was pulled and hardly a problem reported. You’ve guessed the irony. The plant with the frenzied cord pulling was the high-quality, high-productivity one. The plant with the least cord activity? Awful by about every measure.
Dr. Pancotto asked, why?
In the high-performing plant, associates knew that when they called attention to themselves, a cascade of help was triggered, the first and typical response being: “What’s difficult, and what can we do to help?” In contrast, in the dregs plant, associates realized that, at best, there was no response. At worst, someone did show up with the accusatory “What’s your problem?!” followed by the insistent “Let’s get back to work.”
Another colleague, working a summer job years earlier in the same system, was tasked with putting “A OK” stickers on rear windows after final inspection unless he had problems to call out. When he let a few cars pass without stickers because of visible issues, he got called out by his supervisor for making trouble. When another car came by without a sticker and he got chewed out again, he had to point out that the car actually had no rear window onto which to affix the sticker. Finally, this colleague realized his job wasn’t calling out problems; it was putting stickers on windows. So he found where in receiving the window crates came in, opened them, stickered all the available windows, and spent the remainder of the week catching up on his reading.
Why does this problem suppression/reporting aversion occur? Here’s a theory.
When we first start an undertaking, we’re full of unanswered questions and problems that have yet to be resolved. What value are we actually trying to create, whose needs are we trying to meet, and what combination of science, technology, and routines will be effective and efficient? We're happily in an exploratory and experimental mindset.
Once we converge on reasonable answers to those problems, our challenge is less to discover our way out of the darkness and more to ensure consistency and predictability. And once institutional norms shift toward reliability and predictability, rewards accrue for those who “get it right.”
All well and good until our operating environment changes and we need to reengage those withered muscles once tuned for seeing what’s wrong as a precursor to making it better.
What's the real problem? In all systems, brand new and experimental or long-standing and operational, things go wrong all the time. Most of those everything-all-the-time problems are small, distributed, and largely inconsequential. Those are exactly the ones we should pay attention to, as correcting them takes less effort, and there's ample lead time to act before they matter.
What happens instead? In the operational "keep it predictable" mindset, the little problems, slips, mistakes, and close calls get swept under the rug. It's not pathological. It's wanting to prove that things are still stable and reliable, despite the aberration.
What's the leadership call to action? Make it safe for people to call out problems when and where they are seen, and respond with a "what went wrong; what can we do?" rather than a "keep it to yourself" approach.
For examples of such leadership behavior, check out chapters 4 and 9 in The High Velocity Edge. For tools for supporting such a discovery dynamic, see our apps at See to Solve.