I have always loved data. Daughter of an engineer, I am a strange combination of creative and analytical. Confronted by a problem, I start digging for information, facts, research, data . . . . But data is only as good as the design of the research that produced it–and there’s a lot about human behavior that can’t easily be answered by numbers, statistics, surveys or testing. Given today’s obsession with measurement, metrics and the idea that we can use data to make management decisions that have a real impact on human (and animal) lives, I am uneasy with the idea that data is the new Valhalla of productivity and profitability.
That’s because human beings are infinitely variable. They often don’t really understand their own reasons or motivations, have inaccurate ideas about their own performance and behavior, and while they may think they are answering truthfully, they can’t always know what they would do–or actually do–in given situations.
Focus groups are a good example. For decades, focus groups were a foundation of marketing research, until it became apparent that people’s answers in a research setting were mostly unrelated to what they would do in the real world. If asked whether they liked a product well enough to buy it, many said yes. But in the real world, they often didn’t buy because there were so many other variables–competing products, other obligations for resources (bills), emergencies, and the demands of family, work, charity, etc. Over time, researchers began to develop different forms of “focus groups” that gave more reliable responses.
People will also do things they could never imagine themselves doing–a fact revealed by two famous social psychology studies dating back to the 1960s and ’70s. The first, Milgram’s obedience experiments, tested participants’ willingness to punish another test subject’s wrong answers with apparently painful electric shocks. The second was the infamous Stanford prison study, in which volunteer “guards” quickly became abusive toward volunteer “prisoners” who were in reality their fellow students.
In both cases, probably few if any of the participants would have said beforehand that they could imagine behaving that way. So it’s clear that we can’t even predict our own behavior, let alone others’, based on self-reported data. So what about other kinds of data?
While I love “concrete” data based on quantifiable measurements, I know that data of any kind, in and of itself, isn’t an infallible tool. The biggest challenge lies in the design of the data collection and the interpretation of the results. Then there is the question of how people apply the data. I’ve seen people apply a particular bit of data to a question for which that data is not terribly relevant or meaningful, or use it to lead the reader astray from the real question–which is sometimes the presenter’s purpose. It offers a superficial view and/or falls into the category of logical fallacy. You can train people to understand the concept of logical fallacies, but many fallacies are not easy to spot–and even if you’re used to looking for them, all of us can sometimes be sucked in by their apparent relevance.
The best way to solve a problem is to target the root cause. But data doesn’t always reveal the root cause easily. Data may tell us what is happening, and may even tell us when, where and how. But it’s unusual to find good data that digs down into why something is happening. And why is the one thing that makes the critical difference.
I’ve worked on instructional design projects with clients who told me the fundamental problem was that “employees won’t do what they are supposed to do” despite previous training. They were certain that if they subjected the employees to repeated training, at some point it would stick. As I explored the situation with them in discussion, I heard clues about people’s attitudes, frustrations, relationships, interactions, conflicts and feelings. I began to suspect the real issue was that managers and supervisors needed as much training as their employees, or more.
That’s not surprising. Buried within every story of training or teaching are clues to human behavior that won’t show up clearly on surveys, performance indexes, studies, etc. They are hidden in the “why” factor–or sometimes in what one of my favorite social psychology professors called the “Y” factor.
The Y factor is something that connects one behavior with another, but which isn’t clearly visible. Instead, only the most visible elements (think of them as X and Z) get linked together, leading to conclusions that really make you scratch your head.
He told the story of a village in England that had more storks nesting on house chimneys than most other villages. (Note: chimneys in England are generally constructed a bit differently than they are in most parts of the U.S., and central heating was not as common.) Curiously, the homes where more storks liked to nest were also the homes that had more children!
While it looked at first as if the old folklore that “the stork brought the baby” was true, something else was clearly in play. Researchers started to examine the “data” to consider what the real link between these two facts was. A bit of logic soon revealed the truth: houses with more children had more rooms, meaning more fireplaces and chimneys, which were in more frequent use. (Lots of links there.) The storks liked the warmth, and gravitated to those houses. (Which led to all kinds of other intriguing questions, like: is this how the folklore started?)
The initial “data” just provided the facts. But how the “dots” are connected is what really matters.
Clearly, if your people aren’t doing something the way you want, it might be skill or knowledge based. But it might also be process or equipment based. It might also be simply that people are reacting to something not actually related to the observed behavior–whether consciously or unconsciously. It might be that a procedure doesn’t make sense to those who are closest to the actual process, or the time required is burdensome, or the process appears to have no benefit (and it’s possible it actually doesn’t)–or a host of other factors. It could be a lack of communication, or a dislike of something or someone. It can even be something as simple as not enough time to do what’s demanded, stress factors that distract, or competing priorities.
Some of these will show up on measuring systems. But many organizations have systemic obstacles that are buried under tradition, a need to control, a lack of trust, limited equipment, confusing scheduling and a vast array of other operational elements–all of which have a direct impact on those who must function within them, but a less direct impact on those who see only the top layer.
If someone is already focused on one area in need of correction, like getting workers to do what they’re supposed to do, it’s unlikely they will find it easy to step back and ask . . . could this be about more than the three obvious links–the workers, the task and the behavior? There are ways to dig for the root cause of a problem, but the real “why” is often a bit more challenging. The “Five Whys” process works well . . . but even that can lead you down a garden path if you aren’t careful. One false answer along the way makes the final answer inaccurate.
In the case of my clients whose employees “wouldn’t do what they were supposed to,” I listened and asked a lot of questions. What I heard often made me suspect that in addition to employee training . . .
a) managers needed assertiveness training,
b) managers (and the organization) needed to make use of employee experience to find a better way,
c) someone needed to make the case for why this process was essential by communicating the bigger picture to everyone (or be ready to discover there wasn’t a really good reason for the process),
d) the managers knew it was not the best system, but they were afraid to tell senior management . . .
All of these would require different approaches to training–including who to train, and what to train.
Unless your data collection is designed to deal with this level of discovery, you may miss the mark on solving the problem–and in the meantime, drive your employees (at all levels) crazy.
Don’t get me wrong. I love good data. Just be sure that your data is actually designed to target the root cause–and that it takes into account the hard-to-pin-down human elements and issues that aren’t as easy to quantify but can be just as important.
And look for the hidden “Y” factors that could keep people from knowing, or at least willingly acknowledging, what’s really behind a performance issue.
© Chanda K. Zimmerman, 2012-2020