Archive | November 2014

Data, Thinking, & Wisdom

After learning the pitfalls of relying on common sense and our shortcomings in using heuristics for judgment, I am left with, “now what?” Knowing all the foibles doesn’t automatically, or easily, propel us to temper our old tendencies. Nor does the newfound awareness immediately lead us to newer and better tools with which to navigate our daily lives and, more importantly, to make weighty decisions. The tension between relying on intuition and researching facts and data will always be there. Intuition isn’t always faulty, and data don’t always inform better outcomes. Indeed, at the end of our quandary, we usually have to fall back on “making that judgment call.”

I am a firm believer in evidence-based management; however, I am dismayed whenever I hear of managers using data at the expense of humanity. The escalation of Black Friday shopping into Thanksgiving Day is one case of allowing data (here, the competition for profit) to supplant our higher inclinations; instead, we demonstrate with abandon our baser aspects, elbowing, tussling, shouting, or fisticuffs at the bargain sites. More and faster data mean that we have to absorb more information more quickly and make decisions even faster as well. So, in a sense, we are where we have always been…relying on “common sense” and “heuristics.” Regardless of how much better our tools for aiding decision-making become, the quality of the outcomes is still going to follow a bell-curve distribution, with half of them falling below the median. Even if we can shift the whole bell curve toward better-informed, higher-quality decisions, the bell shape of the distribution remains the same.

Yet, yet…a friend reminds me, “I’d rather have someone who decides with data than without data.” I have to concur, 100%.


Daniel Kahneman’s Thinking, Fast and Slow offers some clues and tools for improving our decision-making abilities. In his analysis, “System 1” handles quick decisions, using intuition built on years of experience in pattern recognition. “System 2” gets to mull over bigger and more important matters whenever we confront something unfamiliar or complicated. This division of labor has always been with us. However, whenever we improve System 2’s reservoir of knowledge, we also help System 1 produce better-quality decisions. Put another way, we accumulate knowledge and improve the quality of experience for System 2 so that System 1 can produce more accurate assessments and wiser judgments.

You might have heard this story about firefighters. As a group of firefighters started hosing down the kitchen where a fire seemed most intense, the captain heard himself yelling, “Get out of here!” The kitchen floor collapsed within seconds after the firefighters vacated. It turned out that the fire had originated in the basement beneath the kitchen. In reconstructing the scene, the captain realized that the fire had been quieter and hotter than usual. In the moment, however, he had no time for such analysis; his “sixth sense” propelled him to yell out. Herbert Simon, one of the most prominent social scientists, summarized such sentiments: “Intuition is nothing more and nothing less than [pattern] recognition.”

In most cases, experts’ intuition has a solid foundation. It is only when they get carried away with their own opinions (and egos) that they lead us into mischief. When experts’ opinions diverge widely, or contradict each other, that’s a warning sign.

As data become bigger and more complicated, and managers need to make decisions ever faster, the stakes seem higher. Paradoxically, as the stakes rise, managers grow even more reluctant to make judgment calls just when those calls are most needed. If a manager makes “a good call,” like the fire captain in the story above, we celebrate the intuition; if not, we criticize right away. Collectively, we no longer stop and ponder, gather and process information, and allow for some risk.

Relying on others’ opinions is, metaphorically, like relying on those who can see better in the dark than we do; that ability comes more naturally to some than to others. To push the metaphor further, we would be better equipped by learning from the blind (or, eyesight-challenged?!). In real terms, it is about learning from as wide a range of people (their skills and knowledge) as possible. That’s what true leaders do…learning from people who think and operate outside the box so that they themselves can eventually think outside the box. In other words, true leaders boost System 2’s capability to increase System 1’s reliability…a principle for everyone.

Thanksgiving is upon us. I wish you a safe, peaceful, and joyful holiday. I will resume in this space on December 7th.



Till then,

Staying Sane and Charging Ahead.

Direct Contact:

What Riots Can Teach Us

“Hows” always intrigue me. Learning about learning; how we behave differently in different contexts (how do we know?); or, how to think…

Mark Granovetter’s The Strength of Weak Ties, the seminal work on social networks, was the foundation of my own PhD dissertation. However, only recently, from reading Everything Is Obvious, did I learn about Granovetter’s “riot model” and how we rationalize the motivations behind baffling collective behavior. Like many people, I don’t always know what motivates me (though I can usually find plausible explanations with 85% confidence); nor can I unfailingly detect what motivates another individual, even a close friend. And we certainly do not know what motivates a crowd. Yet we sure can come up with brilliant analyses of crowd behavior.

A close up.

Granovetter’s “riot model” deserves a close look.

In Granovetter’s hypothetical scenario, we first imagine a group of 100 students in their college town protesting, for example, a proposed hike in government fees. While frustrated and angry, the students are perfectly willing to listen and have a dialogue. Still, they are prepared to “take action” if necessary, and some are more ready than others. Further, every student has a different basis for determining how much risk s/he is willing to accept: financial background, degree of vested interest in the political process, a chance to get on TV, comfort with physical engagement (or violence), and a myriad of others that we might not think of. And even if we can come up with many other reasons, how valid are they? Some students are going to be crazier than others, and they might be the first few to start throwing or smashing things. Lastly, in a potential or actual riot, even ordinarily sane people might behave in ways they would normally abhor.

Now, let’s also imagine that every single student has a different “threshold” for violence. Duncan Watts, explaining this “riot model,” defines the threshold in this scenario as “a point at which, if enough other people join in the riot, they will too, but below which they will refrain.” Some people need only 1: just one person shouting and throwing things is enough to push them to join the action. People with such a low threshold are the “rabble rousers.” Others who perceive more personal risk in a potential riot would have a higher threshold of, say, 25. And still others would have a threshold of 2, 3, or 10, etc.

In his “riot model,” Granovetter posits for this crowd of students a particular threshold distribution, in which every one of them has a different threshold, from 0 upward. In this example, the first “crazy” one starts throwing and smashing things because s/he has a threshold of 0; then the one with a threshold of 1 immediately follows, the one with a threshold of 2 joins next, and before you know it, a full-blown riot ensues.

Let’s stretch our imagination even more. Suppose in a different college town, another 100 students are protesting the same issue. Let’s assume that these students’ backgrounds (family, financial, and everything else) are identical to those of the students in the first town. Everything is much the same except for one small difference in the threshold distribution: in this group, no one has a threshold of 3, and two students have a threshold of 4. To outside observers, the two groups look and behave, before they start acting on their frustration, as similarly as can be perceived. But in this second group, there is insufficient momentum for a full-blown riot. After the first “Ms. Crazy” starts throwing things, the threshold-1 and threshold-2 students join in. Then things fizzle out. No one follows, since no one in the group has a threshold of 3.
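The cascade in these two scenarios is simple enough to simulate. Below is a minimal sketch (the function name and the exact threshold lists are my own illustration, not Granovetter’s notation): a person joins once the number already rioting reaches his or her threshold.

```python
def riot_size(thresholds):
    """Return how many people end up rioting, given each person's threshold.

    A person joins once the number already rioting meets or exceeds their
    threshold. Processing thresholds in sorted order works because once
    someone's threshold is unmet, everyone after them stays out too.
    """
    rioting = 0
    for t in sorted(thresholds):
        if t <= rioting:
            rioting += 1
        else:
            break
    return rioting

# Town 1: thresholds 0, 1, 2, ..., 99 -- each new rioter tips the next person.
town1 = list(range(100))

# Town 2: identical, except no one has threshold 3; two students have threshold 4.
town2 = [0, 1, 2, 4, 4] + list(range(5, 100))

print(riot_size(town1))  # 100: a full-blown riot
print(riot_size(town2))  # 3: the cascade fizzles after three people
```

One missing threshold value stalls the entire cascade, even though the two populations are otherwise indistinguishable from the outside.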

As outside observers, we puzzle over the “reasons” for the differences between these groups; we want explanations. We postulate all kinds of motivational possibilities, from the students’ temperaments, to local businesses’ reactions, to some particularly inflammatory comments by certain people, and on and on. They all sound reasonable. Yet the only difference is in the threshold distribution. Had we known about the two different threshold distributions, might we have been wiser? The only way we could have known the distributions in such detail (where the second group skips the threshold of 3) is by being intimately familiar with each and every one of the 200 students involved: how they interact with each other and what would tip each one into violent action. In reality, we can never know such detailed information about a population, or even a moderately sized group of people.

A smashed object can be interesting...from an old car with shattered windshield.

As with everything worthwhile in life, the detailed journey takes much longer than a summary of the outcome. So I have spent most of today’s space on this theoretical example to make these points:

1. This is a hypothetical situation, yet the lessons are important.

2. Even a clinically scrubbed, hypothetical case stretches our understanding. What does that say about our reality, where all social situations are so very much more complicated? Somehow, in our daily lives, we feel sure about our analyses and conclusions, and confidently make policy decisions on them.

3. We shouldn’t automatically assume that what motivates us is what motivates others. If we can’t ever wear another person’s skin (and imagining ourselves in that person’s shoes is really an inadequate substitute), perhaps, at least, we should try to listen more carefully? Of course, that’s much easier said than done. Does any school offer a course on “how to listen?”

Ultimately, this feels like a philosophical problem…so I will continue reflecting in the next post. Till then,

Staying Sane and Charging Ahead.


No Escaping From Our Biases

We can, however, learn to recognize and reduce our biases.

When we make a decision based on intuition, we are prone to bias. Even when we are given all the warnings about a potential trap, all the principles of “right” logic, or factual presentations, we still let biases slip through. This is true both for the general population and for “experts” in many fields. Since there is no such thing as “perfect” information on which to base decisions (besides, how would one even define “perfect information”?), it is inevitable that we sometimes rely on heuristics, or rules of thumb. The challenge is: how do we assess our own biases?

To start, we can learn the typical biases that reside in all of us. Daniel Kahneman’s and Amos Tversky’s work on the decision-making process, and more importantly on the irrationality underpinning it, was groundbreaking. Tversky passed away in 1996; Kahneman won the Nobel Prize in economics in 2002. Kahneman’s body of work is regarded as a foundation of “behavioral economics,” in which psychology is incorporated into understanding our economic decisions.


In their seminal article published in Science (1974), “Judgment under Uncertainty: Heuristics and Biases,” the two authors identified three major categories in which our biases run deep: representativeness, availability, and adjustment and anchoring.

In a nutshell, when we assign a person to a whole group based on some preconceived notion, i.e., a stereotype, we make the “representativeness” error. For instance, when we encounter a quiet man wearing dark-rimmed glasses and unfashionable attire, speaking with little eye contact, we may think he is an engineer, an accountant, or a librarian. In one of the many experiments Kahneman and Tversky conducted to test this hypothesis, they used this statement: “’Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.’ Is Steve more likely to be a librarian or a farmer?”

Most of us probably cannot shake off our initial reaction that Steve is likely a librarian, even after we learn that there are “more than 20 male farmers for each male librarian.” And farmers probably spend less time interacting with people than the average librarian does.

Using the “availability” bias, we evoke what we remember most readily (the images or memories that come to mind immediately and quickly) as the yardstick by which we make a judgment. By Kahneman’s own admission, for quite some time he believed that politicians were more likely than, say, doctors or lawyers (who had yet to become politicians), to commit adultery. How many of us feel the same way? The fact is that we know more about politicians’ transgressions because they get reported more often, and hence they are more available for us to recall. Another example: we are more jarred by seeing images of houses burning down than by merely reading about the incident. Perhaps in this case the visual impact makes us more vigilant about fires and our own situations? So, sometimes, biases serve us well?

In “adjustment and anchoring,” we let our initial encounter sway how we estimate a future outcome. Or, we give a quantitative estimate before we even know how to go about assessing the quantity. Here is a classic example:

Give an estimate, in 5 seconds or less, of the product of 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1.


Close your eyes.



Now give an estimate, in 5 seconds or less, of the product of 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8.


Or, better yet, try it on two friends, or two groups, and compare the estimates. Inevitably, people give a higher estimate for the sequence that begins with “8” than for the one that begins with “1.” In Kahneman and Tversky’s experiment, the median estimate for the former was 2,250, and for the latter, 512. Huge, no? The actual answer is 40,320.
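Both sequences are, of course, the same product: 8 factorial. A two-line check (sketched here in Python) confirms it:

```python
import math

descending = 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1
ascending = 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8

# Same product either way: 8! = 40,320, far above both median guesses.
assert descending == ascending == math.factorial(8)
print(descending)  # 40320
```

The first number seen anchors the estimate; the arithmetic never changes.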

How about this: Did Gandhi die at the age of 114? If not, how old was he when he died? Compare that with asking the same question another way: Did Gandhi die at age 35? If not, how old was he when he died? People anchored on 114 guess far older ages than people anchored on 35.

So, you say, “That’s all right; it’s just numbers and a test. In real life, we know better.” Then let’s look at purchasing a house. The initial asking price decidedly influences how we perceive the quality of the house: we would judge the same house to be of higher quality when the asking price is high than when it is low. Kahneman and Tversky conducted countless experiments testing this “anchoring” effect, and the findings have been reliable and robust. In one grocery-store experiment, when a “limit of 12 per person” was imposed, people bought twice as many of the advertised product as when there was no limit. Even judges have been shown, repeatedly, to be influenced by the anchoring effect (well, some of us know that judges aren’t really truly totally honestly objective).

All this is not to belittle humanity. As I said earlier, Kahneman and Tversky demonstrated again and again that even experts, including statisticians and including themselves, are prone to these judgment biases. I resonate deeply with what Kahneman lays out in his best-selling book, Thinking, Fast and Slow:

1. Pointing out our innate flaws doesn’t mean we should give up.

2. Given that it is much easier to detect logical flaws in others than to catch our own (just human nature), we can gradually deepen our understanding of the sources of potential bias by observing others.

3. Hopefully, over time, we can go from detecting biases in others to lessening the biases in our own judgment and decision-making.


This entry is wholly insufficient to describe the brilliant work of Kahneman and Tversky. Interested readers can check out Thinking, Fast and Slow. Or, for those who prefer a drier, more academic set of articles, go for Judgment under Uncertainty: Heuristics and Biases, edited by D. Kahneman, P. Slovic, and A. Tversky.

Till next time,

Staying Sane and Charging Ahead.


Counter Common Sense

As long as we employ “common sense” to guide only our own actions, we can’t really go wrong, because there is an almost infinite supply of commonsense advice for dealing with our daily situations. If some of these commonsense-guided actions seem inconsistent, so be it; life goes on. Similarly, when we employ common sense in decisions that impact large numbers of people, we can usually find some commonsense explanation to cite when confronted with criticism. In a way, common sense becomes the shield for our hubris. Politicians of any stripe can always find commonsense explanations that appeal to their supporters, however much those explanations evoke disbelief in their opponents. Managers can usually justify their decisions to their peers and superiors, but not to the others whose lives are most impacted.

As I mentioned at the beginning of this “common sense” journey, one of the major problems with using common sense to predict others’ behavior is that we inevitably assume too much. Erroneous assumptions on a large scale lead to all kinds of unintended consequences. Duncan Watts suggests that instead of using the familiar but untrustworthy “predict and control” model, we may want to switch to “measure and react.” As Mr. Watts points out in Everything Is Obvious, laypeople’s predictions are often no worse than the “experts’,” and frequently layman and expert are equally wrong. If the prediction is off, then the planning and control that follow from it are predisposed to go awry.


Just because we can’t confidently predict most complex systems doesn’t mean we can’t use probability to help make decisions. Of course, we still need to understand the nature of the phenomenon we are confronting. It’s one thing to plan for social behavior that happens with regularity, such as flu season or holiday shopping; it’s another to plan for known but infrequent phenomena, such as the impact of a category 4 hurricane, or for surprises like the “ice bucket challenge.” Seriously, who predicted the success of the “ice bucket challenge”?

In addition, we should be cautious about relying on “experts’” opinions. Why? Watts explains that it is because we largely consult experts one at a time. We would be much better off relying on polls of many people, experts and non-experts alike (or no experts at all), for input. Not only do experts cost more; they also tend to advocate more sophisticated models for “better control.” From Watts’ many experiments, and his reviews of others’, we learn that for making predictions, simple models do just about as well as the sophisticated ones. Or rather, the sophisticated models don’t bring enough return on the investment in all the additional information you have to acquire (at a cost, of course). Watts uses the example of sports games: the key factors for predicting which team might win are whether it’s a home game and what the historical data tell us about the teams. All the additional nuanced information helps only a little, not enough to make any significant difference.


With experiments, a “measure and react” approach would give organizations more immediate information on what the next step should be and how to implement it. For example, a company can advertise in one geographic area, or to one demographic group, and compare the results with similar markets. Of course, not every decision allows for experiments; imagine launching a military surge in one town but not in others.

Watts offers these additional, interconnected principles: “local knowledge,” “bright spot success stories,” and “bootstrapping.” Local knowledge brings more accurate information and focused skills to specific problems; in other words, one size cannot possibly fit all. “Local” personnel have a much better grasp of whom to contact, for which resources and how much, and where to focus those resources. Most issues organizations face are not brand new, so it’s efficient to look for ideas that have already been tried. But don’t just copy; by studying others’ successes closely, you can see how to adapt them to your needs. Underlying all of this is the notion of humility. Watts quotes William Easterly,

A Planner thinks he already knows the answer; he thinks of poverty [or whatever issue] as a technical engineering problem that his answers will solve. A Searcher admits he doesn’t know the answers in advance; he believes that poverty is a complicated tangle of political, social, historical, institutional, and technological factors…and hopes to find answers to individual problems by trial and error…A Planner believes outsiders know enough to impose solutions. A Searcher believes only insiders have enough knowledge to find solutions, and that most solutions must be homegrown.

Watts further drives the point home with this observation: “[Planners] develop plans on the basis of intuition and experience alone. Plans fail, in other words, not because planners ignore common sense, but rather because they rely on their own common sense to reason about the behavior of people who are different from them.”
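The advertising experiment mentioned earlier (run a campaign in one market, then compare against a similar, untouched market) can be sketched as a simple difference-in-differences calculation. The sales figures below are made up purely for illustration:

```python
# Hypothetical sales counts, before and after the ad campaign.
test_before, test_after = 1000, 1150   # market that saw the ads
ctrl_before, ctrl_after = 980, 1000    # similar market that did not

# Each market's own growth rate over the period.
test_lift = (test_after - test_before) / test_before
ctrl_lift = (ctrl_after - ctrl_before) / ctrl_before

# The control market's growth approximates what would have happened anyway;
# the difference between the two is the "measured" effect to react to.
ad_effect = test_lift - ctrl_lift
print(f"estimated ad effect: {ad_effect:.1%}")  # roughly a 13% lift
```

The point of “measure and react” is that the next step (expand the campaign, tweak it, or drop it) follows from this measured difference rather than from an upfront prediction.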

Of course, what I’ve presented in this space is based on my own “expertise and experience,” which likely commits the same commonsense fallacy even as I learn from Mr. Watts. So I strongly suggest that you read Everything Is Obvious: How Common Sense Fails Us for yourself.

Till next time,

Staying Sane and Charging Ahead.
