Thursday, March 5, 2009

The Art of Making Quality Decisions

Description: Quality decision-making is an intricate weave of experience, inquiry, and judgment. According to Michael Sacks and Steve Walton, professors at Emory University's Goizueta Business School, there are strategies that can make the process more effective.


It’s challenge enough for individuals to make decisions about things they know a lot about. It’s even tougher for them to make decisions in the face of uncertainty or when the information available is being poked, prodded and packaged to influence their decisions.

Individuals also have biases in terms of the information they consider important—whether it’s what they want to hear, an allegiance to what is already known, or information from a source with whom the individual agrees (e.g., when it comes to political commentary, Republicans are more likely to turn to conservative radio talk show host Rush Limbaugh, whereas Democrats might seek out the opinion of MSNBC news anchor and political commentator Keith Olbermann).

“People often have pre-existing opinions and positions and they seek out sources likely to confirm those opinions,” says Michael Sacks, associate professor in the practice of organization & management at Emory University's Goizueta Business School. Sacks worked for the U.S. Government prior to the 1992 Presidential election. He watched two candidates use the exact same unemployment data in ways that validated their very different positions. “It’s neither incorrect nor inappropriate,” Sacks explains during a recent Emory Executive Education seminar on Leadership and Decision Making that he co-led alongside Steve Walton, associate professor in the practice of information systems & operations management at Goizueta. While numbers are objective, the way they’re used is completely subjective. “We have to be smart and well trained as decision makers to pick that apart,” Sacks adds.

For instance, this past September, after a string of financial institution failures, a group of U.S. legislators penned a $700 billion “Bail Out Bill” in an effort to stabilize U.S. financial markets. But enough members of the U.S. House of Representatives were wary of the bill that it failed to pass. The bill’s authors reworked and renamed it. On October 1st, the Senate passed the $700 billion “Economic Rescue Package.” A day later, the House passed the bill as well.

According to Sacks, the framing of information is just as important as the information itself, and people respond differently based on how it is framed. Republican Ileana Ros-Lehtinen of Florida voted “No” on the original version of the bill but was swayed by the newer model. The Associated Press reported that Ros-Lehtinen described the original version as “a bailout bill for Wall Street firms,” but the later bill, armed with “relief for taxpayers and hard hit families,” was, she said, “an economic rescue package.” No doubt the framing of the second bill as a “rescue package” had some influence on the 58 members of the House who switched their votes.

“How you frame [data] can make it less bad,” says Sacks. “You pick the information you want to emphasize.” Politics is one area where the power of framing runs amok: “late term” versus “partial birth” abortion, a “surge” strategy versus an “escalation” strategy in the war in Iraq, a tax “cut” versus tax “relief.”

The power of incentive also influences decisions and is a type of framing. “People perceive risk differently if it’s a gain or a loss,” says Sacks, adding that it’s more painful to lose a dollar than it is pleasurable to gain one. A manager who wants employees to embrace a change will get more out of them by framing inaction as a crippling loss than by framing the initiative as a potential windfall.
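Sacks’ observation echoes what behavioral economists call loss aversion. Here is a minimal sketch in Python, assuming the value function and parameters popularized by Kahneman and Tversky’s prospect theory (the specific numbers are our illustration, not material from the seminar):

```python
# Illustrative sketch of the asymmetry Sacks describes. The value
# function and its parameters are assumptions borrowed from Kahneman
# and Tversky's prospect theory, not figures from the seminar.

def perceived_value(change_in_dollars, loss_aversion=2.25):
    """Losses loom larger than equivalent gains."""
    if change_in_dollars >= 0:
        return change_in_dollars ** 0.88                    # muted pleasure from gains
    return -loss_aversion * (-change_in_dollars) ** 0.88    # amplified pain from losses

print(perceived_value(100))    # felt value of gaining $100: about +57.5
print(perceived_value(-100))   # felt value of losing $100: about -129.5
```

The asymmetry is the point: under these assumed parameters, the felt sting of losing $100 is more than twice the felt pleasure of gaining it.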

Individuals also have a tendency to base their judgments on information that is readily available, vivid, and easily imagined. Better known in decision-making lingo as the “availability heuristic,” this tendency influences decisions all the time.

If, over the course of a football game, one team’s kicker makes three field goals but misses a potential game-winning field goal, fans will remember the one he missed—not the three that got the team to that point. Because the miss is more vivid and the play fans talk about most, people asked whether the kicker is any good may decide “no” based on that vividly recalled miss.

When the D.C., or “Beltway,” Sniper was on the rampage in 2002, some people drove to Maryland to get out of harm’s way. In fact, there was a far greater chance that they’d die in a car accident on their way out of D.C. than that they’d be killed by the sniper. “Widely publicized events get disproportionate weight. It’s the same in the workplace,” notes Sacks. “Ever had to go to a series of meetings over one strange, fluky thing?”

Associating a decision with a preexisting category—better known as the “representativeness heuristic,” or stereotyping—is also common in the workplace, and it can result in discrimination. If a manager notices that three employees who graduated from the same school are considered poor workers, he may predict that a current job applicant—with a degree from the same school—will not be a good employee either. “It’s about coming to a conclusion about people based on categories they may not fit,” Sacks says. “People aren’t aware they do this; that’s why it’s so pernicious. It’s really powerful stuff.”

According to Walton, people’s judgment is influenced not only by data and research, but by their opinions, history, and experience—all of which are subject to bias. For instance, decision-makers often go with the first reasonable option available even though it may not be the best, a common blunder political scientist Herbert Simon dubbed “satisficing.” “Sense-making,” satisficing’s decision-making cousin, is what happens when people make a quick decision and then, after the fact, rationalize why they made it.

Anchoring can also cause a decision-making blunder; it occurs when individuals rely too heavily on a single piece of information, or “anchor,” when faced with a decision. Sacks pointed to a 1974 study by Amos Tversky and Daniel Kahneman in which people were asked to guess the percentage of African nations that are members of the United Nations. Those who were first asked, “Is it more or less than 45 percent?” answered differently than those asked whether it was more or less than 65 percent. “People are very strongly affected by an anchor set up,” notes Sacks.

Why do more people go for a “Buy One, Get One 50% Off” deal than a 25% off sale, even though the amount saved is exactly the same? Why is gas priced at $1.99 a gallon and not $2.00? “Even if the anchor makes no sense, people are likely to fall for it,” says Sacks. “Numbers are powerful, but that power can be dangerous.” Benchmarking is a form of anchoring. Is someone who got a GMAT score of 660 really any less smart than someone who got a 680?
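The arithmetic behind the first question is easy to verify. A quick check, assuming two identically priced items (the $10 price is hypothetical):

```python
# Two promotions, same total cost -- assuming two identically priced
# items. The $10 price is a made-up example.
price = 10.00

bogo_half_off = price + price * 0.50   # full price for one, half price for the other
flat_25_off = 2 * price * 0.75         # 25% off both items

print(bogo_half_off, flat_25_off)      # 15.0 15.0 -- identical totals
```

The BOGO framing wins because “50% off” is a bigger, more vivid anchor than “25% off,” even though the totals match.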

Psychological accounting is another trick the brain plays on decision-makers. A couple that goes to an art festival, likes a piece of artwork, resists buying it because it costs $100, and then spends $110 on dinner has been tripped up by psychological accounting. “We create these artificial categories and we perceive them differently. We respond to the category, not the expenditure,” notes Walton. “It’s how it’s perceived versus how it really is.”

Psychological accounting can prove hazardous in the workplace. If a business unit is rewarded based on its own performance rather than that of the business as a whole, the result can be a misaligned company: one where business units work to better their own numbers even if doing so hurts another unit and the company overall.

Escalation of commitment is another common decision-making mistake. For example, a group of friends arrives at a favorite restaurant and is told the wait is 45 minutes. After an hour passes, they approach the hostess and are told it will be a little while longer. In the meantime, a member of the group discovers that a new, well-reviewed restaurant just down the street has no wait. Does the group go to the new restaurant? A study that chronicled a similar situation showed that 80% of people would continue waiting. Why? Because if they left, they’d perceive the time they invested at the first restaurant as lost. By continuing to wait, notes Sacks, they’re able to think there’s been value in the time they spent waiting.

“Organizations do it all the time,” says Sacks. “They have a strategy or a rule and they keep following it even if another course of action is preferable.” Years ago, evidence pointed to the superiority (and cost effectiveness) of using polyester to make tire cords, but E.I. du Pont de Nemours and Company (DuPont) decided to continue using nylon rather than reconfigure its factories. DuPont lost millions and eventually abandoned the tire business entirely.

“Organizations often run on inertia. Old policies take on a life of their own beyond their usefulness,” says Sacks.

More often than not, these biases appear en masse. “All of these things play together,” explains Walton. “And these things play badly with one another.” A leader can make an emotional commitment, then build an entire story via sense-making that rationalizes his or her commitment to a policy, a product, or a strategy, even in the face of evidence that things aren’t going well. And don’t always look to the data to help, says Sacks. Even qualitative data can be perceived differently depending on how it’s framed, collected, and used in analysis.

That said, there are qualitative tools that can correct bias, and many successful companies and leaders use them. Firms can use well-designed qualitative interviews and surveys to discover key factors affecting organizational performance. Focus groups can help managers analyze trends across groups, and qualitative data can be partially quantified using scoring models.
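As one illustration of that last point, a weighted scoring model turns qualitative judgments into comparable numbers. A minimal sketch in Python; the criteria, weights, and 1-to-5 scores below are invented for illustration:

```python
# A minimal weighted scoring model for partially quantifying qualitative
# input. The criteria, weights, and 1-5 scores are all hypothetical.
weights = {"customer_feedback": 0.5, "team_morale": 0.3, "vendor_quality": 0.2}

candidates = {
    "Option A": {"customer_feedback": 4, "team_morale": 3, "vendor_quality": 5},
    "Option B": {"customer_feedback": 5, "team_morale": 2, "vendor_quality": 3},
}

for name, scores in candidates.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: {total:.2f}")   # Option A: 3.90, Option B: 3.70
```

The weights make the judgment explicit: anyone who disagrees with a ranking can argue about the weights rather than the conclusion.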

There are other fairly straightforward ways to reduce bias. To combat sense-making, leaders should seek disconfirming evidence. Assign a “devil’s advocate” role on a team and seek diversity of thought when it comes to team members. “It’s great to be a collaborative team member, but sometimes you have to push each other,” notes Sacks. “With a more diverse team, you’re more likely to have someone on the team who doesn’t buy in.”

To fight off “satisficing,” leaders need to push themselves and their peers past the first acceptable option to find a better one. Assessing policies to determine if changing is preferable to the status quo will help prevent the escalation of commitment, and challenging the accuracy of isolated examples will help stave off availability and representative heuristics.

There are, however, times when a leader or manager is asked to make a decision and certain circumstances are out of his or her control. In these cases, quantitative problem solving is an effective way to make a focused decision, and building a decision tree is often a productive way of laying out the outcomes of decisions made under uncertainty.

Weather affects decision-making in some industries, but isn’t something the decision-maker can control. In September 1976, a storm approached California’s Napa Valley, home of the Freemark Abbey Winery, where William Jaeger, one of the winery’s partners, faced a decision: in light of the coming storm, should the winery harvest its Riesling grapes immediately or leave them on the vines? Jaeger knew that a storm shortly before the harvest could be detrimental, often ruining the crop. But a warm, light rain prior to harvest can cause growth of a beneficial mold, Botrytis cinerea, which results in a complex wine that is highly valued by connoisseurs and sells for more than double the normal price.

Walton uses this winery case study in class to teach executive students the process of building a decision tree. To be effective, explains Walton, leaders must determine the factors that influence the payoffs associated with a decision, assign a probability to each event, and estimate the payoff that results if the event occurs.

In order to make his decision, Jaeger laid out the various costs and payoffs associated with it. Each branch of the decision tree was dictated by “states of nature,” or factors beyond the winery’s control. At the terminal end of each branch is an outcome and the probability that it will occur. If Jaeger decides not to harvest the grapes, his payoff depends on whether or not the storm hits. If the storm does come, revenues depend on whether or not the mold forms. If there is no storm, the selling price of the wine will depend on sugar concentration levels. Once the tree was laid out, Jaeger used backward induction—working from the terminal payoffs back toward the root and picking the best choice at each decision node—until he arrived at the primary decision: not to harvest immediately.
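A rough sketch of how such a tree can be encoded and solved follows. The structure mirrors the case as described above, but every probability and payoff is a hypothetical placeholder (revenue in thousands of dollars), not a figure from the actual case:

```python
# A sketch of a decision tree solved by backward induction. The tree's
# shape follows the winery case described above; all probabilities and
# payoffs are hypothetical placeholders, not the actual case figures.

def solve(node):
    """Return a node's expected value, working from the leaves back to the root."""
    kind = node[0]
    if kind == "payoff":       # terminal outcome: a known payoff
        return node[1]
    if kind == "chance":       # state of nature: probability-weight each branch
        return sum(p * solve(child) for p, child in node[1])
    if kind == "decision":     # decision node: take the best branch
        return max(solve(child) for _, child in node[1])

wait = ("chance", [
    (0.5, ("chance", [                 # storm hits...
        (0.4, ("payoff", 67.2)),       # ...and the beneficial mold forms
        (0.6, ("payoff", 10.0)),       # ...no mold; the crop is degraded
    ])),
    (0.5, ("chance", [                 # no storm: sugar level sets the price
        (0.5, ("payoff", 42.0)),
        (0.5, ("payoff", 30.0)),
    ])),
])
root = ("decision", [("harvest now", ("payoff", 34.2)), ("wait", wait)])

best, subtree = max(root[1], key=lambda branch: solve(branch[1]))
print(best, round(solve(subtree), 2))   # wait 34.44 -- waiting edges out harvesting
```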

It’s possible that a manager’s decision may be swayed by incorrectly assessed probabilities (e.g., what if the chance of rain for the Napa Valley had been 80%, not the 50% reported?). But with sensitivity analysis, leaders like Jaeger can play with the numbers and determine the impact of different probabilities on expected value.
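Continuing the sketch above, with the same placeholder numbers, a minimal sensitivity sweep might look like this:

```python
# Sensitivity analysis on the sketch above: sweep the assumed storm
# probability and watch whether the best decision flips.
def wait_value(p_storm):
    storm_ev = 0.4 * 67.2 + 0.6 * 10.0      # mold vs. degraded crop, as above
    no_storm_ev = 0.5 * 42.0 + 0.5 * 30.0   # sugar-level outcomes, as above
    return p_storm * storm_ev + (1 - p_storm) * no_storm_ev

for p in (0.3, 0.5, 0.8):
    ev = wait_value(p)
    choice = "wait" if ev > 34.2 else "harvest now"
    print(f"P(storm)={p:.0%}: EV(wait)={ev:.2f} -> {choice}")

# With these placeholders, the best decision flips from "wait" to
# "harvest now" once the storm probability climbs past roughly 58%.
```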

But even when leaders utilize the best possible quantitative decision analysis, many options, notes Walton, are filtered out at each step of the decision-making framework by qualitative judgment. Additionally, if the data collected isn’t salient, decisions based on it won’t be helpful and could be harmful to an organization. “You’re better off having no data than bad data,” Walton explains.

In order to make sound decisions, leaders must make sure the decision is correctly framed, that the assumptions surrounding it are evaluated, that quantitative and qualitative analysis is used where necessary and appropriate, and that the data—and the analysis of it—are intelligently questioned before committing to a decision. “Even when you know about biases,” says Walton, “you have to act intentionally to overcome them.”

Sacks and Walton partner to teach in Goizueta’s globally ranked Emory Executive Education program. Critical Thinking: Making the Right Decisions Now is part of Emory Executive Education’s Leadership Suite, aimed at emerging leaders who need to build tangible skills quickly while minimizing time away from the job. These interactive and experiential programs feature a balance of behavioral learning and tactical frameworks to hone skills that will impact performance immediately.

Critical Thinking: Making the Right Decisions Now is a highly applied and experiential two-day program that challenges executives with a sequence of exercises requiring participants to engage in thoughtful, strategic decision-making practices. For more information, please contact Emory Executive Education at execed@bus.emory.edu or by calling 404-727-2200.

Photo: Professors Steve Walton, left, and Michael Sacks

