Wednesday, April 29, 2009

'Goals Gone Wild': How Goal Setting Can Lead to Disaster

Subject: Management, Organizational Behavior
Author: Maurice Schweitzer, Max H. Bazerman, Adam Galinsky, Lisa D. Ordóñez
Source: Knowledge@Wharton
Description: Despite evidence that ambitious goal setting can hurt productivity, damage a company's reputation and violate ethical standards, its use has become endemic in American business practice and scholarship, even spilling over to the debate on how to improve America's public schools. A new paper by Wharton operations and information management professor Maurice E. Schweitzer and three co-authors documents the hazards of corporate goal setting and concludes that it is overprescribed.

In the late 1960s, Ford executive Lee Iacocca set an ambitious, highly specific goal for the new subcompact Pinto: a car weighing under 2,000 pounds and costing under $2,000, in showrooms by 1970. The rush to roll out the Pinto had lethal consequences. Common-sense safety checks took a backseat to meeting Iacocca's deadline. In particular, engineers failed to examine the decision to place the Pinto's fuel tank only 10 inches behind the rear axle. When the Pinto was rear-ended, it often went up in flames. Fiery rear-end crashes caused 53 deaths, numerous injuries and a string of costly lawsuits.

It was a valuable lesson about the hazards of setting goals. In pursuit of such mandates, employees will ignore sound business practices, risk the company's reputation and violate ethical standards. This lesson, however, has not been absorbed by corporate America. To the contrary, ambitious goal setting has become endemic in American business practice and scholarship over the last half-century. Goals have pervaded industries as diverse as automotive repair, banking and information systems, even spilling over to the debate on how to improve America's public schools.

Yet new research by Wharton operations and information management professor Maurice Schweitzer and three colleagues documents how corporate goal setting can cause more harm than good. The paper, titled "Goals Gone Wild: The Systematic Side Effects of Over-Prescribing Goal Setting," was co-authored by Lisa D. Ordóñez of the Eller College of Management at the University of Arizona; Adam D. Galinsky of the Kellogg School of Management at Northwestern University; and Max H. Bazerman of Harvard Business School. Their work appears in the February issue of the Academy of Management Perspectives.

"We take a strong stand in this article, because we are pushing against the pervasive use of goal setting in practice and a very large body of literature that has endorsed goal setting. We argue that managers and scholars have grown complacent in their endorsement of goal setting ... often [neglecting] the harmful effects," Schweitzer says. "We argue that goal setting is wildly over-prescribed."

The paper is full of cases in which goal setting had negative and sometimes disastrous consequences for a company. Indeed, executives and business experts in those cases frequently failed to realize the prominent role that overly ambitious targets played in causing the eventual problem. One famous case that Schweitzer and his co-authors relate is the storied 2001 collapse of the energy-trading giant Enron. They cite literature noting that the once high-flying Houston-based firm set goals and incentives for its salespeople based solely on the volume of revenue they generated -- and not on whether the trades were sound or profitable -- a practice that became a key factor in Enron's implosion.

The authors found that goal setting has become practically institutionalized in American corporations, backed up by a persuasive body of literature over four decades arguing that employees perform better when challenged to meet specific targets as opposed to asking them to simply "do their best." The leaders of this movement are two renowned organizational psychology experts, Edwin Locke of the University of Maryland and Gary Latham of the University of Toronto, who wrote: "So long as a person is committed to the goal, has the requisite ability to attain it, and does not have conflicting goals, there is a positive, linear relationship between goal difficulty and task performance."

Schweitzer suggests that goal setting has become so ingrained that the practice is greatly overused. "We argue that there are some contexts where goal setting is appropriate, such as when tasks are routine, easy to monitor and very easy to measure. In practice, many domains are ill suited for goal setting."

'Mistakes Did Occur'

One well-known example took place at Sears, which in the early 1990s set a specific sales target for its auto repair staff of $147 per hour. In order to meet management's goal, however, mechanics began to perform unnecessary repairs or overcharge customers, which triggered a major customer-relations crisis for the giant retailer. Edward Brennan, chairman of Sears at the time, later admitted that the "goal setting process for service advisers created an environment where mistakes did occur."

Why does this happen? Schweitzer and his co-authors identify a series of problems that they say are linked to the overuse of goal setting, especially when the targets are either too specific or too challenging. For example:

* Goals that are too specific often lead employees to develop such a narrow focus that they fail to recognize obvious problems unrelated to the target. According to the authors, highly specific goals may cause workers to sacrifice safety for speed -- as in the case of the Ford Pinto -- or pursue misguided end results, as was the case at Enron. A typical problem is the sacrifice of quality in the interest of quantity, they note, citing the example of universities that require tenured professors to publish a certain number of research papers in particular journals, but without careful scrutiny of the quality of the work.

* Likewise, too many goals have what the authors consider an inappropriate time horizon. They refer to the well-known example of managers who are pressured to meet quarterly earnings goals, causing them to ignore long-term strategic problems. The reverse side of this practice is that employees also tend to ease up once a goal is met, even when working longer would pay off. The paper cites a 1997 study of New York City cabdrivers, which found that on rainy days -- when fares were plentiful -- taxis tended to disappear from the congested streets because drivers had met their daily fare target early and went home, rather than working longer hours to make additional income.

* Workers with highly specific and ambitious targets will engage in risky practices in order to meet them. The authors note the case of Continental Illinois, at the time one of the nation's largest banks, whose chairman in 1976 set the goal of dramatically expanding its loan portfolio to match those of rival banks. The bank aggressively pursued new loan customers and even bought packages of high-risk loans from smaller banks, a strategy that eventually caused Continental Illinois to fail.

* Unethical behavior is one of the more obvious pitfalls of overly ambitious goal setting, with potentially some of the most catastrophic consequences. This can happen in a number of ways -- such as the safety shortcuts at Ford or the bilking of auto-repair customers at Sears. The authors also note incidents where employees offered bogus results to claim that a target was reached, such as when employees falsified sales reports to meet their quota at the vision-products company Bausch & Lomb.

The irony, says Schweitzer, is that a lot of this specific goal setting is unnecessary. Research has shown that employees have a stronger intrinsic motivation to do a good job than their managers tend to give them credit for. He points to research by Stanford University organizational behavior expert Chip Heath, who "found that people tend to think that other people need extrinsic rewards more often than they really do.... To us, our work is interesting and meaningful, but we tend to think that other people come to work because of money."

Beware the 'Hedonic Treadmill'

In fact, the authors argue that this failure to recognize the value of simply doing a good job can cause managers to instead set goals and rewards that harm intrinsic motivation and place employees on a "hedonic treadmill." The notion of a hedonic treadmill, says Schweitzer, "is that people never 'get' to where they are going. For example, people constantly pursue happiness, but don't get there. They keep thinking that the next promotion, the new car, the salary raise, etc. will make them happy. They get the promotion, and that makes them happy for a time. Then they adapt and mistakenly think that it's the next promotion that will make them happy.

"People may be motivated by goals. But these goals can crowd out intrinsic motivation, so they will need more goals to motivate them in the future."

Schweitzer and his co-authors point to other negative consequences from overly specific numeric goals. For example, workers tend to lose their focus on learning new skills in favor of using tried-and-true methods to meet their quotas. In addition, companies that set targets for individual workers can create a culture of competition in which workers tend to shun teamwork in problem solving.

Despite all this, the use of goal setting has spread to areas outside the corporate world. Arguably the best-known example is the federal education program known as No Child Left Behind, signed into law in early 2002; it ties government aid to highly specific performance targets for students based on standardized test scores. Critics of No Child Left Behind say the program forces teachers to focus narrowly on what will be asked on those tests, ignoring other critical skills. There have also been several scandals involving falsified test scores and other forms of cheating. Indeed, the allegations in the classroom are strikingly similar to the problems that Schweitzer and his colleagues found in the business world.

"The 'No Child Left Behind' idea is compelling -- after all, who wants to leave a child behind?" Schweitzer says. "But the reality of this program is that it is fundamentally flawed. It is very difficult to monitor education, and this program narrows the focus of teachers in a domain that requires cooperation, innovation, broad thinking, high ethical standards and, we would hope, intrinsic motivation."

Schweitzer believes one reason goals are overused is that we focus too much attention on the individual. When things go wrong -- following a collapse like Enron's, for example -- we tend to blame specific individuals rather than examine the broader culture established by top managers. The best-known example of this problem comes from the U.S. military and the well-documented detainee abuse at the Abu Ghraib prison in Iraq, he says. These cases of abuse were blamed on low-ranking soldiers -- "a few bad apples" -- and not on the broader directives from the Pentagon that created the climate of corruption. "What happens is that people neglect to appreciate the importance of the environment."

The authors suggest that goal setting should be undertaken modestly and carefully, with a focus more on personal rather than financial gain. They also make the case that much more research -- and more skepticism -- is needed about the practice of goal setting. "Rather than dispensing goal setting as a benign, over-the-counter treatment for students of management, experts need to conceptualize goal setting as a prescription-strength medication that requires careful dosing, consideration of harmful side effects, and close supervision," the authors write. "Given the sway of goal setting on intellectual pursuits in management, we call for a more self-critical and less self-congratulatory approach to the study of goal setting."



Thursday, March 5, 2009

The Art of Making Quality Decisions

Description: Making quality decisions is an intricate tapestry of experience, inquiry, and judgment that converge to form solutions. According to Michael Sacks and Steve Walton, professors at Emory University's Goizueta Business School, there are strategies that can be adopted to make the process more effective.


It’s challenge enough for individuals to make decisions about things they know a lot about. It’s even tougher for them to make decisions in the face of uncertainty or when the information available is being poked, prodded and packaged to influence their decisions.

Individuals also have biases in terms of the information they consider important: whether it tells them what they want to hear, confirms what they already know, or comes from a source they agree with (e.g., for political commentary, Republicans are more likely to turn to conservative radio talk show host Rush Limbaugh, whereas Democrats might seek out the opinion of MSNBC news anchor and political commentator Keith Olbermann).

“People often have pre-existing opinions and positions and they seek out sources likely to confirm those opinions,” says Michael Sacks, associate professor in the practice of organization & management at Emory University's Goizueta Business School. Sacks worked for the U.S. government prior to the 1992 presidential election, where he watched two candidates use the exact same unemployment data in ways that validated their very different positions. “It’s neither incorrect nor inappropriate,” Sacks explains during a recent Emory Executive Education seminar on Leadership and Decision Making that he co-led with Steve Walton, associate professor in the practice of information systems & operations management at Goizueta. While numbers are objective, the way they’re used is completely subjective. “We have to be smart and well trained as decision makers to pick that apart,” Sacks adds.

For instance, this past September, after a string of financial institution failures, a group of U.S. legislators penned a $700 billion “Bail Out Bill” in an effort to stabilize U.S. financial markets. But enough members of the U.S. House of Representatives were wary of the bill and it failed to pass the House. The bill’s authors reworked the bill and renamed it. On October 1st, the Senate passed the $700 billion “Economic Rescue Package.” A day later, the U.S. House of Representatives also passed the bill.

According to Sacks, the framing of information is just as important as the information itself, and people respond differently based on how it is framed. Republican Ileana Ros-Lehtinen of Florida voted “No” on the original version of the bill but was swayed by the newer model. The Associated Press reported that Ros-Lehtinen described the original version as “a bailout bill for Wall Street firms,” but the later bill, armed with “relief for taxpayers and hard hit families,” was, she said, “an economic rescue package.” No doubt the framing of the second bill as a “rescue package” had some influence on the 58 members of the U.S. House of Representatives who switched their votes.

“How you frame [data] can make it less bad,” says Sacks. “You pick the information you want to emphasize.” Politics is one area where the power of framing runs amok: “late term” versus “partial birth” abortion, a “surge” strategy versus an “escalation” strategy in the war in Iraq, a tax “cut” versus tax “relief.”

The power of incentive also influences decisions and is a type of framing. “People perceive risk differently if it’s a gain or a loss,” says Sacks, adding that it’s more painful to lose a dollar than it is pleasurable to gain one. A manager who wants employees to embrace a change will therefore get more out of them by framing inaction as a crippling loss than by presenting the initiative as a path to a potential windfall.
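As a rough illustration of that asymmetry, here is a minimal sketch of the value function from Tversky and Kahneman's prospect theory, one standard way to formalize the loss-aversion point Sacks is making. The seminar itself does not prescribe this model; the exponents and the loss-aversion coefficient are Tversky and Kahneman's published 1992 median estimates, used here purely for illustration.

```python
# Sketch of the prospect-theory value function (Tversky & Kahneman, 1992).
# Parameter values are their published median estimates, not figures
# from Sacks or Walton.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a monetary gain or loss x."""
    if x >= 0:
        return x ** alpha            # gains: diminishing sensitivity
    return -lam * (-x) ** beta       # losses: weighted ~2.25x more heavily

print(value(1.0))    # pleasure of gaining $1  -> +1.00
print(value(-1.0))   # pain of losing $1       -> -2.25
```

Under these estimates, losing a dollar hurts more than twice as much as gaining one pleases, which is why loss-framed appeals tend to motivate more strongly.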

Individuals also have a tendency to base their judgments on information that is readily available to them, vivid, and easily imagined. Better known in decision-making lingo as the “availability heuristic,” this tendency influences decisions all the time.

If, over the course of a football game, one team’s kicker makes three field goals but misses a potential game-winning field goal, fans will remember the one he missed, not the three that got the team to that point. Because the miss is more vivid and is the one fans talk about most, people asked to decide whether the kicker is any good may conclude “no” based on that single vividly recalled miss.

When the D.C., or “Beltway,” sniper was on the rampage in 2002, some people drove to Maryland to get out of harm’s way. In fact, there was a far greater chance that they’d die in a car accident on their way out of D.C. than that they’d be killed by the sniper. “Widely publicized events get disproportionate weight. It’s the same in the workplace,” notes Sacks. “Ever had to go to a series of meetings over one strange fluky thing?”

Associating a decision with a preexisting category, better known as the “representativeness heuristic” or stereotyping, is also common in the workplace, and it can result in discrimination. If a manager notices that three employees who graduated from the same school are considered poor workers, he may predict that a current job applicant with a degree from the same school will not be a good employee either. “It’s about coming to a conclusion about people based on categories they may not fit,” Sacks says. “People aren’t aware they do this; that’s why it’s so pernicious. It’s really powerful stuff.”

According to Walton, people’s judgment is influenced not only by data and research, but by their opinions, history and experience—all of which are subject to bias. For instance, decision-makers often go with the first reasonable option available although it may not be the best, a common blunder political scientist Herbert Simon dubbed “satisficing.” “Sense-making,” satisficing’s decision-making cousin, is what happens when people make a quick decision and then, after the fact, rationalize why they made the decision.

Anchoring can also cause a decision-making blunder; it occurs when individuals rely too heavily on a single piece of information, or “anchor,” when faced with a decision. Sacks pointed to a 1974 study by Amos Tversky and Daniel Kahneman in which people were asked to guess the percentage of African nations that are members of the United Nations. Those who were asked, “Is it more or less than 45 percent?” answered differently than those asked whether it was more or less than 65 percent. “People are very strongly affected by an anchor set up,” notes Sacks.

Why do more people go for a “Buy One, Get One 50% Off” deal than a 25% off sale, even though the amount saved is exactly the same? Why is gas priced at $1.99 a gallon and not $2.00? “Even if the anchor makes no sense, people are likely to fall for it,” says Sacks. “Numbers are powerful, but that power can be dangerous.” Benchmarking is a form of anchoring. Is someone who got a GMAT score of 660 really any less smart than someone who got a 680?
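To see why the savings are identical, here is a quick arithmetic check. It assumes two identically priced items (the $50 price is a made-up example; the equivalence holds only when exactly two same-priced items are bought):

```python
# Two identically priced items (a hypothetical $50 price for illustration).
price = 50.0

bogo = price + 0.5 * price          # "buy one, get one 50% off" -> $75.00
flat = 2 * price * (1 - 0.25)       # 25% off both items         -> $75.00

print(bogo, flat)                   # 75.0 75.0 -- the savings are identical
```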

Psychological accounting is another trick the brain plays on decision-makers. A couple that goes to an art festival, likes a piece of artwork but resists buying it because it costs $100, and then spends $110 on dinner has been tripped up by psychological accounting. “We create these artificial categories and we perceive them differently. We respond to the category, not the expenditure,” notes Walton. “It’s how it’s perceived versus how it really is.”

Psychological accounting can prove hazardous in the workplace. If a business unit is rewarded based on its own performance rather than on that of the business as a whole, the result can be a misaligned company: one where business units work to better their own numbers even if doing so hurts another business unit and the company overall.

Escalation of commitment is another common decision-making mistake. For example, a group of friends arrives at a favorite restaurant and is told the wait is 45 minutes. After an hour passes, they approach the hostess and are told it will be a little while longer. In the meantime, a member of the group discovers that a new, well-reviewed restaurant just down the street has no wait. Does the group go to the new restaurant? A study that chronicles a similar situation showed that 80% of people would continue waiting. Why? Because if they left, they’d perceive the time they had invested at the first restaurant as lost. By continuing to wait, notes Sacks, they’re able to believe there’s been value in the time they spent waiting.

“Organizations do it all the time,” says Sacks. “They have a strategy or a rule and they keep following it even if another course of action is preferable.” Years ago, evidence pointed to the superiority (and cost effectiveness) of using polyester to make tire cords, but E. I. du Pont de Nemours and Company (DuPont) decided to continue using nylon in its tires rather than reconfigure its factories. DuPont lost millions and eventually abandoned the tire business entirely.

“Organizations often run on inertia. Old policies take on a life of their own beyond their usefulness,” says Sacks.

More often than not, these biases appear en masse. “All of these things play together,” explains Walton. “And these things play badly with one another.” A leader can make an emotional commitment, then build an entire story via sense-making that rationalizes his or her commitment to a policy, a product, or a strategy, even if there’s evidence that things aren’t going well. And don’t always look to the data for help, says Sacks. Even quantitative data can be perceived differently depending on how it’s framed, collected and used in analysis.

That said, there are qualitative tools that can correct bias, and many successful companies and leaders use them. Firms can use well-designed qualitative interviews and surveys to discover key factors about organizational performance. Focus groups can help managers analyze trends across groups, and qualitative data can be partially quantified using scoring models, as sketched below.
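As a minimal sketch of what such a scoring model might look like, qualitative interview answers can be coded on a numeric scale and combined with analyst-chosen weights. The criteria, weights, and ratings below are invented for illustration and are not from Sacks or Walton:

```python
# Hypothetical scoring model: interview answers coded on a 1-5 scale and
# combined with analyst-chosen weights. All criteria, weights, and ratings
# are invented for illustration.

weights = {"morale": 0.5, "communication": 0.3, "tooling": 0.2}

responses = [
    {"morale": 4, "communication": 2, "tooling": 5},
    {"morale": 3, "communication": 3, "tooling": 4},
]

for i, answers in enumerate(responses, start=1):
    score = sum(weights[k] * answers[k] for k in weights)
    print(f"respondent {i}: weighted score {score:.2f}")
# respondent 1: weighted score 3.60
# respondent 2: weighted score 3.20
```

The weights make the analyst's priorities explicit, which is part of the debiasing value: they can be challenged and revised, unlike an unstated gut feel.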

There are other fairly straightforward ways to reduce bias. To combat sense-making, leaders should seek disconfirming evidence. Assign a “devil’s advocate” role on a team and seek diversity of thought when it comes to team members. “It’s great to be a collaborative team member, but sometimes you have to push each other,” notes Sacks. “With a more diverse team, you’re more likely to have someone on the team who doesn’t buy in.”

To fight off “satisficing,” leaders need to push themselves and their peers past the first acceptable option to find a better one. Assessing policies to determine if changing is preferable to the status quo will help prevent the escalation of commitment, and challenging the accuracy of isolated examples will help stave off availability and representative heuristics.

There are, however, times when a leader or manager is asked to make a decision and certain circumstances are out of his or her control. In these cases, quantitative problem solving is an effective way to make a focused decision, and building a decision tree is often a productive way of laying out the outcomes of decisions made under uncertainty.

Weather affects decision-making in some industries but isn’t something the decision-maker can control. In September 1976, a storm approached California’s Napa Valley, home of the Freemark Abbey Winery, where William Jaeger, one of the winery’s partners, faced a decision: in light of the coming storm, should the winery harvest its Riesling grapes immediately or leave them on the vines? Jaeger knew that a storm shortly before the harvest could be detrimental, often ruining the crop. But a warm, light rain prior to harvest can cause growth of a beneficial mold, Botrytis cinerea, which results in a complex wine that is highly valued by connoisseurs and sells for more than double the normal price.

Walton uses this winery case study in class to teach executive students the process of building a decision tree. To be effective, explains Walton, leaders must determine the factors that influence the payoffs associated with a decision, assign a probability to each chance event, and estimate the payoff associated with each outcome.

In order to make his decision, Jaeger laid out the various costs and payoffs associated with each choice. Each branch of the decision tree was dictated by “states of nature,” or factors beyond the control of the winery. At the terminal end of each branch is an outcome and the probability that it will occur. If Jaeger decides not to harvest the grapes, his payoff depends on whether or not the storm hits. If the storm does come, revenues depend on whether or not the mold forms. If there is no storm, the selling price of the wine will depend on sugar concentration levels. Once the tree was laid out, Jaeger used backward induction (computing the expected value at each chance node and picking the best option at each decision node) until he arrived at the primary decision: not to harvest immediately.
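For readers who want to see the mechanics, here is a minimal sketch of such a tree solved by backward induction. The branch structure mirrors the case as described above, but every probability and dollar payoff is a hypothetical placeholder, not a figure from the actual Freemark Abbey case:

```python
# Hypothetical winery decision tree solved by backward induction.
# A node is either ("decision", {option: node}), ("chance", [(prob, node)]),
# or a terminal payoff (a plain number). All numbers are placeholders.

tree = ("decision", {
    "harvest now": 34_000,                       # sell immediately
    "wait": ("chance", [
        (0.5, ("chance", [                       # storm hits
            (0.4, 68_000),                       # botrytis mold forms
            (0.6, 12_000),                       # crop ruined by rain
        ])),
        (0.5, ("chance", [                       # no storm: sugar levels
            (0.6, 42_000),                       # high sugar, premium price
            (0.4, 30_000),                       # low sugar, ordinary price
        ])),
    ]),
})

def solve(node):
    """Return (expected value, best option at this node if it is a decision)."""
    if isinstance(node, (int, float)):           # terminal payoff
        return node, None
    kind, body = node
    if kind == "chance":                         # expectation over outcomes
        return sum(p * solve(child)[0] for p, child in body), None
    values = {opt: solve(child)[0] for opt, child in body.items()}
    best = max(values, key=values.get)           # best option at decision node
    return values[best], best

ev, choice = solve(tree)
print(f"best decision: {choice}, expected value: ${ev:,.0f}")
# best decision: wait, expected value: $35,800
```

With these placeholder numbers, the tree happens to agree with Jaeger's conclusion: waiting carries the higher expected value.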

It’s possible that a manager’s decision may be swayed by incorrectly assessed probabilities. (What if the chance of rain for the Napa Valley had been 80%, not the 50% reported?) But with sensitivity analysis, leaders like Jaeger can vary the numbers and determine the impact of different probabilities on expected value.
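Continuing the hypothetical sketch above, a sensitivity sweep is just the same expected-value calculation repeated across a range of assumed storm probabilities (again, all figures are placeholders, not the case's actual numbers):

```python
# Sensitivity sweep over the storm probability, reusing the hypothetical
# payoffs from the sketch above.

HARVEST_NOW = 34_000
EV_STORM    = 0.4 * 68_000 + 0.6 * 12_000    # mold vs. ruined crop = 34,400
EV_NO_STORM = 0.6 * 42_000 + 0.4 * 30_000    # high vs. low sugar   = 37,200

for p_storm in [0.2, 0.5, 0.8, 0.95]:
    ev_wait = p_storm * EV_STORM + (1 - p_storm) * EV_NO_STORM
    decision = "wait" if ev_wait > HARVEST_NOW else "harvest now"
    print(f"P(storm)={p_storm:.2f}: EV(wait)=${ev_wait:,.0f} -> {decision}")

# With these payoffs, waiting beats harvesting at every storm probability,
# since even the storm branch averages above $34,000. That itself is a
# useful finding: under these assumptions, the decision is insensitive
# to the forecast.
```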

But even when leaders utilize the best possible quantitative decision analysis, many decisions, notes Walton, are still filtered at each step of the decision-making framework by qualitative judgment. Additionally, if the data collected isn’t salient, the decisions based on it won’t be helpful and could be harmful to an organization. “You’re better off having no data than bad data,” Walton explains.

In order to make sound decisions, it’s important that leaders make sure the decision is correctly framed, that the assumptions surrounding it are evaluated, that quantitative and qualitative analysis is used when necessary and appropriate, and that the data, and the analysis of it, is intelligently questioned before committing to a decision. “Even when you know about biases,” says Walton, “you have to act intentionally to overcome them.”

Sacks and Walton partner to teach in Goizueta’s globally ranked Emory Executive Education program. Critical Thinking: Making the Right Decisions Now is part of Emory Executive Education’s Leadership Suite, aimed at emerging leaders who need to build tangible skills quickly while minimizing time away from the job. These interactive and experiential programs feature a balance of behavioral learning and tactical frameworks to hone skills that will impact performance immediately.

Critical Thinking: Making the Right Decisions Now is a highly applied and experiential two-day program that challenges executives with a sequence of exercises requiring participants to engage in thoughtful and strategic decision-making practices. For more information, please contact Emory Executive Education at execed@bus.emory.edu or by calling 404-727-2200.

Photo: Professors Steve Walton, left, and Michael Sacks

