Read the article.
Summarize the article (1 paragraph).
Answer: how does this article relate to the class material? (1-2 paragraphs)
Critique the article: what is lacking, what could the article do better, or what important part of this topic did it skip over or miss? (1 paragraph)
Nine Habits That Lead
to Terrible Decisions
by Jack Zenger and Joseph Folkman
SEVERAL YEARS ago we came up with a great idea for a new leadership-development offering. We had research showing that when people embarked on a self-development program, their success increased dramatically when they received follow-up encouragement. We developed a software application to offer that sort of encouragement. People could enter their development goals, and to motivate them to keep going, the software would send them reminders every week or month asking how they were doing. We invested a lot of time and money in this product.
But it turned out that people didn’t like receiving the e-mails and found them more annoying than motivating. Some of our users came up with a name for this type of software: “nagware.” Needless to say, this product never reached the potential we had envisioned. Thinking about our decisions to create this ultimately disappointing software application, we asked ourselves, What causes well-meaning people (like us) to make poor decisions?
Some possibilities immediately came to mind: People make poor decisions when they are under severe time pressure or don’t have access to all of the important information (unless they are explaining the decision to their boss, and then it is often someone else’s fault). But we wanted a more objective answer. To understand the root causes of poor decision making, we looked at 360-degree feedback data from more than 50,000 leaders and compared the behavior of those who were perceived to be making poor decisions with that of the people who were perceived to be making good decisions. We did a factor analysis of the behaviors that showed the greatest statistical difference between the best and worst decision makers. Nine factors emerged as the most common paths to poor decision making. Here they are in order from most to least significant.
1. Being lazy. When people failed to check facts, take the initiative, confirm assumptions, or gather additional input, they were perceived to be sloppy in their work and unwilling to put themselves out. They relied on past experience and expected results simply to be an extrapolation of the past.
2. Not anticipating unexpected
events. It is discouraging to always consider
the possibility of negative events
in our lives, so most people assume the
worst will not happen. Unfortunately,
bad things happen fairly often. People
die, get divorced, and have accidents.
Markets crash, home prices go down,
and friends are unreliable. There is excellent
research demonstrating that when
people take the time to consider what
might go wrong, they are actually good at
anticipating problems. But many people
get so excited about their decisions that
they never allow time for that simple
due diligence.
3. Not making a decision. At the
other end of the scale, when people
are faced with a complex decision that
will be based on constantly changing
data, it’s easy to continue to study the
data, ask for one more report, or perform
yet another analysis before choosing a
course of action. When the reports and
the analysis take much longer than expected,
poor decision makers delay, and
the opportunity is missed. It takes courage
to look at the data, consider the consequences
responsibly, and then move
forward. Indecision is often worse than
making the wrong decision. The people
most paralyzed by fear are those who
believe that one mistake will ruin their
careers, so they avoid any risk at all.
4. Remaining locked in the past.
Some people make poor decisions because
they’re using the same old data
or processes they always have. They get
used to approaches that worked in the
past and tend not to look for better ones
(the devil they know rather than the one
they don’t). But, too often, when a decision
is destined to go wrong, it’s because
the old process is based on assumptions
that are no longer true. Poor decision
makers fail to reflect on those assumptions
when applying the tried and true.
5. Having no strategic alignment.
Bad decisions sometimes stem from
a failure to connect the problem to an
overall strategy. Without a clear strategy
that provides context, many solutions
appear to make sense. When tightly
linked to a clear strategy, the better solutions
quickly begin to rise to the top.
6. Depending too much on others.
Some decisions are never made, because
one person is waiting for another, who in
turn is waiting for someone else’s decision
or input. Effective decision makers
find a way to act independently when
necessary.
7. Remaining isolated. Some leaders
are waiting for input because they
haven’t taken steps to get it in a timely
manner or haven’t established the relationships
that would enable them to
draw on other people’s expertise when
they need it. All of our (and many others’)
research on effective decision making
recognizes that involving others who
have the relevant knowledge, experience,
and expertise improves the quality
of the decision. That truth is not news,
but why is it true? Sometimes people
lack the necessary networking skills to
access the right information. Other times,
we’ve found, people do not involve others
because they want the credit for a
decision. Unfortunately, they also get to
take the blame for the bad decisions.
8. Lacking technical depth. Organizations
today are complex, and even the
best leaders don’t have enough technical
depth to fully understand multifaceted
issues. But when decision makers rely on
others’ knowledge and expertise without
any perspective of their own, they have
a difficult time integrating that information to make effective decisions. And
when they lack even basic knowledge
and expertise, they have no way to tell
whether a decision is brilliant or terrible.
We continue to find that the best executives
have deep expertise. And if they
don’t have the technical depth to understand
the implications of the decisions
they face, they make it their business to
find the talent they need to help them.
9. Failing to communicate the what,
where, when, and how associated
with their decisions. Some good decisions
become bad decisions because
people don’t understand—or even know
about—them. Communicating a decision,
and its rationale and implications,
is critical to implementing it successfully.
It’s no wonder good people make bad
decisions if they fail to get others’ input
in time, to get the right input at the right
time, to understand that input (because
of insufficient skills), to understand
when something that worked in the
past will not work now, to know when
to make a decision without all the right
information and when to wait for more
advice. The path to good decision making
is narrow, and it’s far from straight.
But keeping this list of pitfalls in mind
can make any leader a more effective decision
maker.
Originally published on HBR.org
September 1, 2014
Jack Zenger is the CEO and
Joseph Folkman is the president of
Zenger Folkman, a leadership development
consultancy. They are coauthors of the
article “Making Yourself Indispensable”
(HBR, October 2011) and the book How
to Be Exceptional: Drive Leadership
Success by Magnifying Your Strengths
(McGraw-Hill, 2012).
READER COMMENTS
The biggest—and most common—bad
habit is missing from this list: not asking
others before a decision is made. If
you see a problem from several people’s
perspectives, it is like seeing a 3-D image
instead of a flat one. And if work must be
done to realize the vision, sharing that
workload and spreading the vision of what
success looks like makes the difference
between success and failure.
—Peter Johnston,
commercial director, Datatest
Unless sufficient time is allowed for the
proper decision-making process, failure
will prevail. Too many business decisions
are performed in the pressure cooker of
office dynamics.
—Fiona MacCarthy, director,
Mastercraft Enterprises
These habits have been allowed to develop
because the process of decision making
has not been systematized and the
decision makers have never been trained
in how to make them methodically. How
many organizations have decision-making
methodologies and frameworks that they
regularly use? Most probably just assume
that their decision makers know how to
make decisions.
—Richard Davis, senior vice president
and COO, Katalyst Data Management
Relearning the
Art of Asking
Questions
by Tom Pohlmann and
Neethi Mary Thomas
PROPER QUESTIONING has become a
lost art. The curious four-year-old asks
a lot of questions—incessant streams of
“Why?” and “Why not?” might sound familiar—but
as we grow older, our questioning
decreases. In a recent poll of
more than 200 of our clients, we found
that those with children estimated that
70% to 80% of their kids’ dialogues with
others included questions. But those
same clients said that only 15% to 25% of
their own interactions consisted of questions.
Why the drop-off?
Think back to your childhood.
Chances are, you received the most recognition
or rewards when you got the
correct answers. Later in life, that incentive
continues. At work, we often reward
people who answer questions, not those
who ask them. Questioning conventional
wisdom can even lead to being sidelined,
isolated, or considered a threat.
Because expectations for decision
making have gone from “get it done soon”
to “get it done now” to “it should have
been done yesterday,” we tend to jump
to conclusions instead of asking more
questions. The unfortunate side effect
of not asking enough questions is poor
decision making. That’s why it’s imperative
that we slow down and take the time
to ask more—and better—questions. At
best, we’ll arrive at better conclusions; at
worst, we’ll avoid a lot of rework later on.
Aside from not speaking up enough,
many professionals don’t think about
how different types of questions can lead to different outcomes. You should steer
a conversation by asking the right kinds
of questions for the problem you’re trying
to solve. In some cases, you’ll want to
expand your view of the problem, rather
than keeping it narrowly focused. In
others, you’ll want to challenge basic assumptions
or affirm your understanding in order to feel more confident in your
conclusions.
Consider four types of questions—
clarifying, adjoining, funneling, and elevating—each
aimed at achieving a different
goal.
Clarifying questions help us better
understand what has been said. In
many conversations, people speak past
one another. Asking clarifying questions
can help uncover the real intent. This
helps us understand one another better
and leads us to ask relevant follow-up
questions. Both “Can you tell me more?”
and “Why do you say so?” fall into this
category. People often don’t ask these
questions; they typically make assumptions
and complete any missing parts
themselves.
Adjoining questions are used to explore
related aspects of the problem that
are ignored in the conversation. Questions
such as, “How would this concept
apply in a different context?” or “What
are the related uses of this technology?”
fall into this category. For example, asking
“How would these insights apply in
Canada?” during a discussion on customer
lifetime value in the U.S. can open
a useful discussion on behavioral differences between customers in the U.S. and
Canada. Our laserlike focus on immediate
tasks often inhibits our asking more
of these exploratory questions, even
though they could help us broaden our
understanding of an issue.
Funneling questions are used to dive
deeper. We ask these to understand how
an answer was derived, to challenge assumptions,
and to understand the root
causes of problems. Examples include
“How did you do the analysis?” and
“Why didn’t you include this step?” Funneling
can naturally follow the design of
an organization and its offerings, such
as, “Can we take this analysis of outdoor
products and apply it to a certain
brand of lawn furniture?” Most analytical
teams—especially those embedded
in business operations—do an excellent
job of using these questions.
Elevating questions raise broader
issues and highlight the bigger picture.
They help you zoom out. Being too immersed
in an immediate problem makes
it harder to see the overall context behind
it. So you can ask, “Taking a step
back, what are the larger issues?” or “Are
we even addressing the right question?”
For example, a discussion on issues such
as margin decline and decreasing customer
satisfaction could turn into a more
in-depth discussion of corporate strategy
with an elevating question: “Instead
of talking about these issues separately,
what are the larger trends we should be
concerned about? How do they all tie
together?” These questions take us to a
higher playing field, where we can better
see connections between individual
problems.
In today’s always-on world, there’s a
rush to answer. Ubiquitous access to data
and volatile business demands accelerate
this sense of urgency. But we must
slow down and understand one another
better to avoid poor decisions and succeed
in this environment. Because asking
questions requires a certain amount
of vulnerability, corporate cultures must
shift to promote this behavior. Leaders
should encourage people to ask more
questions, relevant to the desired goals,
instead of rushing them to deliver answers.
To make the right decisions, start
asking the questions that really matter.
Originally published on HBR.org
March 27, 2015
Tom Pohlmann is the head of values and
strategy at Mu Sigma. He was formerly
the chief marketing and strategy officer
for Forrester Research and previously led
the company’s largest business unit and
all of its technology research. Neethi
Mary Thomas is the senior engagement
manager at Mu Sigma, where she leads
global engagements for Fortune 500 and
hypergrowth clients on the West Coast.
READER COMMENTS
Leaders often ask questions that start with
“Do you know what I...” (Do you know what
I mean? Do you know what I want? Do you
know what I said?) These questions presume
the listener knows, but is that reasonable?
No, because yes is almost always the
answer. No one wants to tell the boss they
don’t know what he just said. The problem
and the fix reside with the boss. Too many
leaders and managers speak too quickly.
People hear what they hear and remember
what they remember, often inaccurately.
—Robert F. Gately
strategic business partner,
Profiles International | Gately Consulting
Asking “stupid” questions is often a matter
of addressing what others want—but
don’t dare—to ask. If you ask a stupid
question and provide feedback—in words,
sometimes followed by relevant action—
you demonstrate understanding and
empower others to do the same. You will
look smart, except to people who aren’t
thinking clearly at the moment. What’s
more, you might avoid making a mistake,
or better yet, do something great. Smart
people would rather risk looking stupid
than be stupid.
—Jose Harris, principal systems engineer,
LinQuest Corporation
Preparing for
Decision-Making
Meetings
by Stever Robbins
TIM’S E-MAIL seemed like an innocent-enough
request. “Our graphic designer
missed this week’s deadline. Gather in
the conference room at 10 to decide what
to do.” Because he never actually said
“meeting,” Tim’s message caught me off
guard. “Gather” sounded like a family
picnic, with golden retrievers and frolicking.
Nothing could have been further
from the truth.
In the conference room, comments
started flying. “Our proofs are always
late!” “Maybe we should switch designers.”
“Have we thought to ask for regular
status reports?” “Our current designer is
too expensive.” “Are we trying to fix the
designer, or figure out what to do to get
back on schedule?” We were having a
ball venting, but after an hour, we hadn’t
made much progress. We merely wasted
everyone’s time.
Every decision-making conversation
has three hidden conversations lurking
just out of sight. One is about what we’re
trying to accomplish by even bothering
to make a decision; after all, we could
just let things fall where they may. The
second is about the criteria we’ll use to
make the decision. The third is about
finding and choosing among different options.
It’s easy to let those conversations become
intertwined. In the meeting we’ve
just described, all three conversations
got jumbled. Decision criteria such as
on-time delivery and cost got mixed up
with possible options, such as switching
designers and asking for status reports.
No one even followed up on the question
of why we were there: to evaluate the designer
or just recover from a schedule slip.
My first boss once said, “Never call
a meeting to make a decision. Work
with people one-on-one, and then call
the meeting to let the group share and
own the decision that’s been made.” It
was great advice. Even if you can’t make
the decision airtight before the meeting,
you’ll save time in the long run by having
short individual conversations with
team members to frame the discussion.
Chat with each team member before
the meeting. Use those talks to reach
alignment—or identify areas of contention—on
the three conversations.
Ask what each person thinks the decision
should accomplish, what criteria
are most important, and what options
should be considered. Here is how Tim’s
conversation with me could have gone.
Tim: We’re meeting to discuss what
to do about the designer’s schedule slip.
What do you think the purpose of the
meeting should be?
Stever: Let’s figure out what to do to
get back on schedule. Before the next
project, we can decide if we want to stay
with the same designer.
Tim: What criteria should we be using
to evaluate options for getting back on
schedule?
Stever: We should get back on schedule
in a way that is least disruptive to
people’s personal lives. We should also
keep costs reasonable, but that’s a second
priority.
Tim: What other options are you thinking of?
Stever: We could use the design from
our last product. We could use whatever
the designer has done so far and hope
that it’s good enough. We could hire an
intern to work on the instruction book
while the designer works on the product.
Each person Tim talks to will either
reinforce the purpose, criteria, and options,
or add new ones. Tim can work
on reaching alignment during the small
talks, in which case the meeting itself is
just a celebration of the decision that has
already been made.
If we disagree in places, Tim’s ready.
He can first highlight where we all agree.
Then the team can work toward alignment
on a common purpose for the
meeting. Once we’re aligned around a
purpose, we can choose the criteria we’ll
use to make the decision. Then we can
gather the options from the small meetings
or brainstorm more, if needed. Last,
we can evaluate and choose an option
using the criteria.
By laying the right predecision
groundwork one-on-one, you can greatly
speed up your decision-making meetings
by arriving with clarity at the three
conversations underlying the decision.
You get the additional bonus of collecting
all the criteria and options the group
might suggest person by person, without
the interpersonal tensions that can prevent
people from speaking up in a group.
Originally published on HBR.org
April 28, 2010
Stever Robbins is the CEO of Ideas
Unleashed, a company that helps thought
leaders turn their audience into income.
He is a serial entrepreneur, a top 10 iTunes
business podcaster (“The Get-It-Done Guy”),
and an executive coach. He has taught at
Babson College on building social capital
and co-led Harvard Business School’s
Foundation design team during the Leadership
and Learning curriculum redesign
initiative.
How to Clone
Your Best
Decision Makers
by Michael C. Mankins
and Lori Sherer
ANY COMPANY’S decisions lie on a spectrum.
On one end are the small, everyday
decisions that add up to a lot of value
over time. Amazon, Capital One, and
others have already Ogured out how to
automate many of these, like whether
to recommend product B to a customer
who buys product A or what spending
limit is appropriate for customers with
certain characteristics.
On the other end of the spectrum
are big, infrequent strategic decisions,
such as where to locate the next $20 billion
manufacturing facility. Companies
assemble all the data and technology
they can find to help with such decisions,
including analytics tools such as
Monte Carlo simulations (computational
algorithms that use random sampling to
obtain numerical results). But the choice
ultimately depends on senior executives’
judgment.
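
The parenthetical definition of Monte Carlo simulation above is brief, so here is a minimal sketch of how it might apply to a large capital decision. Every figure in it (the cost range, the cash-flow distribution, the discount rate) is a hypothetical illustration, not data from the article.

```python
# A minimal sketch of a Monte Carlo simulation for a capital decision.
# All figures (cost, cash-flow ranges, discount rate) are hypothetical.
import random

def simulate_npv(trials=100_000, discount=0.08, years=10):
    """Estimate the NPV distribution of a hypothetical plant investment
    by randomly sampling uncertain inputs on each trial."""
    results = []
    for _ in range(trials):
        capex = random.uniform(18e9, 22e9)          # uncertain build cost
        annual_margin = random.gauss(3.0e9, 0.8e9)  # uncertain yearly cash flow
        npv = -capex + sum(annual_margin / (1 + discount) ** t
                           for t in range(1, years + 1))
        results.append(npv)
    results.sort()
    return {
        "mean": sum(results) / trials,
        "p05": results[int(0.05 * trials)],   # downside scenario
        "p95": results[int(0.95 * trials)],   # upside scenario
    }

print(simulate_npv())
```

The shape of the technique is the point: sample the uncertain inputs many times and read the decision off the resulting distribution rather than off a single best-guess estimate.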
In the middle of the spectrum, however,
lies a vast and largely unexplored
territory. These decisions—both relatively
frequent and individually important,
requiring the exercise of judgment
and the application of experience—represent
a potential gold mine for the companies
that get there first with advanced
analytics.
Imagine, for example, a property
and casualty company that specializes
in insuring multinational corporations.
For every customer, the company
might have to make risk-assessment
decisions about hundreds of facilities
around the world. Armies of underwriters
make these types of decisions,
with each underwriter more or less experienced
and each one weighing and
sequencing the dozens of variables differently.
Now imagine that you employ advanced
analytics to codify the approach
of the best, most experienced underwriters.
You build an analytics model that
captures their decision logic. The armies
of underwriters then use that model
in making their decisions. This is not
so much crunching data as simulating
a human process.
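
As a rough sketch of what codifying the best underwriters' approach could look like, the snippet below fits a simple model to hypothetical records of senior underwriters' past accept/decline calls. The feature names, the toy data, and the choice of logistic regression are all assumptions for illustration; the article does not specify a modeling technique.

```python
# A minimal sketch of codifying expert decision logic, assuming historical
# records of senior underwriters' accept/decline calls exist. The features
# and model choice are illustrative assumptions, not from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per past facility decision.
# Columns: [flood_risk_score, building_age, claims_last_5y, value_usd_m]
X = np.array([
    [0.2, 12, 0, 40.0],
    [0.9, 55, 3, 12.5],
    [0.4, 30, 1, 75.0],
    [0.8, 48, 2, 20.0],
])
y = np.array([1, 0, 1, 0])  # 1 = senior underwriter accepted the risk

model = LogisticRegression().fit(X, y)

# Less experienced underwriters consult the model's recommendation,
# then apply their own judgment, as the article suggests.
new_facility = np.array([[0.5, 25, 1, 33.0]])
print(model.predict_proba(new_facility))  # [P(decline), P(accept)]
```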
What happens? The need for human
knowledge and judgment hasn’t disappeared—you
still require skilled, experienced
employees. But you have changed
the game, using machines to replicate
best human practice. The decision process
now leads to results that are:
• Generally better. Incorporating expert
knowledge makes for more-accurate,
higher-quality decisions.
• More consistent. You have reduced
the variability of decision outcomes.
• More scalable. You can add underwriters
as your business grows and bring
them up to speed more quickly.
In addition, you have suddenly increased
your organization’s test-and-learn
capability. Every outcome for every
insured facility feeds back into the
modeling process, so the model gets better
and better. So do the decisions that
rely on it.
Using analytics in this way is no small
matter. You’ll find that decision processes
are affected. And not only do you
need to build the technological capabilities,
you’ll also need to ensure that your
people adopt and use the new tools. The
human element can sidetrack otherwise-promising
experiments.
We know from extensive research that
decisions matter. Companies that make
better and faster decisions, and implement
them effectively, turn in better financial performance than rivals and
peers. Focused application of analytics
tools can help companies make better,
quicker decisions—particularly in that
broad middle range—and improve their
performance accordingly.
Originally published on HBR.org
September 9, 2014
Michael C. Mankins is a partner at Bain &
Company. He is based in San Francisco and
formerly led Bain’s organization practice in
the Americas. Lori Sherer is a partner at
Bain & Company in San Francisco and heads
the firm’s advanced analytics practice.
READER COMMENTS
The senior underwriters on which the
system is based gained experience by
making mistakes. Where will you find
your next generation of performing underwriters
if the new ones only copy what the
machines did?
—Guillaume Liénard,
corporate finance manager,
Allianz Worldwide Partners
The problem with analytics projects is the
assumption that everyone knows what
to measure. Important parts of expert
performance are tacit and unconsciously
competent. You can codify expert decision
strategies into processes and algorithms
only once you know how the best perform.
—Ian McKenna,
managing consultant, Celevere
What Research
Tells Us About
Making Accurate
Predictions
by Walter Frick
“PREDICTION IS very difficult,” the old
chestnut goes, “especially about the
future.” And for years, social science
agreed. Numerous studies detailed the
forecasting failures of even so-called
experts. Predicting the future is just too
hard, the thinking went; HBR even published
an article (“Six Rules for Effective
Forecasting,” July–August 2007) about
how the art of forecasting wasn’t really
about prediction at all.
That’s changing, thanks to new
research.
We know far more about prediction
than we used to, including the fact that
some of us are better at it than others.
But prediction is also a learned skill, at
least in part—it’s something we can all
become better at with practice. And
that’s good news for businesses, which
have tremendous incentives to predict a
myriad of things.
The most famous research on prediction
was done by Philip Tetlock, of the
University of Pennsylvania. His seminal
book Expert Political Judgment: How
Good Is It? How Can We Know? (Princeton
University Press, 2005) provides crucial
background. Tetlock asked a group of
pundits and foreign-affairs experts to
predict geopolitical events, like whether
the Soviet Union would disintegrate by
1993. Overall, the “experts” struggled
to perform better than “dart-throwing
chimps” and were consistently less accurate
than even relatively simple statistical
algorithms. This was true of liberals
and conservatives, and regardless of professional
credentials.
But Tetlock did uncover one style of
thinking that seemed to aid prediction.
Those who preferred to consider multiple
explanations and balance them
together before making a prediction
performed better than those who relied
on a single big idea. Tetlock called the
first group “foxes” and the second group
“hedgehogs,” after the essay “The Hedgehog
and the Fox,” by Isaiah Berlin. As
Tetlock writes:
The intellectually aggressive hedgehogs
knew one big thing and sought,
under the banner of parsimony, to
expand the explanatory power of that
big thing to “cover” new cases; the
more eclectic foxes knew many little
things and were content to improvise
ad hoc solutions to keep pace with
a rapidly changing world.
Since the book, Tetlock and several
colleagues have been running a series
of geopolitical forecasting tournaments
(which I’ve dabbled in) to discover what
helps people make better predictions.
Over the past six months, Tetlock, Barbara
Mellers, and several of their Penn
colleagues have released three new papers
analyzing 150,000 forecasts by 743
participants (all with at least a bachelor’s
degree) competing to predict 199 world
events. One paper focuses solely on high-performing
“superforecasters,” another
looks at the entire group, and a third
makes the case for forecasting tournaments
as a research tool.
The main finding? Prediction isn’t a
hopeless enterprise—the tournament
participants did far better than blind
chance. Think about a prediction with
two possible outcomes, such as who will
win the Super Bowl. If you pick at random,
you’ll be wrong half the time. But
the best forecasters were consistently
able to cut that error rate by more than
half. As Tetlock put it to me, “The best
forecasters are hovering between the
chimp and God.”
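
The article does not name a scoring rule, but Tetlock’s tournaments grade forecasters with the Brier score: the mean squared difference between stated probabilities and actual outcomes, where 0 is perfect and 0.25 is what blind chance earns on a coin flip. A minimal sketch, with hypothetical forecasts:

```python
# Brier score: mean squared error between probability forecasts (0..1)
# and outcomes (0 or 1). Lower is better; 0.25 = chance on a coin flip.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

random_guess = brier([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1])  # 0.25, blind chance
confident    = brier([0.9, 0.2, 0.8, 0.7], [1, 0, 1, 1])  # 0.045, far better
print(random_guess, confident)
```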
Perhaps most notably, top predictors
managed to improve over time, and
several interventions on the part of the
researchers improved accuracy. So the
second finding is that it’s possible to get
better at prediction, and the research offers
some insights into the factors that
make a difference.
Intelligence helps. The forecasters
in Tetlock’s sample were a smart bunch,
and even within that sample, those who
scored higher on various intelligence
tests tended to make more-accurate
predictions. But intelligence mattered
more early on than it did by the end of
the tournament. It appears that when
you’re entering a new domain and trying
to make predictions, intelligence is a
big advantage. Later, once everyone has
settled in, being smart still helps, but not
quite as much.
Domain expertise helps, too. Forecasters
who scored better on a test of
political knowledge tended to make better
predictions. If that sounds obvious,
remember that Tetlock’s earlier research
found little evidence that expertise matters.
But whereas fancy appointments
and credentials might not have correlated
with good prediction in earlier research,
genuine domain expertise does
seem to.
Practice improves accuracy. The
top-performing “superforecasters” were
consistently more accurate, and only became
more so over time. A big part of that
seems to be that they practiced more,
making more predictions and participating
more in the tournament’s forums.
Teams consistently outperform individuals.
The researchers split forecasters
up randomly so that some made their
predictions on their own, while others
did so as part of a group. Groups have
their own problems and biases, as the
article “Making Dumb Groups Smarter”
(HBR, December 2014) explains, so the
researchers gave the groups training on
how to collaborate effectively. Ultimately,
those who were part of a group made
more-accurate predictions.
Teamwork also helped the superforecasters,
who after the first year were put
on teams with one another. This only
improved their accuracy. These superteams
were unique in one other way: As
time passed, most teams became more
divided in their opinions, as participants
became entrenched in their beliefs. By
contrast, the superforecaster teams
agreed more and more over time.
More-open-minded people make
better predictions. This harkens
back to Tetlock’s earlier distinction between
foxes and hedgehogs. Although
participants’ self-reported status as
“fox” or “hedgehog” didn’t predict accuracy,
a commonly used test of open-mindedness
did. Whereas some psychologists
see open-mindedness as a
personality trait that’s static within individuals
over time, there is also some
evidence that each of us can be more
or less open-minded depending on the
circumstances.
Training in probability can guard
against bias. Some of the forecasters
received training in “probabilistic reasoning,”
which basically means they
were told to look for data on how similar
cases had turned out in the past before
trying to predict the future. Humans are
surprisingly bad at this and tend to overestimate
the chances that the future will
differ from the past. The forecasters who
received this training performed better
than those who did not. (Interestingly,
a smaller group was trained in scenario
planning, but this turned out not to be
as useful as the training in probabilistic
reasoning.)
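
A toy sketch of the reference-class idea behind that training: anchor a forecast on how often similar past cases turned out a given way, then adjust only modestly for case-specific evidence. All counts below are invented for illustration.

```python
# Probabilistic (reference-class) reasoning: start from the base rate of
# similar past cases rather than from a gut impression. Numbers invented.
past_similar_cases = 200   # hypothetical reference class
past_successes = 30        # how many of those turned out "yes"
base_rate = past_successes / past_similar_cases   # 0.15

# Adjust modestly for case-specific evidence instead of
# overriding the base rate entirely, and keep the result in [0, 1].
case_adjustment = 0.05
forecast = min(max(base_rate + case_adjustment, 0.0), 1.0)
print(forecast)  # 0.20, anchored in history rather than gut feel
```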
Rushing produces bad predictions.
The longer participants deliberated before
making a forecast, the better they
did. This was particularly true for those
who were working in groups.
Revision leads to better results.
This isn’t quite the same thing as open-mindedness,
though it’s probably related.
Forecasters had the option to go back
later on and revise their predictions in response
to new information. Participants
who revised their predictions frequently
outperformed those who did so less often.
Together these findings represent a
major step forward in understanding
forecasting. Certainty is the enemy of
accurate prediction, so the unstated prerequisite
to forecasting may be admitting
that we’re usually bad at it. From there,
it’s possible to use a mix of practice and
process to improve.
However, these findings don’t speak to one of the central findings of Tetlock’s
earlier work: that humans typically
made worse predictions than algorithms.
Other research has found that one reliable
way to boost humans’ forecasting
ability is to teach them to defer to statistical
models whenever possible. And
the probabilistic reasoning training described
here really just involves teaching
humans to think like simple algorithms.
You could argue that we’re learning
how to make better predictions just in
time to be eclipsed in many domains by
machines, but the real challenge will be
in blending the two. Tetlock’s paper on
the merits of forecasting tournaments is
also about the value of aggregating the
wisdom of the crowd using algorithms.
Ultimately, a mix of data and human
intelligence is likely to outperform either
on its own. The next challenge is
finding the right algorithm to put them
together.
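
As one concrete illustration of algorithmic aggregation, a technique documented in the forecasting-tournament literature is to average the crowd’s probabilities and then “extremize” the average, since pooled forecasts tend to be underconfident. The exponent below is a tunable assumption, and the individual forecasts are hypothetical.

```python
# Aggregate crowd forecasts: average the probabilities, then push the
# average away from 0.5 ("extremizing"), since pooled forecasts tend
# to be underconfident. The exponent a is a tunable assumption.
def aggregate(probabilities, a=2.0):
    p = sum(probabilities) / len(probabilities)   # simple crowd average
    return p**a / (p**a + (1 - p)**a)

forecasts = [0.6, 0.7, 0.65, 0.8, 0.55]  # hypothetical individual forecasts
print(aggregate(forecasts))  # ~0.79, sharper than the raw mean of 0.66
```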
Originally published on HBR.org
February 2, 2015
Walter Frick is a senior associate editor
at Harvard Business Review.
READER COMMENTS
As data scientists, our goals are to get the
most out of a computer using its data collection
and analysis while acknowledging
its flaws, and then work with human decision
makers, knowing they also are flawed,
to validate the computer’s analysis. This
allows us to get the best of both worlds to
improve predictions.
—Michael Fischer,
president, MF Consultants
No machine in the world would be
able to predict the irrational choices
people make, particularly when under
stress. Data is nice, but we use it to help
us figure out if we are more or less likely
to be wrong about our predictions, not to
help us determine if we are more or less
likely to be right.
Solution
The article focuses on identifying the most common habits that lead to poor decision making. It shows that laziness, such as failing to check facts or gather enough data, is one of the major reasons people make ineffective decisions. A lack of clarity about the vision and its strategic context is another reason people settle on terrible options. Failing to anticipate unexpected events also causes turbulence that ultimately leads to futile decisions. Indecision and remaining locked in the past are other common behavioural issues that lead to poor decisions. The article identifies these habits and substantiates them with examples. It also argues that the inability to ask good questions leads to half-baked decisions: one should gather more information by asking relevant questions and then analyse it to build a comprehensive understanding, which supports evaluating the various possible options and ultimately selecting the best one. This process plays an important role in effective decision making.
The article is effective in explaining the habits that lead to terrible decision making, but it does not substantiate them with rigorous scientific evidence. The results are based on a study of a large sample, yet the article does not specify the mix of participants. The study also does not distinguish between personal and professional decisions, even though the relevant factors change with the situation. The article emphasizes asking questions, but it does not say enough about which types of questions need to be asked and when, so the reader is left without a clear sense of their relative relevance. The article could also offer a template or framework for step-by-step effective decision making; a reference to such a practical approach would make it more interesting and relevant. Currently, it is behavioural in nature.





























