Google takes performance management very seriously. As a modern, forward-thinking, technology-driven organization, Google applies the best lessons and tools of both emotional and artificial intelligence. There is no “Human Resources” department at the company; at Google, it is called “People Operations.”
Dave P. (name changed for anonymity) has worked at Google since 2007, more than half of the company’s
lifetime (Google started in 1998). He is
an executive in the Global Travel Business unit. Of the “People Operations” naming, Dave says,
“I feel like the renaming of it was pretty easy because that’s what the role
is, and not like this static function that jumped in at evaluation time.”
In the book “Work Rules!: Insights
from Inside Google That Will Transform How You Live and Lead,” Senior Vice
President of People Operations at Google (2006-2016), Laszlo Bock, writes,
“performance management as practiced by most organizations has become a
rule-based, bureaucratic process, existing as an end in itself rather than
actually shaping performance. Employees
hate it. Managers hate it. Even HR
departments hate it.” Throughout his tenure, Bock and Google contemplated and worked on ways to disrupt the existing system, to make it better and more useful, and to create a consensus of fairness among its employees. Google has done this through goal
setting, performance measurement, peer feedback, a unique calibration process,
and by splitting the rewards conversation from the development conversation.
Goal setting is certainly not
unique to Google. But the company has a
particular terminology for its goal metrics: OKRs, or Objectives and Key
Results. These must be “specific,
measurable, and verifiable; if you achieve all your results, you’ve obtained
your objective,” according to Bock. Company-wide
OKRs are set by CEO Larry Page at the beginning of each quarter. From there, employees can align their
individual OKRs with those of the company.
In full transparency, every employee’s OKRs are visible to all other
employees on an internal website, “right next to their phone number and office
location.” Goals are also purposely ambitious, with the understanding that employees will not be able to hit them all 100 percent of the time. According
to Dave P., “If someone’s goal was to sign four deals, and they sign three, you
are factoring that into their scoring.
Top-level OKRs roll up to management.
You have to hit those or be close to those. You shouldn’t be 100%, but you should be 70%
or 80%.”
How performance is measured at
Google has changed over the years. Up
until 2013, performance was measured on a 41-point rating scale, from 1.0
(awful) to 5.0 (astounding), and on a quarterly basis. “Our 41-point scale came out of our
engineering DNA,” said Bock. “It felt
satisfyingly precise to be able to draw a distinction between a 3.3 and a 3.4
performer.” But, according to Bock, these
precise ratings – which were painstakingly calculated by managers – did not end
up being a precise tool for determining pay and promotion. And Dave P. says a lot of it actually ended up being “excruciating” and “guesswork,” trying to assess whether an employee had gone up or down by 0.1 of a point. The rigor of the quarterly ratings system also became a lot to bear. It ended up feeling like employees and managers were continually cycling in and out of rating periods. “When it was quarterly, it was relentless,”
notes Dave P. “You just finished and six weeks later you are already planning
for the next cycle. I think it was
hard. How much really changes (in such
short periods of time)?”
In 2013, Google reviewed its
performance management process in search of major improvement. The company looked at options like removing job levels or having hundreds of job levels to deal with the issues inherent in being promoted or not promoted, and it considered changing the review interval. It also took another look at
the 41-point rating system. Bock says that the review brought clarity on
three particular items: 1) consensus was impossible; 2) people took performance
management seriously; 3) experimentation was vital. Where Google landed with its performance
management process was on biannual (every six months) reviews and a 5-point
rating scale: needs improvement, consistently meets expectations, exceeds
expectations, strongly exceeds expectations, and superb. Dave P. truly appreciated the change from
quarterly to biannual reviews and said the scale change “made our lives easier
as managers. It (the previous 41-point
scale) was excruciating.”
With the sheer number of employees at Google, making sure the review process is fair while finalizing each employee’s rating would seem like a Herculean task. The
company uses a process called calibration as a final and critical step in its reviews.
How it works, according to Bock, is that “a group of five to ten managers meet
and project on a wall their fifty to a thousand employees, discuss individuals,
and agree on a fair rating.” The process
can take “hours and hours and hours,” according to Dave P. “You’re listening to
the manager talk about what they did, and how they did it. The how becomes very important. And why they gave them (direct reports) a
certain score. You go through it, you
challenge, you ask questions, and everybody through consensus and agreement,
says ‘yeah, that sounds right.’ And then, somebody else goes with somebody at
the same role and the same level. If
this one (employee) exceeds (expectations) and this one (employee) meets
(expectations), does it sound right to you based on what everyone’s heard?” This
ensures that a manager does not feel pressure to inflate the ratings of their direct reports, and it gives each employee a broader consensus view of the job they are doing at Google. It is also important that this exercise yields an expected spread of ratings, with as few outliers as possible.
“Not to force a single distribution (of ratings) but to understand why
some teams might have different distributions,” notes Bock. As ratings across the different departments and teams roll up to management and the very top of the company, these calibrated results actually end up being a tool of
credibility. “If you go in at 70% (of
your employees) exceeds (expectations), you’re going to lose credibility,” says
Dave P. On the flip side, that same
credibility that is built through the calibration process also gives managers the
footing to argue that their team members did in fact “over-index”
(a word used again and again at Google) in that review period. “In some (review) cycles,” Dave P. asserts, “…the
work they did was so good that I can justify it. You can’t do that unless you build
credibility along the way, that you are grading fairly across the
process.”
A major goal of calibration is to
counteract bias. At the start of each calibration
session, Google managers are presented with a handout outlining the various
kinds of biases to be cognizant of. In most sessions, according to Dave P., a
“people person” will join the meeting and actually walk the participants through
the bias information. According to an
excerpt of a sample handout found in Bock’s book, the kinds of biases Google managers
are presented with are Horns & Halo Effects, Recency Effect, Fundamental
Attribution Error, Central Tendency, and Availability Bias. “If you’ve been at this a while, you just
kind of know it,” says Dave P., of the bias refresh at the beginning of
calibration sessions. “But it just
re-calibrates (everything) at the beginning of it (the session). You’re pausing for a second. You go into it fresh. It clears your mind a
bit, so you don’t have any (preconceived notions).”
Another performance
management hallmark at Google is splitting the review process in two so that
the development conversation is held at a different time than the reward
conversation. Typically, annual reviews
happen in November, and pay discussions in December. This is done because Google wants to counteract
the potentially undermining effects that extrinsic motivation (rewards, compensation, promotions) can have on intrinsic motivation (development, learning, and self-actualization). The underpinning for this line of thinking
comes from Self-Determination Theory, devised by two University of Rochester psychologists,
Richard M. Ryan and Edward L. Deci. “If
you want people to grow,” says Bock, “don’t have those two conversations at the
same time.” Dave P. says it can still be
a challenge to always separate the two, but it is essential to make sure the
conversations are split. “Part of the
process is sometimes difficult because you are separating how they did against
certain metrics and results that they were mapped to, versus career planning,
versus compensation and things like that.
I think it’s the tendency of some people to try to conflate it all. I try to keep it as bifurcated as possible.”
Peer feedback rounds out the performance management process at Google. Annually, Google employees receive feedback
not just from their managers, but also from those they work with and even those
junior to them. These peer reviewers are
selected by the employee and their manager.
Dave P. says that within the biannual reviews, one is more important
than the other and that you and your manager can choose the one that is going
to be “a light cycle.” He still feels
that you should take both just as seriously, though, especially when it comes
to peer reviews. “You should do the
right peer reviews,” he says. “Picking peers who are not necessarily going to
say great things about you, that can give an honest evaluation. They may say great things, but you’re not
cherry picking who is going to give your evaluations.” Managers are also evaluated during this
period, anonymously. Dave P. has found
these useful for ensuring that there are no “blind spots” in how he thinks he
is managing his team.
As an individual who gives many performance reviews himself, Dave P. offered an interesting perspective on what it is like to receive his own reviews and appraisals. He
says that he uses them as a chance to reflect and take a pause. “I always try to listen because the tendency is
- when you are getting feedback that is not exactly what you want - to
interrupt and provide context or excuses…I appreciate when people take the
feedback, they’re not whining about it, they want to get better as a result of
it. I should be the same way. I shouldn’t be defensive. I should be open to it. I should acknowledge where I have gaps. I should acknowledge stuff up front.”
Dave P. also says that developing
and honing emotional intelligence has been necessary for his path at
Google: first, in being able to humbly acknowledge his own deficiencies when looking at himself; but most importantly, in how he works with and manages his teams. Google bases much
of its performance management of managers and senior managers on how they
develop their people. “If you are not
developing your people,” says Dave P., “and you are still all in it for
yourself, in the end that shows… There’s got to be a piece of this that you
really care about your people and what happens to them.”
Promotions
at Google happen in much the same way as everything discussed so far: committees decide, calibrating against the previous year’s promotions and clearly articulated standards. Google also
allows employees to nominate themselves for promotion.
Google has
a culture all its own, a strong analytical foundation, and an evident care built into its processes. This is clear in how the company handles performance management, especially its focus on goal setting, performance measurement, peer feedback, calibration, and splitting rewards and development. It is further bolstered
by infusing bias awareness, emotional intelligence, and culture development
into the mix. Its performance management
acumen should stand as an exemplar in the field.