Copyright 1996 AXIS Performance Advisors, Inc.
In late 1995 and early 1996, AXIS conducted a survey to discover how organizations were handling “performance appraisal” in self-directed teams. It was not easy to find respondents, for many organizations have not gotten to this point in their development of teams. Through the assistance of the Association for Quality and Participation, we were able to find over 40 qualified organizations from across the US and Canada to tell us what they are doing and what they have learned. This report summarizes our findings. Since the sample size is small, we caution against taking this research as the last word, but the results ring true in our experience.
Peer review is a process through which team members give one another formal feedback on their performance. It usually replaces the traditional performance appraisal/evaluation process in team-based organizations.
Teams quickly discover that the traditional performance appraisal process (where managers sit down one-on-one with employees to rate and discuss their performance) does not work in a team setting. Some of the most troubling problems include:
Unfortunately, what we have seen many organizations do is ask every team member to complete an appraisal of every other employee. This does correct one of the problems; now team members have input where managers only used to tread. However, this multiplies the other problems.
Let’s just look at the issue of time and do the math. The average traditional performance appraisal takes five hours of a manager’s time per employee.
Traditional appraisal: 5 hrs. x 20 employees = 100 hours
Peer appraisal: 5 hrs. x 20 reviewers x 20 employees = 2,000 hours
That’s an entire person-year, just to do performance appraisals! Now imagine trying to get people to do this process several times a year. No way! The process is driving the frequency instead of the need.
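The math above can be sketched as a quick calculation. This is only an illustration of the scaling; the 5-hour and 20-person figures are the round numbers used in the report, which counts a full 20 x 20 reviews rather than excluding self-reviews:

```python
HOURS_PER_REVIEW = 5  # average hours a traditional appraisal takes per employee
TEAM_SIZE = 20        # employees on the team

# Traditional: one manager appraises each employee once.
manager_hours = HOURS_PER_REVIEW * TEAM_SIZE

# Naive peer review: every member appraises every other member,
# so the workload grows with the square of team size
# (the report rounds this to 20 x 20 = 400 reviews).
peer_hours = HOURS_PER_REVIEW * TEAM_SIZE * TEAM_SIZE

print(manager_hours)  # 100 hours
print(peer_hours)     # 2000 hours, roughly one person-year
```

Doubling the team to 40 would quadruple the peer-review workload to 8,000 hours, which is why the all-review-all approach collapses as teams grow.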
So what does work? Here’s what we learned….
We surveyed over 40 organizations from across the United States and Canada. We asked them to indicate which attributes described their peer review process and then to tell us whether those attributes were effective or ineffective. The attributes are listed in the chart in order from the highest satisfaction ratings to the lowest.
The majority of organizations (57%) did peer reviews once per year. Seventeen percent did it twice a year. The range was less than once a year to as many as five times per year. [Comment: We believe that in most environments, people need feedback 3-4 times per year just to keep up with changing priorities and the required pace of improvements. It seems that many organizations have not yet solved the frequency problem.]
We also asked, “How long after beginning teams did you implement peer reviews?” and whether that was too soon, too late, or just right. For those who responded “just right,” the average wait was 1.7 years, although responses ranged from six months to three years.
We wanted to find out if the organizations had reduced the time it takes to provide this feedback, so we asked how long it took to complete a peer review on one team member. The responses ranged so dramatically that we do not trust the data. Because the peer review systems varied widely with some focusing on individuals and others on teams, it would have been easy to misinterpret the question. We need to ask the question in another way to get more reliable answers.
Sources of Feedback—The best sources of feedback appear to be, in this order: customers first, then team members, then managers. The highest satisfaction rating (100%) involved making customers an integral part of the peer review process (beyond just completing a customer satisfaction survey). Getting feedback from all team members was better than getting it from a subset of the team. Manager feedback was also viewed as important.
Form—Almost all organizations (94%) used a standard form, but those that allowed for customization were happier with the results.
Ratings—If there is any rating of individuals, it should be based on objective standards, not subjective ones. (Subjective standards include such attributes as teamwork, initiative, etc.) However, having absolutely no rating or ranking of individuals received a high 83% satisfaction rating.
Process—Doing peer reviews in a team setting received a much higher satisfaction score (80%) than doing them in a one-on-one setting (50%). Focusing on team performance was viewed somewhat better than focusing on individual performance. (Most did both.)
Replace Performance Appraisal—A small majority let peer review replace the traditional appraisal system, and they were somewhat more satisfied (67%) than those who let the peer review process supplement it (56%).
Performance Coach—Giving people the ability to pick a performance coach, someone from whom to receive feedback, was viewed positively whereas not having control over who gives you feedback was viewed quite negatively.
Ranking—Ranking employees (from best to worst) was the only attribute that received a negative average, being viewed by those who did it as a harmful practice. Some who didn’t use ranking wrote in, “Don’t do this!”
Link to Individual Pay—Linking the peer review process to individual merit pay received very low satisfaction scores.
The compensation issue is still muddled. Linking pay to individual and team performance received low satisfaction scores (27% and 50% respectively). But not linking to pay also received a low rating (33%). We interpret this to mean that those who have linked peer review to pay are seeing negative unintended consequences, while those who have not linked their system to pay (and who may not be aware of these negative consequences) think that a link to compensation would help their system.
Here is a sampling of narrative comments describing their lessons learned:
We asked respondents to tell us what they did to prepare team members and to support the process. Helping teams establish ground rules for the peer review process received the highest satisfaction rating (96%). Using a neutral facilitator to run the meeting was also viewed very positively. Training on the following skills received satisfaction ratings above 50%:
We were surprised to find low satisfaction scores on collecting team performance data, as well as on training in conflict management and legal issues.
Resources respondents would recommend to you: