Flawed Feedback: The Problem with Peer Reviews
May 14, 2024
People leverage 360-degree feedback systems and peer evaluations for personal gain.
When it comes to performance reviews, managers have traditionally held the reins, assessing employee contributions based on their observations and insights. But this approach can be flawed, as managers may harbour biases or lack a complete picture of each team member’s efforts.
To address these limitations, some companies have adopted peer evaluations, in which colleagues provide feedback on one another, a concept many will know as 360-degree feedback. This practice is popular in flat organisations such as GitLab, Spotify and ING Bank, but it has also gained traction in traditional hierarchical organisations.
While peer evaluations can provide a broader perspective and more holistic assessment of individual performance, they put individuals in a position where they both evaluate their colleagues and are evaluated by them. When peer evaluations are transparent, individuals may use them strategically to present a certain image of themselves or shape how others perceive them.
Aware of the potential transparency of peer reviews, individuals tend to adjust how they evaluate. They cannot simply give everyone glowing reviews, as they need to portray themselves as critical evaluators with high standards. However, they must also be careful not to offend anyone, as this could invite retaliation.
Our recent research on peer evaluations reveals that individuals on the verge of being evaluated by others carefully select the colleagues they evaluate.
People are less likely to review others when their feedback may offend someone or when their evaluation holds weight and could significantly impact the individual’s overall assessment. Instead, they choose to negatively evaluate colleagues in cases where the outcome is already obvious.
Gaming peer reviews to gain an advantage
We explored this behaviour within Wikipedia, where a transparent peer evaluation process determines which members become administrators with greater authority to restrict page edits, block users or delete pages. Members evaluate candidates based on various factors, including their past contributions and evaluations.
Our study covered 3,434 evaluation processes from 2003 to 2014, including over 187,800 evaluations from 10,660 members. We focused on three key factors: whether the member was about to be evaluated themselves, how pivotal an evaluation was (its potential impact on a candidate’s chances) and the candidate’s activity level (their participation in other evaluations). We also interviewed 24 active members of the community.
Our findings revealed that individuals facing their own upcoming evaluations tend to participate in more peer evaluations. However, they are less likely to evaluate someone when their feedback might offend the candidate or when their review could significantly affect the candidate's overall assessment.
One interviewee commented that many “don’t want to go against the majority. So, you tend to get herd behaviour.” Another, reflecting on the period before his nomination, explained that he often waited until he could better understand what others were thinking: “[It’s] helpful to vote later … you already see other people’s rationales.”
In general, our interviews confirmed that members were wary of casting pivotal evaluations. As one remarked, “I will only put myself in a position that I’m confident of and my reasoning would be sound when I made that final decision, especially a pivotal decision that requires the highest levels of impartiality, balance, fairness and objectivity.”
However, this does not mean that members avoided providing negative evaluations altogether. We found that they minimised the risk of backlash by only evaluating inactive members. Interestingly, we found no evidence that they concentrated their positive evaluations on active peers, suggesting that they avoid negative reciprocity but do not attempt to invoke reciprocal positive evaluations.
When asked whether candidates would evaluate active members negatively, one interviewee responded, “I think they avoid conflict. I think they avoid pissing anyone off who might be influential.” This strategic use of peer evaluations proved effective: it made members more likely to receive positive evaluations themselves. Specifically, candidates who behaved strategically (by doing more evaluations, avoiding negative reviews of active candidates and steering clear of pivotal evaluations) significantly increased their chances of becoming an administrator.
This suggests that individuals can leverage the feedback they provide to shape their image and boost their chances of success.
Designing fair peer evaluations
While Wikipedia’s fully transparent approach has been shown to influence evaluation behaviour, other forms of transparency may have similar effects. Even in double-blind evaluations, where the identity of reviewers is concealed, individuals may still adjust their evaluations strategically, aware that their past assessments may be known by others when they are evaluated.
Even when feedback is not made public, there exists a degree of transparency in the peer review process. Informal networks within organisations facilitate the spread of information, rumours and gossip, making it challenging to maintain complete anonymity. For instance, a colleague overseeing the evaluation process may share gossip about how one person evaluated another, and this information can circulate quickly. As long as there is some degree of transparency, whether intentional or not, individuals may feel compelled to tailor their evaluations to protect their own reputation.
However, transparency can also have positive consequences. It can increase engagement and allow colleagues to monitor one another, potentially detecting dishonest behaviour. In the case of Wikipedia, the transparent evaluation process pushed members to weigh their assessments and justify their decisions carefully. Moreover, if members perceived an evaluation as unfair, they could approach the evaluator directly to discuss the issue.
While transparency can lead to strategic behaviour and potential manipulation, it can also promote accountability and fairness within organisations. Whether to implement transparent peer reviews ultimately depends on the specific context and goals of the organisation.
Organisations need to recognise that peer evaluations are not just mechanisms for providing honest feedback; they are also platforms for individuals to position themselves and exert influence. By acknowledging this strategic aspect, organisations can implement safeguards to mitigate biases, encourage constructive feedback and promote a culture of accountability.