Men and women biased about studies of STEM gender bias – in opposite directions
David Miller, Northwestern University
In 2012, an experiment on gender bias shook the scientific community by showing that science faculty favor male college graduates over equally qualified women applying for lab manager positions. Though the study was rigorous, many didn’t believe it.
“This report is JUNK science. There is no data here,” said one online commenter. Others justified the bias saying, “In every competitive situation, with a few exceptions, the women I worked with were NOT competent.”
Now, a study published in the Proceedings of the National Academy of Sciences (PNAS) provides crucial clues about why some people were critical of the original finding – and other studies that have followed. The new study’s authors reasoned that men especially might devalue the evidence because it threatens the legitimacy of their status in science, technology, engineering and mathematics (STEM) fields. Men might also be critical because of prior beliefs that gender bias is not a problem in STEM.
Assessing the original study
To test these ideas, the researchers recruited 205 people from the general public and 205 Montana State University tenure-track faculty. These participants read and then evaluated the abstract of the now-famous 2012 study also published in PNAS.
The abstract noted:
In a randomized double-blind study (n = 127), science faculty from research-intensive universities rated the application materials of a student — who was randomly assigned either a male or female name — for a laboratory manager position. Faculty participants rated the male applicant as significantly more competent and hireable than the (identical) female applicant.
Men rated the research quality of the abstract less favorably than women did in both samples. This gender gap was especially large among STEM faculty, suggesting that evidence of bias may threaten men in STEM who seek to retain their status.
When reading these results, a male scientist might think, “oh my gosh…if we’re going to fix this equality issue, that almost necessarily means that there’s going to be fewer opportunities for men,” said Ian Handley, lead author of the new PNAS paper and associate professor of psychology at Montana State. Handley suggested that discounting evidence more likely reflects a subtle, unconscious process than overt sexism.
The researchers also tested for gender bias toward the abstract’s authors. Participants were randomly assigned to read an abstract identifying the first author’s first name as either “Karen” or “Brian.” Either way, “Karen’s” and “Brian’s” research was evaluated the same overall. In other words, the first author’s perceived gender didn’t affect what participants thought of the research itself.
This lack of author gender bias replicates prior research. Both experimental and real-world data typically show little to no gender bias in peer review. However, notable exceptions are sometimes found.
This evidence of mostly gender-fair peer review is encouraging. But men, especially those in STEM, are still more reluctant overall to accept the evidence of bias when it does exist. That reluctance could hamper efforts to address bias, because men hold the majority of top positions in STEM. In 2010, for instance, men were 65% of full professors in psychology, 76% in life science and 92% in physics.
“We can’t try to solve a problem if we don’t know it exists,” said Jessi Smith, professor of psychology at Montana State and coauthor of the new PNAS study.
Women have their own biases
A third study tested how people respond to studies finding no bias. This addition is important because some facets of academia such as peer review don’t always show bias. Researchers therefore randomly assigned 303 participants from the general public to read an abstract that either reported bias favoring men or reported no bias.
Even though the research methods were identical across conditions, women rated the quality of the research higher when the abstract showed bias than when it didn’t. Men showed the reverse pattern. So both genders were biased, but in opposite directions.
Fairly evaluating gender bias research
The results suggest challenges in fairly evaluating gender bias research. People may unintentionally ignore evidence if it conflicts with their social identities or prior beliefs. Special care should be taken to seek disconfirming evidence. For instance, the new paper made claims about “robust gender biases documented repeatedly,” but could have also noted the vigorous scholarly debate about such claims.
The paper argues that “numerous experimental findings” provide “copious evidence” of gender bias. But studies have found mixed evidence. For instance, the paper notes an experiment showing bias against female psychology tenure-track applicants. But experiments conducted 15+ years later show opposite results. In fact, several studies show a preference for female applicants in real-world faculty searches, not just hypothetical ones.
These results collectively suggest some biases are weakening over time, consistent with other related evidence. For instance, the bachelor’s-to-PhD pipeline no longer leaks more women than men, as it did among college graduates in the 1970s.
This mixed literature tempers the paper’s claims about strong gender bias. But obviously, the paper’s central goal was not to systematically review literature on gender bias, but rather to present studies of reactions to evidence of bias.
Communicating controversial research with caution
Understanding how bias varies can help target action and use limited resources wisely. Nevertheless, failures to carefully communicate this nuanced research can easily unravel progress.
In 2014, for instance, Cornell University professors Wendy Williams and Stephen Ceci wrote a New York Times op-ed about their 67-page review of literature on women in academic science. The full-length review was rigorous and expansive in scope. But the op-ed was a disaster in science communication.
The NYT wrote the headline “Academic Science Isn’t Sexist,” which ignited understandable outrage. Ceci called the headline “sensationalistic” and “offensive” in an email to me. He explained the headline was inappropriate because their review “reported some areas of sex differences (eg, tenure being harder for women in biology).”
Ceci stands by the conclusion of “largely gender-fair outcomes for professors,” but also agrees the exceptions are important. Based on the best current data, remaining challenges include sexual harassment, bias in teaching evaluations and science mentoring, and gender stereotypes about innate genius and creativity. My own research spanning 66 nations also shows robust implicit stereotypes associating science with men, even in supposedly “gender-equal” nations like Sweden. The NYT op-ed should have done more to explicitly discuss these notable problems.
The new PNAS study shows that men, on average, are less likely to believe this evidence of gender bias where it exists. And that’s a concern, considering men are the current majority of STEM professors. But it’s also a concern if the evidence of gender bias is overhyped. Overhyped claims could make these fields unattractive to women or even make people less likely to believe evidence of bias when it does exist.
Pushing the debate forward
The new study affirms that we all have biases to varying degrees. So no one should feel smug about being free of bias, or impugn others for having it.
In my case, I should interrogate how my identity as a gay white male liberal academic shapes my judgments. I doubt I can ever be truly free of my biases. But I can help minimize them by seeking to learn from those with different views.
Progress in science requires actively engaging in and learning from debate with others, even if we may find their views offensive. Civil discussion can be challenging with controversial topics such as gender bias. But, to flourish, the science needs the debate.
David Miller, Doctoral Student in Psychology, Northwestern University
This article was originally published on The Conversation. Read the original article.