Broad men and narrow women: the perils of soundbite science

Last week a few people asked me what I made of a new study that was generating some interest on social media. At the time I hadn’t read it: I only knew Nature had reported it under a headline–‘Male researchers’ “vague” language more likely to win grants’–that made it sound both baffling (why would scientists get points for being vague?) and infuriating (as usual, it seemed to be men who were benefiting and women who were losing out). So I decided to investigate further, and then share my conclusions in this post.

The study was conducted by researchers at the US National Bureau of Economic Research (NBER), and their write-up is available as an NBER Working Paper. The data they analysed consisted of 6794 grant applications submitted to the Bill & Melinda Gates Foundation, which operates a policy of anonymous reviewing. Because reviewers weren’t told whether applicants were men or women, the researchers assumed that any gender differences in success rates could not be the result of direct discrimination. Whatever was leading reviewers to favour men must be contained in the application itself. And since most of a grant application consists of words, they decided to look for gender-differentiated patterns of word-use.

What their analysis revealed was a tendency for reviewers to give higher scores to applications that contained ‘broad’ words and lower scores to those that used ‘narrow’ words. Since broad words were used more frequently in men’s proposals, while narrow words appeared more often in women’s, this preference for broad over narrow words was also a preference for male- over female-authored applications. The researchers found no reason to think that broad words were associated with better proposals. When they looked at what applicants had gone on to achieve, the words used in their proposals appeared to be a poor predictor of research quality. Overall, then, the study’s conclusion was as infuriating as the Nature headline suggested: men whose research was objectively no better than women’s were receiving more funding from the Gates Foundation because reviewers preferred a particular style of grant writing.

One question the researchers didn’t attempt to answer was why men and women writing grant proposals might favour, respectively, ‘broad’ and ‘narrow’ words. But many people who commented on their findings thought the answer was obvious: simply and bluntly put, men–or at least a higher proportion of men–are bullshitters. Whereas women offer specific, realistic accounts of what they think their research can deliver, men have fewer inhibitions about making sweeping, grandiose claims.

This take is an example of a common interpretive strategy. If you present people with a generalization about language and gender—especially one whose significance isn’t immediately obvious—they will often try to make sense of it by invoking some other, more generic gender stereotype. In this case what they did was map the alleged linguistic difference (‘men use broad words, women use narrow words’) onto a higher-level, more familiar male-female opposition: ‘men are over-confident, women are over-cautious’.

You might ask: what’s wrong with that? Stereotypes aren’t always false: there’s plenty of other research you could cite in support of the thesis that men are over-confident (for instance, experimental studies showing that male test-takers consistently overestimate how well they’ve done, or the fact that men are more likely than women to apply for jobs when they don’t meet the advertised criteria). I don’t dispute any of that: in fact, I agree that ‘men are over-confident and women are over-cautious’ captures a real and significant cultural tendency. But there are, nevertheless, some problems with using it to explain the findings of this study.

One general problem is that you can use the same interpretive strategy to explain pretty much any set of findings, including made-up ones. Suppose I told you the study had found that men use narrow words and women use broad words (i.e., the opposite of what it actually found). You’d be able to come up with an equally plausible explanation for that (non) finding just by switching to a different gender stereotype. Instead of ‘men use broad words because they’re overconfident bullshitters’ you might suggest that ‘women use broad words because they’re more attuned to their readers’ needs’; or ‘men use narrow words to show off their expert knowledge’. Since the supply of gender stereotypes is inexhaustible, there’s no statement of the form ‘men do x and women do y’ that can’t be slotted into this explanatory frame.

In the case of the NBER study, though, there’s a more specific problem with explaining men’s use of broad words as a linguistic manifestation of their over-confidence. When the researchers use the terms ‘broad’ and ‘narrow’, they don’t mean what people have assumed they mean (i.e., what the words would mean in ordinary English).

By way of illustration, here’s a list of six words taken from the study: three of them were classified as ‘broad’ and the other three as ‘narrow’. Which do you think are which?

  1. bacteria
  2. brain
  3. community
  4. detection
  5. health
  6. therapy

My guess is that you defined words as ‘broad’ if they were just basic, everyday vocabulary, and ‘narrow’ if they were a bit more abstract and technical. On that basis you probably categorised ‘health’, ‘brain’ and ‘community’ as broad and ‘bacteria’, ‘detection’ and ‘therapy’ as narrow. That wasn’t, however, what the researchers did. Their definition wasn’t based on the characteristics of the words themselves, but on their frequency and distribution in the sample. Broad words were those that occurred in proposals on a wide range of different research topics; narrow words were restricted to proposals on a particular topic. By those criteria, ‘bacteria’, ‘detection’ and ‘therapy’ were broad, whereas ‘brain’, ‘community’ and ‘health’ were narrow.
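For readers who want to see how a distribution-based criterion works in practice, here is a deliberately simplified sketch. It is not the NBER researchers’ actual procedure: the function, the threshold and the mini-corpus are all invented for illustration. The idea is just that a word’s ‘breadth’ is measured by how many distinct research topics it turns up in, not by anything about the word itself.

```python
from collections import defaultdict

def classify_words(proposals, breadth_threshold=2):
    """Toy illustration (not the NBER authors' actual method):
    label a word 'broad' if it appears in proposals on at least
    `breadth_threshold` distinct topics, and 'narrow' otherwise."""
    topics_per_word = defaultdict(set)
    for topic, text in proposals:
        # Record which topics each word occurs under (once per proposal).
        for word in set(text.lower().split()):
            topics_per_word[word].add(topic)
    return {
        word: "broad" if len(topics) >= breadth_threshold else "narrow"
        for word, topics in topics_per_word.items()
    }

# Hypothetical mini-corpus of (research topic, abstract text) pairs.
proposals = [
    ("malaria", "detection of bacteria in blood"),
    ("sanitation", "bacteria detection in community water and health programmes"),
    ("oncology", "therapy response and early detection"),
    ("neuroscience", "brain imaging during therapy sessions"),
]

labels = classify_words(proposals, breadth_threshold=2)
```

On this invented corpus, ‘bacteria’, ‘detection’ and ‘therapy’ come out broad (they cross topics) while ‘brain’, ‘community’ and ‘health’ come out narrow (each is confined to one topic), reproducing the counter-intuitive pattern described above.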

If you think these definitions are confusing, I agree: the researchers might have done better to choose a different pair of terms (like, say, ‘core words’ and ‘peripheral words’). But once you’ve understood how they made their broad/narrow distinction and looked at the words in each category, it becomes difficult to argue that what’s behind the gender difference is men’s propensity for writing grandiose bullshit and women’s dogged attention to detail. (Is ‘health’ more precise than ‘bacteria’? Is ‘therapy’ vaguer than ‘brain’, or more grandiose than ‘community’?)

The fact that so much discussion revolved around the question of explanation suggests that most people had simply accepted the findings themselves at face value. This always bothers me: in my view, any claim that men use language in one way and women use it in another should be approached with a degree of scepticism. And that’s especially true if what you’re basing your assessment on is a report in the media. For obvious reasons, the media pay most attention to studies whose findings will make an eye-catching headline or a killer soundbite; this means they have a bias towards research which makes bold rather than cautious claims (stories like ‘men and women fairly similar, study shows’, or ‘we looked, but we didn’t find anything’, are not exactly clickbait). But for feminist sceptics it’s always worth asking whether the finding everyone’s talking about is supported by any other evidence. Have other researchers found the same thing? Or have they asked similar questions and come up with different answers?

There is, in fact, other research investigating the influence of writing style on grant decisions. Earlier this year, the Journal of Language and Social Psychology published an analysis of the language used in a sample of nearly 20,000 abstracts taken from research proposals submitted to the US National Science Foundation. This study considered only successful applications, taking the amount of funding applicants had been awarded as a measure of how positively their proposals had been assessed. It found there was a relationship between the funding researchers received and the language used in their proposal abstracts, but the linguistic features which made a difference were not the same ones the NBER study identified. The NSF gave more money to applicants whose abstracts were longer than average, contained fewer common words, and were written with ‘more verbal certainty’.

But I’m not just lamenting the uncritical reception of the NBER findings on general scientific principles. It also bothers me because I know how easy it is to propagate myths about the way men and women use language. ‘Men use broad words and women use narrow words’ is exactly the sort of thing that gets mythologized–detached from its original context (a study in which, as I’ve already pointed out, it meant something completely different from what most people thought) and repeated without elaboration in dozens of other sources, until eventually it turns into one of those zombie facts–like ‘Eskimos have a lot of words for snow’, or ‘women utter three times as many words per day as men’–that refuse to die no matter how many times they’re debunked.

If it does become part of our collective folk-wisdom on this subject, there’s every chance that ‘men use broad words, women use narrow words’ will also be filtered through the kind of deficit thinking which sees whatever women do with language as a problem in need of remedial intervention. Using ‘narrow’ words could join over-apologizing, hedging and tilting your head on the list of bad habits which are said to hold women back, and which it then becomes women’s responsibility to fix. (I can already imagine the TED talks exhorting women to ‘think broad’, and the workshops for female grant applicants on ‘choosing the right words’.)

To be fair to the authors of the NBER study, that isn’t what they think should happen. As they see it, it’s the reviewers who need training: their bias towards certain ways of writing elevates style over substance and leads to less than optimal funding decisions. But it’s hard for researchers to control what people make of, or what they do with, findings that have entered the public domain. Even a study that was intended to be part of the solution can end up becoming part of the problem.

This is a dilemma for everyone who researches or writes about language and gender, myself included. Whenever I criticise some questionable claim or mistaken belief, I’m aware that I could be amplifying it just by giving it airtime. Though I’m only repeating it to explain the arguments against it, those arguments won’t necessarily be what people take away. But as you’ll have noticed, that hasn’t caused me to retreat into silence. I do believe that knowledge can set us free–but only if we’re willing to interrogate it critically.
