Pride, prejudice and pedantry

Last year I discovered the perfect gift for the supercilious arse in your life: a mug emblazoned with the legend ‘I am silently correcting your grammar’. The existence of this item testifies to the widely held belief that sneering at other people’s language-use is not just acceptable, it’s actually a virtue. When the subject is language, you can take pride in being a snob; you can even display your exquisite sensitivity by comparing yourself to a genocidal fascist (‘I’m a bit of a grammar Nazi: I can’t bear it when people use language incorrectly’).

On Twitter there’s a ‘Grammar Police’ bot whose mission is to belittle random strangers by tweeting unsolicited corrections of their ‘defective grammar’. Because, according to its profile, ‘publishing defective grammar abases oneself’.

‘Abases *oneself*’? Try ‘one’, or better, ‘you’. And maybe get your thesaurus out, because I don’t think ‘abase’ is the word you want.

What I’ve just done is an example of what I’m going to take issue with in this post: criticising the way someone has (mis)used language as a proxy for challenging their actual message. This strategy has featured prominently in critical commentary on Donald Trump: he’s been lambasted as often for his limited vocabulary, fractured syntax and inability to spell ‘hereby’ as he has for his bigotry, dishonesty and megalomania. Linguistically speaking, a lot of this commentary is wide of the mark (for a more illuminating take on Trump’s speech-style, try this). But the strategy was common long before Trump came on the scene. One of the first things I noticed when I joined Twitter in 2014 was how often liberal progressive types used the grammar-sneer to call out bigots. Like this*:

We should round all you feminazi’s up and put you on an island away from society.

we’ll be moving on to punctuation later this afternoon.

And this:

As a straight male how would u feel about yr child having a homosexual school teacher?! Who their around for 8hrs of the day?

If a gay teacher teaches my child the difference between they’re, their and there, I’m good.

The conflict that accompanied last year’s EU referendum produced a bumper crop of examples like this:

Britain was once a proud nation, but is now afraid to speak it’s own name.

and restore our ancient birthright of putting apostrophes where they don’t belong!

In the wake of the referendum, which the Leave side won, there was an upsurge of public racism and xenophobia—threats, vandalism, harassment, verbal abuse and violence targeting people perceived as ‘foreign’. Facebook pages were set up where people could report incidents they’d experienced or observed. A number of these reports followed the same formula: first they described a racist white Briton telling a non-white or non-British person to ‘start packing’ or ‘go home’, and then they commented that the racist couldn’t even speak English properly. One writer reported that she’d stood up to a white woman who harangued her in a shop, by telling her, among other things, that ‘I speak better English than you’. She explained that she’d heard the white woman speaking to someone else, and noticed that ‘her grammar was appalling’.

I’m not going to blame someone in this situation for defending herself with whatever weapons are to hand. My question is why claiming to speak better English than your adversary is so often a weapon people reach for. Why does it seem more apt, and less crass, than (for instance) ‘I’m better looking than you’ or ‘I’ve got more money than you’? Maybe it’s because it chimes with the idea that bigots are ignorant and stupid. It allows their critics to feel intellectually and culturally as well as morally superior (‘I’d hate my child to be educated by a gay teacher’. ‘Pity no one bothered educating you. Gotcha’). But however satisfying that may be, it raises the question of whether you can claim the moral high ground by using one unjust prejudice against another.

If you describe someone you’ve heard speaking in a shop as using ‘appalling’ grammar, the only thing you can mean is that s/he speaks a nonstandard dialect. In Britain, speaking a nonstandard dialect generally means that (a) you grew up working class and (b) you didn’t spend enough quality time in formal education for your native dialect to be replaced in everyday speech by the more prestigious dialect of the middle class (though you’ll use that dialect when you write, and you’ll certainly be able to read it). So, criticising a racist’s nonstandard grammar is mobilising one form of privilege (based on class and/or education) to attack another (based on whiteness). As I said before, I’m not going to blame the person who uses this tactic in self-defence. But that doesn’t mean I have to applaud the tactic.

Maybe you’re thinking: ‘but what you linguists call “nonstandard” is actually just bad English. Criticising that isn’t snobbery: anyone who goes to school for long enough to learn to read and write can learn what the correct forms are. If they haven’t learnt, it means they’re lazy. Plenty of working class people speak correctly: it’s an insult to suggest that bad grammar is good enough for them’.

Sorry, but no. Nonstandard English is not ‘bad’ by any objective criterion; it’s stigmatised because the people who use it have lower social status than the people who don’t. The actual linguistic forms used by nonstandard speakers (like, say, ‘we was’ instead of ‘we were’ or ‘she done it’ rather than ‘she did it’) are neither better nor worse than the forms we judge ‘correct’. The judgment is based on what class of person uses a particular form, and the form’s status can change as its class associations do. A hundred years ago, for instance, saying ‘ain’t’ was associated with upper-class Brits like Winston Churchill and the fictional Lord Peter Wimsey. Today it’s strictly for the lower orders, and it’s also become one of the most stigmatised of all English grammatical forms.

As for the apostrophe fetish (‘its’ and ‘it’s’, or ‘they’re’ versus ‘their’), that’s got nothing to do with grammar. The English apostrophe does mark grammatical distinctions, but the reason people make mistakes isn’t that they don’t know the difference between possessive pronouns and contracted verb forms: what they don’t know is which spelling goes with which form. The possessive form of nouns has an apostrophe (as in ‘the dog’s bowl’), so people often reason that the possessive pronoun ‘its’ should logically have one too. It’s also easy to pick the wrong option when writing in haste or on autopilot. On this one I’m with Jesus: ‘let anyone who is without sin cast the first stone’.

But there are other reasons for feminists (and other defenders of equality and social justice) to think twice before mocking a political opponent’s ‘incorrect’ use of language. Here are a few of them.

1. It’s a red herring

Earlier I mocked the creator of the Grammar Police bot for using ‘oneself’ incorrectly. It was a fine display of my superior linguistic knowledge, but it also completely missed the point. My quarrel with the bot-maker isn’t that he corrects other people’s grammar when his own is nothing to shout about. It’s that correcting strangers’ grammar in public is a shitty thing to do.

The same problem arises with the political examples I took from Twitter. In no case does the response engage directly with the tweeter’s prejudice. It says, in effect, ‘this mistake tells me you’re stupid, and if you’re stupid I can just dismiss your argument, which is also, by extension, stupid’. And the argument may indeed be stupid, but it wouldn’t be any less stupid if it were spelled correctly (just as Hitler wasn’t any less fascist because he could write a coherent sentence). Conversely, deviations from standard usage do not make a true fact less true or a just argument less just. The moral status of what someone says is about the content, not the grammar.

2. It cuts more than one way

On this blog I have complained frequently about the policing of women’s language, arguing that there’s no linguistic justification for the criticisms people make of uptalk and vocal fry, hedging, apologising, etc. What’s behind this is common or garden sexism: if a way of speaking is associated (accurately or otherwise) with women, it’s judged inferior to the male alternative. Not because it objectively is inferior, but just because women are the lower status group.

Judgments on nonstandard language work in exactly the same way, the difference being that the relevant status hierarchy is based on class and education rather than gender. So, when feminists engage in grammar policing they’re undermining their own objection to the gendered equivalent. If you dismiss someone’s argument because of a misplaced apostrophe, what do you say to the people who claim they can’t take women seriously because of their ‘shrill’ voices and annoying ‘verbal tics’?

3. It’s a vote for the status quo

People sometimes say: ‘OK, I get that what’s “correct” is arbitrary, but if you want to get your point across you have to play by the rules’. But this is not a progressive argument, because it treats ‘the rules’ as neutral rather than asking whose interests they serve. If someone defends a workplace dress-code requiring women to wear high heels as just ‘reflecting the prevailing standard for appropriate female business attire’, we don’t say, ‘oh, OK then’, we say it’s time the standard was changed.

In the case of linguistic standards, we should question why we’re so obsessed with shibboleths like ‘ain’t’ and ‘we was’ and the apostrophe, which say a lot about a person’s social background and education, but very little about how well they can actually communicate. Would any feminist suggest that the nonstandard grammar of the phrase attributed to Sojourner Truth, ‘and ain’t I a woman?’, detracts from the clarity, coherence or persuasiveness of her speech?

4. In other contexts you’d call it ‘shaming’

If you don’t think it’s acceptable to make people feel ashamed (or exploit the fact that they already feel ashamed) of their bodies, their clothes, what they eat or who they have sex with, you’re going to have to explain to me why shaming them for the way they speak or write is different.

5. Modesty becomes you

If your own grammar and spelling are 100% standard, that’s probably because you served a long apprenticeship in a series of educational institutions where, through constant practice and feedback, you acquired a set of socially valued linguistic skills which eventually became ingrained habits. Well, good for you, but let’s not get carried away. Other people have gone through a similar process to master a craft like carpentry or hairdressing. They also take pride in their skills, but they don’t mistake them for proof of superior intelligence. They don’t come to your house and laugh at the wonky shelf you made, or stop you on the street to offer unsolicited advice on blow-drying. If they did, how would you react? Which brings me to…

6. It’s counterproductive

This point is well made in a post Nic Subtirelu wrote in 2015 after Grammarly (a major player in the online culture of language pedantry) drew attention to the poor grammar and spelling it had found on Facebook pages for supporters of Donald Trump. What are the angry white working class men who came out in force for Trump in 2016 going to think about liberals making fun of him because he doesn’t use big words or complicated sentence structure? Might that not reinforce their conviction that supporting Trump is striking a blow against ‘the elite’, aka snobs who look down on anyone less educated than themselves?

Maybe your answer is that you don’t care what a bunch of racists, misogynists and homophobes think. Fine, I’m not asking you to (though I do think a commitment to social justice requires you to care about the economic inequality which is clearly a factor in the rise of right-wing populism). By all means take issue with bigots–but for their politics, not their punctuation. Criticise their views, not the size of their vocabulary. Stop using their grammar as a measure of their moral worth.

Language pedantry is snobbery and snobbery is prejudice. And that, IMHO, is nothing to be proud of.

*The examples used in this post are real, but I’m not supplying links, names, handles or screenshots because I’m not trying to single these particular authors out, I’m just illustrating something that’s very common.

Leading questions

Scene: an ordinary suburban home where A and B are getting ready to leave for work. But A’s car keys have gone missing…

A:  You’ve seen my car keys, haven’t you?

B:  Today? No, I don’t think so.

A:  When did I mention today? Just answer the question: you’ve seen my car keys, haven’t you?

B:  OK, no.

A:  You’re quite certain of that, are you?

B:  Well, no, I told you I don’t think—

A:  So you have seen them, then.

B:  I’m not sure…

A:  They were on the sideboard, weren’t they?

B:  I don’t know, I didn’t notice

A:  You’re telling this household you didn’t notice the car keys on the sideboard?

B:  um—I—

A:  I put it to you that you’re lying: the keys were on the sideboard

B:  Well, I suppose they could have been, but—

A:  Were they there or not?

B:  (confused silence)

A:  It’s a simple question, B. The keys were on the sideboard, weren’t they?

(B breaks down in tears, but at that moment C rushes in to say that the keys have been found in A’s jacket pocket, along with a Twix wrapper and 74p in change)

If someone you lived with behaved like A in this (made-up) vignette, you’d probably tell them to f*** off and stop interrogating you. Such overtly hostile questioning is rare in everyday conversation, and if it does happen you’re entitled to protest. But there’s one real-life situation where you can’t just tell the questioner to stop: the cross-examination of a witness in court.

Cross-examination is the bit where a witness is questioned by the lawyer acting for the ‘other side’. If the prosecution in a burglary case calls an eye-witness who says she saw the defendant breaking into someone’s house, the defence will want to test the strength of her evidence, and if possible take issue with her version of events. Maybe she saw someone who wasn’t, in fact, the defendant; maybe she didn’t see anything at all. If her answers suggest that her original account was mistaken, dishonest or confused, that could introduce the ‘reasonable doubt’ which will get the defendant acquitted.

There’s a reason I’ve been thinking about this recently. Earlier this month, Buzzfeed published the text of a letter written by a woman who had been raped while she lay unconscious behind a dumpster. The letter was addressed to Brock Turner, the man who had been convicted of assaulting her. Parts of it were read out in court, and when Turner was sentenced to only six months in prison (a decision which is now the focus of a campaign to recall the judge responsible) its author released the full version for publication.

As many commentators have said, the letter is a powerful document, bearing eloquent witness to the impact of sexual violence on a woman’s life. But I was also struck by what it says about the language of cross-examination. The writer describes the questions put to her by Turner’s lawyer as

…invasive, aggressive, and designed to steer me off course, to contradict myself, my sister, phrased in ways to manipulate my answers.

She goes on to give an example of this manipulative phrasing:

Instead of his attorney saying, Did you notice any abrasions? He said, You didn’t notice any abrasions, right?

‘You didn’t notice any abrasions, right?’ is what lawyers call a ‘directive leading question’: its grammatical form directs the addressee to a particular, preferred answer. My car keys vignette begins with another example: ‘you’ve seen my car keys, haven’t you?’ Grammatically, this is a ‘tag question’, a statement with a question tagged onto the end which invites the addressee to confirm the truth of the statement. The preferred answer to ‘you’ve seen my car keys, haven’t you?’ is ‘yes [I have]’; if the question had been ‘you haven’t seen my car keys, have you?’ the preferred answer would be ‘no [I haven’t]’. ‘You didn’t notice any abrasions, right?’ predicts ‘no [I didn’t]’. Whether the preferred answer is ‘yes’ or ‘no’, the point is that tag questions favour one answer over others. You don’t have to give the preferred answer, but avoiding it takes more effort, and if you repeatedly withhold confirmation you may come across as evasive or obstructive.

There are other, less directive ways to ask for information. If the question were ‘have you seen my car keys?’—grammatically a yes/no question rather than a tag question—it would still be ‘leading’ in the legal sense, because it presupposes that there are some car keys which the addressee either has or hasn’t seen. A non-leading question would be something like ‘what did you see?’ (not very likely in the lost car keys scenario, but a reasonable thing to ask someone who claims they witnessed a crime.) But ‘have you seen my car keys’ and ‘did you notice any abrasions’ are not directive leading questions, because the linguistic form does not imply that one answer is preferable to the other.

Last year, the forensic psychologist Jacqueline Wheatcroft called for directive leading questions like ‘you’ve seen my car keys, haven’t you?’ to be banned in court proceedings. She expressed particular concern about their use in rape and sexual assault trials. These cases—if they get to court at all—often turn on which of two competing accounts the jury believes. In that situation the main prosecution witness will be the complainant, and it’s likely that the defence’s cross-examination will focus on trying to discredit her account. Directive leading questions are commonly employed for that purpose, and this can make testifying in court even more traumatic for victims.

As an example Wheatcroft cites the case of Frances Andrade, who committed suicide in 2013 after giving evidence against her former teacher Michael Brewer at his trial in Manchester (he was subsequently found guilty of indecently assaulting her, and sentenced to six years in prison). One of the questions put to Ms Andrade during cross-examination was: ‘utter fantasy, is it not?’ She was repeatedly presented as a liar and a fantasist, an experience which she described to several people as feeling like another assault.

The standard response to this kind of concern is that yes, trials can be horrible for victims, but people accused of serious crimes are entitled to a defence: robust questioning is necessary to test the strength of the case against them. So it’s interesting that Jacqueline Wheatcroft’s argument against directive leading questions isn’t just about their negative effect on the victim. Her research suggests that directive leading questions can undermine the larger aim of delivering justice, because they make it more likely that people will give factually inaccurate answers.

Wheatcroft and her colleague Sarah Wood conducted a study in which 80 subjects watched a four-minute video clip, and then answered a series of questions (orally, to simulate courtroom conditions) about the events they had seen in the video (it showed a reconstruction of a real crime, where a man followed a young woman home and then entered her house). All the questions were of the ‘leading’ type, and required a simple yes or no answer, but the subjects were split into two groups, with one group responding to non-directive questions like ‘was the street called Willow Street?’ and the other to directive leading questions like ‘the street was called Willow Street, wasn’t it?’

The study found that the non-directive questions elicited a higher percentage of accurate answers. Although the experimental setting was presumably less stressful than an actual cross-examination in court, the subjects were still susceptible to the pressure a directive question exerts to accept its embedded presuppositions, even if they misrepresent reality.

Some directive questions are especially confusing because they embed more than one potentially disputable presupposition. An example in my ‘car keys’ drama is ‘you’re telling this household you didn’t notice the car keys on the sideboard?’ This (a) presupposes that the car keys were on the sideboard (rather than somewhere else) and (b) asserts that the addressee, B, must have noticed them. While B debates which of these propositions to challenge, she becomes noticeably hesitant, allowing A to jump in with an interpretation of her hesitancy as a sign that she isn’t being honest.

Most people don’t realise that the form of a question can affect their ability to give an accurate answer. Wheatcroft and Wood asked their research subjects to rate their confidence in each answer they gave on a scale from ‘not at all confident’ to ‘absolutely certain’. On this measure there was very little difference between the non-directive and directive questions, although objectively the directive questions elicited significantly more inaccurate answers.

One way to address this issue is through witness preparation: explaining to witnesses before a trial what kinds of questions they are likely to face, providing concrete examples and possibly using role-play to give a witness practice in responding. Wheatcroft and Wood’s study tested the usefulness of a number of witness preparation strategies. They split both their participant-groups into four subgroups: one was a control group, receiving no special preparation, while the others were prepared in different degrees of detail. One group was warned in general terms that the experimenters might use leading questions, another was presented with examples of what to look out for, and a third was told they could ask for questions to be repeated or rephrased.

Though one of these strategies (giving examples) appeared to work better than the others, its effect was still quite limited: all groups remained more likely to give factually wrong answers if the form of a question was directive. As the researchers point out, that isn’t necessarily an argument against witness preparation, which may help witnesses in other ways (by making them feel less anxious, for example). But preparation does not solve the problem of inaccurate testimony. As the researchers sum up their conclusions:

Where directive leading questions are incorporated into cross-examination procedure… a witness’s overall accuracy will be reduced regardless of the type of preparation the witness receives.

This study challenges the belief that ‘robust’ questioning is justified by the need to test the evidence rigorously. There’s nothing rigorous about questioning people in a way that confuses them and prompts them to make mistakes. But if we’re interested in the specific issues that arise in sexual assault trials, it seems clear that we can’t just focus on the linguistic form of the questions put to complainants. Challenging the assumptions of a particular question isn’t easy; but what’s even harder is challenging the more general assumption that women are ‘liars and fantasists’.

It’s because of that general assumption that complainants are routinely faced with questions like the one put to Frances Andrade—‘utter fantasy, is it not?’ Rephrasing that as a non-directive question (like ‘is this a fantasy?’) would make very little difference. However it’s formulated, it’s not in any meaningful sense a test of the witness’s honesty and reliability. It’s a rhetorical device for suggesting to the jury that the witness is lying, and it exploits the widespread belief that false accusations of rape are more common than rape itself.

The letter to Brock Turner includes a long list of the questions the writer was asked by Turner’s lawyer:

How old are you? How much do you weigh? What did you eat that day? Well what did you have for dinner? Who made dinner? Did you drink with dinner? No, not even water? When did you drink? How much did you drink? What container did you drink out of? Who gave you the drink? How much do you usually drink? Who dropped you off at this party? At what time? But where exactly? What were you wearing? Why were you going to this party? What’d you do when you got there? Are you sure you did that? But what time did you do that? What does this text mean? Who were you texting? When did you urinate? Where did you urinate? With whom did you urinate outside? Was your phone on silent when your sister called? Do you remember silencing it? Really because on page 53 I’d like to point out that you said it was set to ring. Did you drink in college? You said you were a party animal? How many times did you black out? Did you party at frats? Are you serious with your boyfriend? Are you sexually active with him? When did you start dating? Would you ever cheat? Do you have a history of cheating? What do you mean when you said you wanted to reward him? Do you remember what time you woke up? Were you wearing your cardigan? What color was your cardigan?

Grammatically speaking, these questions are a mixed bunch, and none of them are unequivocally directive. But that doesn’t mean they’re unproblematic. As the letter-writer herself commented, the lawyer’s goal in asking them was to discredit her by any means necessary:

I was pummeled with narrowed, pointed questions that dissected my personal life, love life, past life, family life, inane questions, accumulating trivial details to try and find an excuse for this guy who had me half naked before even bothering to ask for my name.

What motivates defence lawyers to ask questions like these is their understanding that we as a society are inclined to make excuses for men like Brock Turner, and conversely to blame women for provoking or deserving what is done to them. If that were not the case, questions like ‘how much do you usually drink’ and ‘are you sexually active’ (let alone ‘when did you urinate’ and ‘what color was your cardigan’) would serve no purpose.

So, while I support Jacqueline Wheatcroft’s call to ban questions whose form confuses witnesses and prompts inaccurate answers, I also support the JURIES campaign, which calls for jurors in sexual violence cases to be briefed with factual information designed to counter the myths and stereotypes we’ve all been fed throughout our lives. Our justice system is adversarial; but if its aim is to deliver justice, cases must be won by marshalling evidence, not exploiting prejudice.

The pronominal is political

‘Pronouns’, announced a writer on Mashable last year, ‘are a big deal—and rightfully so’. The writer wasn’t talking about pronouns in general, but specifically about English third person singular personal pronouns. And her point was even more specifically about the central role these pronouns play in the contemporary politics of gender identity. But today’s trans and genderqueer activists are not the first people to make pronouns a political issue. If we want to understand the present state of play, it’s useful to know something about the pronoun politics of the past.

**********

Third person singular personal pronouns have been a big deal for English-speaking feminists since the earliest organized campaigns for women’s legal and civil rights. In the 18th century, prescriptive grammarians had decreed that the masculine was ‘the worthier gender’, and that ‘he’ should be used in generic references to mixed-sex categories (‘when a child goes to school, he…’). The principle that ‘the masculine imports the feminine’ was written into British legislation by the 1850 Interpretation Act, and the same formula was subsequently adopted by many other institutions around the English-speaking world. In practice, though, ‘he’ did not always include ‘she’. When anti-feminists wanted to stop women from voting, running for office or entering the legal profession, it was not uncommon for them to argue that the law referred to voters or candidates or lawyers as ‘he’, and so rendered women ineligible.

What Wendy Martyna dubbed ‘he-man language’ was also an issue for feminists of the second wave. By the end of the 1960s generic masculine pronouns were no longer being used to deny women basic civil rights, but they were seen as part of the ideological apparatus which naturalized the treatment of men as the default humans, while women remained ‘the (second) sex’. Generic ‘he’ was not the only target of feminist campaigns against sexist language, but both the campaigners and their opponents accorded it particular symbolic significance. In 1971, a TV Guide writer complained about ‘women’s lib red-hots’ with their ‘nutty pronouns’.

The linguist Robin Lakoff thought this focus on pronouns was misguided. In her 1975 book Language and Woman’s Place, she argued that feminists should concentrate on other targets, because ‘an attempt to change pronominal usage will be futile’.

Certain aspects of language are available to the native speaker’s conscious analysis, and others are too common, too thoroughly mixed throughout the language, for the speaker to be aware each time he [sic] uses them. It is realistic to hope to change only those linguistic uses of which speakers themselves can be made aware, as they use them. One chooses, in speaking or writing, more or less consciously and purposefully among nouns, adjectives and verbs; one does not choose among pronouns in the same way.

Whereas nouns, adjectives and verbs are ‘open’ word classes—they contain a large number of items, and it’s always possible to add new ones—pronouns, like articles and prepositions, are a ‘closed’ class, containing a finite set of items which alternate in predictable ways. They aren’t what high school teachers call ‘vocabulary words’, they’re words with essentially grammatical functions. That’s why, as Lakoff says, they don’t prompt the same ‘conscious and purposeful’ deliberation as nouns, adjectives and verbs. A native English-speaker might ponder whether the adjective she wants is, say, ‘enormous’ or ‘gigantic’, but she won’t need to think about whether the article she wants is ‘a’ or ‘the’. Asking people to change their pronoun usage is asking them to restructure part of their internalized grammatical system. And Lakoff didn’t think that was a realistic demand.

She later came to believe that she had been unduly pessimistic. In an annotated edition of Language and Woman’s Place, published in 2004, roughly three decades after the original, she wrote:

Today, the extant choices (like pluralization, passivization, ‘he or she’) are the norm: writers who choose the ‘neutral’ ‘he’ are the ones who have explaining to do. …We are apparently more flexible, and more well-intentioned, than I believed back then.

My own view is somewhere in between. I agree with the later Lakoff that consciously modifying your grammar is not impossible if the motivation is there, but I also think the earlier Lakoff was right to point out that there are limits. In fact, some evidence suggests that the system has been more resistant to change than her later comments imply.

The language historian Anne Curzan used COHA, a historical corpus of American English, to investigate the effect of non-sexist language campaigns on pronoun use in the late 20th century. She found that the use of ‘he or she’, rather than just ‘he’, increased sharply during the 1970s and continued to rise through the 1980s and early 1990s. But by the end of the century it had begun to decline again. As I’ve noted elsewhere, virtually all the university students I teach—the majority of them born in the 1990s—use the generic masculine unselfconsciously in their writing; they don’t seem to feel they have any ‘explaining to do’.

Even at its peak, the shift to ‘he or she’ was uneven. In the COHA data it was most pronounced in academic writing, and far less evident in writing for mass audiences, or in speech. But in those contexts there was another option: so-called singular—or as I’ll call it from now on, ‘epicene’—‘they’ (in relation to language, ‘epicene’ describes a form that refers to both sexes).

When the linguist Laura Paterson looked at third-person generic references in a sample of British newspapers, she found that the balance was roughly 56% ‘they’ to 44% ‘he’. But this isn’t most plausibly explained as the result of people changing their habits because of feminist objections to generic ‘he’. Though ‘they’ was stigmatised as ‘ungrammatical’ (and therefore avoided in the most formal writing), it was common in speech, and in less formal written genres, long before pronouns were a feminist issue. In some contexts—for instance, after words like ‘any’, ‘each’ and ‘every’—it’s clearly favoured over ‘he’ and ‘she’, even when the reference is sex-specific, as in these examples from newspapers.

Like any girlfriend with someone they care about serving on the front line, her emotions were all over the place

For any woman, waiting to hear whether or not they have breast cancer is an extremely stressful and worrying time

These examples illustrate Lakoff’s original point that we don’t usually choose our pronouns consciously. ‘She’ would be considered more ‘correct’ in both these sentences, but our decisions aren’t based on the prescriptive rules we learnt at school, they’re based on principles we worked out during the process of first language acquisition. Laura Paterson examined interactions between young children and their adult caregivers to see what input children get while they’re acquiring the English personal pronoun system. She concluded that children analyse ‘they’ in much the same way they analyse ‘you’, as both a singular and a plural form.

The fact that it’s acquired naturally gives ‘they’ an advantage over all the other epicene pronouns that English-speakers have invented over the years. The linguist Dennis Baron maintains a list of these creations going back to the 19th century. He calls the list ‘The Word that Failed’, because none of the deliberately coined items that appear on it (for instance, ‘thon’, ‘ve’, ‘se’, ‘per’, ‘na’ and ‘heesh’) has ever been widely adopted.

In 2004, Robin Lakoff also remarked on the failure of invented epicenes:

The more florid suggestions have vanished, as I thought they would, without a trace. …I was right to suggest that neologisms like ‘ve’ and its colleagues would never survive.

But since she wrote those words, invented epicenes have returned, as part of a new campaign to change third person pronoun usage. The activists spearheading this new movement do not always acknowledge (and may not even know) the history of the forms they are trying to revive. Once again, though, I think it’s instructive—as well as interesting—to look back to some of the earlier feminist debates.

**********

It isn’t entirely fair to categorize all invented epicenes as ‘words that failed’, since in many cases they were not designed to be real-world competitors for ‘he’ and ‘she’. Rather they were literary devices, used in feminist speculative and utopian fiction. ‘Na’, for instance, comes from June Arnold’s lesbian separatist novel The Cook and the Carpenter (1973). ‘Per’ is the gender-neutral pronoun used in Mattapoisett, one of the alternative future societies visited by the protagonist of Marge Piercy’s Woman on the Edge of Time (1976). In both these texts (and many more like them), invented pronouns were used to challenge both conventional ways of using language and conventional ways of thinking about gender.

One speculative fiction writer who wasn’t so keen on this strategy was Ursula Le Guin. In her 1969 novel The Left Hand of Darkness, Le Guin chose to refer to the ambisexual inhabitants of the planet Gethen as ‘he’, on the basis that ‘he’ was generic as well as masculine. Later she was persuaded by the feminist argument that ‘he’ was not a true generic: in a 1985 screen adaptation of her novel she substituted ‘a’, and in 1995, in a 25th anniversary edition, she added a version of the opening chapter rewritten with the pronoun ‘e’. But she remained ambivalent about invented pronouns, fearing that the repeated use of unfamiliar forms would ‘drive the reader mad’.

That fear also led Le Guin to reject ‘they’. As she told the linguist Anna Livia in the mid-1990s (Livia quoted their correspondence in her book about literary experiments with gendered language, Pronoun Envy), ‘they’ might be familiar, but it was only natural-sounding when the reference was indefinite (e.g. ‘has anyone lost their phone?’); it was not a natural way to refer to a unique individual (e.g. ‘has Lee lost their phone?’).

But this is one aspect of pronoun usage that does appear to be changing. Facebook has permitted formulas like ‘Lee changed their profile picture’ for some years, and recently this use of ‘they’ has also been officially recognized by some older media institutions. At the end of 2015 the editor responsible for the Washington Post’s style guide noted that ‘they’ can be ‘useful in references to people who identify as neither male nor female’.

Which brings me back to the subject I began with—the place pronouns have come to occupy in the new politics of gender identity.

**********

Feminists objected to the use of ‘he’ to refer to people in general, which made women as a class invisible. The new politics of gender identity, by contrast, is concerned with the way pronouns are used in reference to specific individuals. As the writer I quoted earlier explains, pronouns are ‘a big deal’ because

They’re the definitive way we acknowledge and respect a person’s gender in everyday conversation.

The principle that underlies this assertion is that individuals have a right to be referred to with the pronouns which, in their own view, most appropriately reflect their gender identity. It should not be assumed that everyone is either ‘he’ or ‘she’: individuals who identify as trans, non-binary, agender or genderqueer may prefer an alternative, epicene form. ‘They’ is one of the available options, but sources which aim to document non-traditional pronoun use exhaustively, like this tumblr, list scores of other possibilities.

The acceptance of this principle has produced a new form of linguistic etiquette: announcing one’s ‘preferred pronouns’ and taking steps to ascertain the preferred pronouns of others. Some universities now invite students to register their pronouns: at Harvard around half the student body so far have availed themselves of this option (though only about 50 students out of 10,000 have specified a pronoun other than ‘he’ or ‘she’). And the New York City Human Rights Commission recently issued legal guidance which made clear that an employer or landlord who failed to use an employee or tenant’s preferred name, title and pronouns would be guilty of unlawful discrimination.

The use of preferred pronouns is often presented as a matter of basic courtesy, like using people’s actual names rather than just addressing everyone as ‘John’ or ‘Susan’. But this analogy points to a practical difficulty. If each individual is entitled to specify their own pronouns, pronouns in effect cease to be a closed class—a finite set of items which alternate in predictable ways—and become more like personal names, which have to be learnt individually. Even if the majority of non-traditional pronoun-users choose the same few forms (e.g. ‘ey’, ‘they’ and ‘ze’), it will still be necessary to memorize each person/pronoun pairing separately, because there is no rule we can use to predict an individual’s preference. That isn’t just a minor adjustment to the existing personal pronoun system. It’s a fundamental change in the way pronouns work.
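The contrast can be pictured in programming terms: a closed class is a small fixed set of alternatives, while per-individual preferences behave like an open-ended lookup table that has to be stored entry by entry. A minimal Python sketch (the names and pronoun preferences below are invented purely for illustration):

```python
# A closed class: a small, fixed set of alternatives selected by rule.
THIRD_PERSON = {"masculine": "he", "feminine": "she", "epicene": "they"}

# Per-individual preferences behave more like personal names: an
# open-ended mapping with no rule to predict any entry, so each
# person/pronoun pairing must be learnt (stored) separately.
preferred = {
    "Alex": "ze",    # hypothetical individuals and preferences
    "Sam": "they",
    "Jo": "ey",
}

def pronoun_for(person: str) -> str:
    """Look up a person's preferred pronoun; with no entry, fall back
    on the general-purpose epicene form."""
    return preferred.get(person, "they")

print(pronoun_for("Alex"))   # prints: ze
print(pronoun_for("Robin"))  # prints: they (no stored preference)
```

The point of the sketch is simply that the second structure grows without limit and cannot be derived from a rule, which is what makes it a different kind of memory burden from the first.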

Just to be clear, I’m not suggesting that English can’t accommodate a non-binary third-person singular pronoun. We know it can, because it already has one: ‘they’. The current extension of ‘they’ from indefinite/generic to specific reference is a logical development which has every chance of becoming embedded in mainstream usage, because it isn’t a huge leap from what most English-speakers already do. But the preferred pronoun principle, which requires speakers to use whatever forms a given individual specifies, is a different matter: it’s where the reservations expressed by Lakoff in 1975 become difficult to dismiss. Asking people to change their pronoun usage in a way that makes such significant demands on memory and attention will in most cases be asking too much. In other words, there’s a trade-off: if you want non-binary pronouns to become mainstream, you can’t also insist on the sovereignty of individual choice.

I’m aware that some people may find this view offensive, a denial of what they take to be the absolute right of every individual to define their own identity and have it recognized by others. But at the risk of offending those people further, I want to ask: is it actually true that pronouns are, or have to be, ‘the definitive way we acknowledge and respect a person’s gender’?

**********

It’s easy to see why monolingual English-speakers might think so. In modern English, third-person singular pronouns stand out as a rare case in which gender-marking is non-optional. But English is unusual in this respect. For speakers of most other languages, pronouns do not play a ‘definitive’ role in indexing (pointing to) a person’s gender.

In a large percentage of the world’s languages, pronouns play no role in gendering people at all, because there are no gendered pronouns equivalent to English ‘he’ and ‘she’. Rather there is a single epicene third-person pronoun referring to all humans (or sometimes, animate beings). Languages in this category include Finnish, Hungarian, Malay, spoken Mandarin, Persian, Swahili, Turkish and Yoruba. And they make clear that the social recognition of gender does not depend on the use of gender-specific pronouns. The absence of gendered pronouns has never prevented Finnish or Turkish speakers from acknowledging the existence of men and women, or from expressing identities as men and women. And there is nothing to prevent them from expressing other, less traditional gender identities.

The world’s languages also include a fairly large number that mark gender much more extensively than English does. In these languages, pronouns are not ‘the definitive way’ in which a person’s gender is acknowledged: a much more pervasive form of gender-marking is through inflections on nouns, adjectives, articles and in some cases verbs. Languages in this category include the Romance group (French, Spanish, Catalan, Portuguese, Italian, etc.), German, Slavic languages like Polish and Russian, and Semitic languages like Arabic and Hebrew.

Speakers of these languages can’t escape the gender binary just by adopting novel pronouns. In some of them it’s not too difficult to come up with an extra set of gender inflections (though that doesn’t mean it’s easy to get people to use them, since once again, this involves restructuring a system which native speakers use without conscious reflection). In Spanish, for instance, where the standard masculine and feminine inflectional endings are –o and –a, non-binary speakers have introduced parallel forms ending in –e. (There are also forms with –x, @ and other symbols, but since these are either unpronounceable or of no obvious pronunciation, they are more useful in writing than in conversation.) But in other cases the adjustments required are more complicated. In Slavic languages, for instance, past tense verbs are gender-marked, and nouns are marked for case as well as gender, which means you need several alternative word-endings rather than just one.

Another language where gender-marking is pervasive is Hebrew, and in this case there has been some research on the linguistic practices of genderqueer speakers. In interviews with the Israeli researcher Orit Bershtling, six of these speakers described their strategies for ‘queering Hebrew’. One of these was alternating between masculine and feminine forms for the same person in the same sentence (e.g., using a masculine subject noun with a feminine verb). Another was gender ‘doubling’, putting both masculine and feminine endings on the same word (like ‘transimot’, meaning ‘trans people’, where the word ‘trans’ is followed by two plural endings, the masculine –im and the feminine –ot). Alternatively, speakers could select forms which allowed them to avoid the issue. Sometimes, for instance, they would speak about their present activities in the future tense, because Hebrew first-person future tense forms, unlike their present tense equivalents, do not have to be marked for gender.

Bershtling was an outsider to the community she studied, and by her own account she found it extremely difficult to use the ‘noncustomary sex-marked forms’ her interviewees preferred. Some of their comments suggested that they did not find it easy themselves. They reported that it was hard for them to sustain a long conversation without making ‘errors’ (i.e., reverting to standard Hebrew gender-marking). They also acknowledged that certain strategies, like using the future tense to describe actions in the present, could cause the message to come out ‘a bit garbled’. Bershtling concluded that queering Hebrew

demands concentration and juggling, restricts self-expression and so produces silence. This silence stems from the impossible intersection between two linguistic functions: to express identity and to communicate with others.

Linguists don’t usually think of this as an ‘impossible intersection’. Language has always had the two functions Bershtling mentions, and people have generally found a workable balance between them. What’s unusual about the speakers in this study is the extent of their commitment to identity-expression, apparently at the expense of communication. But perhaps the two functions aren’t so much ‘intersecting’ as ‘intertwined’. The politics of gender identity is, in the political theorist Nancy Fraser’s terms, a ‘politics of recognition’: the central demand is that others should ‘acknowledge and respect [an individual’s] gender’. Using unconventional linguistic forms to express identity is, at the same time, a way of communicating your demand for recognition to other people. At least, that’s true if you speak Hebrew. If you speak English, the situation is rather different.

Unlike Hebrew, English requires gender-marking only on third person forms which do not express the identity of the speaker (people don’t generally talk about themselves in the third person). So, when an English-speaker says ‘my pronouns are X and Y’ or ‘I use the pronoun Z’, they aren’t really describing what they themselves do, they’re describing what they want other people to do. Which might sound a bit high-handed—until you ask yourself another question about the way pronouns work. How often, in face-to-face spoken interaction, do we use third person pronouns to refer to other participants?

I haven’t seen any proper research on this question, but recently I did try a small experiment, tracking the use of pronouns and personal names in a seminar group consisting of ten students and me. Overall, I found the most frequently-used pronouns were first person ‘I/we’ and second person ‘you’. As the person leading the discussion, I addressed individual students much more often than I referred to them. When I did refer to someone in the third person, I invariably used their name rather than a pronoun (e.g. ‘could we go back to what Ellie said?’), and then switched to ‘you’. I only used third-person pronouns when referring either to one of the academics whose research we were discussing, or to class-members who weren’t actually there (e.g. ‘we’re just waiting for Tom. Does anyone know if he’s coming?’).

I also analysed a small sample of extended, multi-contributor Facebook threads to see if there’s a similar pattern when interactions are conducted in writing rather than speech. I found that ‘you’ was much less common on Facebook, and personal names were used in a slightly different way (less to refer back to previous contributions and more to tag a particular person as the main addressee for a particular comment). But once again, all the third person pronouns I found referred to individuals who weren’t directly involved in the interaction. They included some journalists, a couple of dead philosophers, several former Eurovision song contest winners, one dog and two cats.
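A rough tally of the kind just described can be automated in a few lines of Python. The snippet of ‘seminar talk’ below is invented, and the pronoun sets are deliberately simplified (possessives are omitted and contractions are handled crudely), so this is a sketch of the method rather than a serious analysis:

```python
import re
from collections import Counter

# Simplified pronoun sets by grammatical person; contractions like
# "we're" are split into "we" + "re" so the pronoun still counts
PRONOUNS = {
    "first": {"i", "me", "we", "us"},
    "second": {"you"},
    "third": {"he", "him", "she", "her", "they", "them"},
}

def tally_pronouns(transcript: str) -> Counter:
    """Count pronoun tokens in a transcript, grouped by person."""
    tokens = re.findall(r"[a-z]+", transcript.lower())
    counts = Counter()
    for token in tokens:
        for person, forms in PRONOUNS.items():
            if token in forms:
                counts[person] += 1
    return counts

# An invented snippet standing in for a real seminar transcript
sample = ("Could we go back to what Ellie said? You made a good point. "
          "We're just waiting for Tom. Does anyone know if he's coming?")
print(tally_pronouns(sample))  # Counter({'first': 2, 'second': 1, 'third': 1})
```

Even this toy example reproduces the pattern described above: first and second person forms dominate, and the only third-person pronoun refers to someone absent from the conversation.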

I don’t have enough evidence to know if this is typical of group interaction generally. But if it is, in fact, unusual to make third-person references to people who are part of the same conversation, that might suggest that the actual use of preferred pronouns is not a frequent-enough occurrence to function as ‘the definitive way we acknowledge and respect a person’s gender’. To me it seems possible that what actually does this job is the act of announcing what your pronouns are, and (in face-to-face contexts) having that announcement acknowledged by others. Like other social rituals in which people introduce themselves or greet one another, this isn’t just about exchanging information (in this case, about what pronouns people prefer and by extension how they define their gender identities). It’s a symbolic affirmation of the parties’ intention to conduct their subsequent dealings in good faith and with mutual respect.

If it’s the display of good faith that really matters, perhaps we don’t need to worry so much about the practical problems I mentioned earlier. And if we put the practicalities to one side, we can turn our attention to the politics. When we argue about pronouns, what, at a deeper level, is the argument really about?

**********

In the 21st century, the obvious answer to that question is ‘identity’. But there is usually more at stake in arguments about pronouns than just identity, especially if what you mean by that is the identities of individuals. I would say that the way personal pronouns are used both reflects, and gives concrete expression to, a community’s beliefs about personhood: what defines a person, what kinds or categories of people there are, and what status different kinds of people have in relation to one another. All of which, especially the last, are political questions. The problem first- and second-wave feminists had with generic masculine pronouns was not about gender in the sense of identity, but about gender as an axis of power: the question was why ‘he’ outranked and subsumed ‘she’, and it mattered because that usage mirrored the actual social fact of women’s legal and political non-personhood.

Speculative fiction is an arena where writers can play with ideas about the politics of personhood, inviting us to reflect critically on our everyday assumptions by imagining alternative worlds. Feminists have often made gender the focus of these thought-experiments, asking questions like: what if women were the dominant sex-class? What if there were only one gender? What if there were no gender at all? In most feminist utopias gender is less rather than more significant than it is in the non-fictional world: the invented pronouns are epicene forms like ‘na’ and ‘per’, which simply mark their referents as people.

Contemporary gender identity politics can be seen as doing something comparable, though the main arena for its thought-experiments is not fiction, but rather the online communities and social networks created by digital technology. And the ideas it explores are very different from the older feminist ones. Rather than imagining a world without gender, or one where gender is a less important aspect of personhood, what this kind of politics imagines is a world where gender is all-important and comes in infinite varieties. The pronouns are individualized rather than one-size-fits-all.

The conflict between these approaches to gender is a recurring theme in a recent work of science fiction, Ann Leckie’s novel Ancillary Justice (2013). And Leckie, like many of her predecessors, uses an unconventional pronoun-choice as a defamiliarising device. In this case, though, the unconventional pronoun is neither invented nor (for English-speakers) epicene. Rather, the novel’s narrator and main protagonist, Breq, uses ‘she’ as her default, neutral pronoun:

She was probably male, to judge from the angular mazelike patterns quilting her shirt. I wasn’t entirely certain. It wouldn’t have mattered, if I had been in Radch space. Radchaai don’t care much about gender, and the language they speak—my own first language—doesn’t mark gender in any way. The language we were speaking now did, and I could make trouble for myself if I used the wrong forms. It didn’t help that the cues meant to distinguish gender changed from place to place, sometimes radically, and rarely made much sense to me.

To Breq, the gender cues that other people treat as obvious are like an impenetrable secret code: where others see meaningful differences, she sees only similarities.

Males and females dressed, spoke, acted indistinguishably. And yet no one I’d met had ever hesitated, or guessed wrong. And they had invariably been offended when I did hesitate or guess wrong.

As a feminist of a certain kind (and vintage), I feel I have a lot in common with Breq. Like her, I understand gender as a set of externally-imposed and often arbitrary social norms. I don’t subscribe to the alternative model in which gender is an innate, essential and defining quality of individual persons.

That doesn’t mean I’m unwilling to use the pronouns an individual prefers. But I will do it as a matter of courtesy rather than conviction; and if I fail to do it, I’ll consider that an oversight rather than a crime. Pronouns may be a big deal, but they’re not a matter of life and death.

To gender or not to gender? (Thoughts prompted by the death of Zaha Hadid)

Last week, after Zaha Hadid’s death was announced, someone I know posted on Facebook: ‘It’s annoying that the coverage keeps referring to her as “the world’s most prominent female architect”. Why not “one of the world’s most prominent architects?”’

Most people who responded agreed that it was sexist to put Hadid into a subcategory of ‘female architects’ rather than acknowledging her status as one of the leading figures in contemporary architecture, period. But one person dissented, arguing that since it’s still harder for women to succeed in most professions, drawing attention to Hadid’s sex underlined her achievements rather than detracting from them. This commenter also felt that highlighting women’s successes explicitly was important, because it helped to inspire other women and girls.

‘To gender or not to gender’ is a question that has also divided feminist linguists. Robin Lakoff, author of the influential early text Language and Woman’s Place, is among those who have argued that using gender-marked language has a profoundly negative effect. In 2007 she explained to William Safire (who wrote the New York Times’s language column until his death in 2009),

The use of either woman or female with terms such as ‘president, speaker, doctor, professor’ suggests that a woman holding that position is marked — in some way unnatural, and that it is natural for men to hold it (so we never say ‘male doctor,’ still less ‘man doctor’).

She went on:

Every time we say ‘woman president’, we reinforce the view that only a man can be commander in chief, symbolize the U.S. (which is metonymically Uncle Sam and not Aunt Samantha, after all), and make it harder to conceive of, and hence vote for, a woman in that role.

What Safire had actually asked her about was an old grammatical shibboleth. Pedants insist that referring to someone as a ‘woman architect/ doctor/professor’ is ungrammatical, because a noun can only be premodified by an adjective, not another noun. In their view, therefore, it should be ‘female architect/doctor/professor’. This, incidentally, is bullshit. Countless everyday English expressions are constructed on the ‘noun + noun’ model: for instance, ‘apple tree’, ‘dog collar’, ‘garden shed’ and ‘wedding ring’. Adjectives can fill the same slot, but there’s no law reserving it for their exclusive use. In any case, Lakoff derailed the ‘woman v female’ debate by declaring that the right answer was ‘neither’. Women should just be called by the same word we use for men.

But the pedants obviously didn’t get that memo: last year, when Hillary Clinton announced the start of her campaign, there was a new outbreak of handwringing about whether she should be referred to (in the event she’s elected) as a ‘woman president’ or a ‘female president’. On one side we had the usual objection that ‘woman’ is ungrammatical, while on the other we had people saying that ‘female’ was disrespectful—more appropriate for describing livestock than the leader of the free world.

What no one seemed to be asking was Lakoff’s question, why the president’s sex needs to be specified at all. True, if Clinton wins in November there will be a ton of ‘America elects its first ____ president’ stories, and someone will have to decide what to fill the blank with. But after that, we can surely just refer to her as ‘the President’. It’s not as if people are going to confuse her with all the other serving presidents of the US. Or even with her husband, a former US president. We’re talking about a nation that elected two presidents named George Bush: they ought to be able to manage without constant reminders that Hillary is the female President Clinton.

But what about the idea that there is value in drawing attention to the achievements of women as women? Some feminist linguists do favour using gender-marked language to make women’s presence in the world more visible. Even if you accept Lakoff’s argument that referring to ‘a woman X’ rather than just ‘an X’ reinforces the perception that ‘Xs’ are prototypically men, there are reasons to doubt whether using unmarked terms does much to shift that perception. Research suggests that gender-neutral occupational labels are still typically interpreted as referring to men where the role they denote is culturally stereotyped as male (e.g. ‘lorry driver’ or ‘firefighter’). Replacing gender-specific terms with generic/inclusive ones seems not to override people’s real-world understanding of the relationship between gender and occupational status.

My own view (as usual) is that there isn’t a single, simple linguistic solution to this problem. It’s a decision I think you have to make case by case, because so much depends on the specifics of the context. And the effect will also depend on how any gender-marking is done, using what specific label. For instance, there are contexts in which I would refer to someone as ‘a woman writer’ (as well as contexts where I would simply call them ‘a writer’). But there are no contexts in which I would use the term ‘authoress’, because that word does not just convey that the writer is a woman, it also implies that her work is trivial and inferior.

The baggage that words pick up over their history of use is relevant to the great ‘woman v. female’ debate. In his column on the subject, William Safire expressed surprise and disappointment that feminists now seemed to prefer ‘woman’ to ‘female’ and ‘gender’ to ‘sex’. He put this down to a growing cultural squeamishness, describing those who have ‘turned against’ biological terms as ‘faint-hearted sociological euphemists’. Readers who know more about feminist theory than Safire did will be aware that the ‘sex/gender’ question is complicated. But in the case of ‘woman/female’ there are more straightforward reasons for preferring ‘woman’–and they have little to do with squeamishness about biology.

‘Female’ is not just interchangeable with ‘woman’, as you immediately realize when you look at a corpus (a large collection of authentic examples). My own quick-and-dirty search of the 100 million-word British National Corpus turned up a crop of ‘female’ examples like these:

1. My poor Clemence was as helpless a female as you’d find in a long day’s march
2. ‘Stupid, crazy female’, was all he said as he set about bandaging it.
3. A call yesterday involved giving the chatty female at the other end one’s address.

These are typical examples of the use of ‘female’ as a noun, and they all involve a male speaker making a disparaging judgment on the individual he’s referring to. The judgments would remain disparaging if you substituted ‘woman’ for ‘female’, but to my mind they would be less unequivocally contemptuous. Whereas ‘woman’ can feature in positive as well as negative judgments, it’s hard to think of any context in which the noun ‘female’ is used to praise its referent: no one would say, for instance, ‘my late grandmother was an absolutely marvellous female’.
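A quick-and-dirty corpus search of this kind amounts to generating keyword-in-context (concordance) lines: each hit shown with a few words of surrounding context. Here is a minimal sketch in Python; the two-sentence mini-‘corpus’ is invented for illustration, not drawn from the BNC, and real corpus tools offer far more (part-of-speech filtering, collocation statistics, and so on):

```python
import re

def concordance(corpus: list[str], term: str, window: int = 3) -> list[str]:
    """Return keyword-in-context lines for `term`: each match bracketed,
    with up to `window` words of context on either side."""
    hits = []
    for sentence in corpus:
        # Crude tokenizer: word characters plus internal apostrophes
        words = re.findall(r"\w+'?\w*", sentence)
        for i, w in enumerate(words):
            if w.lower() == term:
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                hits.append(f"{left} [{w}] {right}")
    return hits

# Invented sentences standing in for real corpus examples
mini_corpus = [
    "Stupid, crazy female, was all he said.",
    "The chatty female at the other end asked for one's address.",
]
for line in concordance(mini_corpus, "female"):
    print(line)
# Stupid crazy [female] was all he
# The chatty [female] at the other
```

Scanning the contexts to the left and right of the node word is exactly how the evaluative patterning described above (which words keep company with ‘female’ as a noun) becomes visible.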

Does the contempt conveyed by the noun ‘female’ have anything to do with its being, as Safire suggests, more biological than sociological? In the examples I’ve just quoted there isn’t any explicit reference to biology, but in some cases the term does seem to have been chosen to foreground the issue of biological sex difference, and the motive for this may be overtly anti-feminist.

Here, for instance, is what a Texas businesswoman named Cheryl Rios posted on Facebook after Hillary Clinton announced that she was running for president:

A female shouldn’t be president. …with the hormones we have there is no way we should be able to start a war. Yes I run my own business and I love it and I am great at it BUT that is not the same as being the president, that should be left to a man, a good, strong, honorable man.

When challenged she stood by her comment, saying: ‘The president of the United States, to me, should be a man, and not a female’.

What’s striking here is the way Rios uses the non-parallel terms ‘a female’ and ‘a man’ (rather than contrasting ‘a female’ with ‘a male’, or ‘a woman’ with ‘a man’). The consistency with which she does it suggests this is no accident. It may not be a fully conscious choice, but her wording mirrors her general proposition that women, unlike men, are in thrall to their biology, and consequently unfit to hold the highest office.

There’s nothing ‘faint-hearted’ about objecting to the label ‘female’ when it’s used in this way and for this purpose. But that doesn’t mean we have to object to all uses of it for all purposes: as always with language, it’s horses for courses. For instance, it doesn’t bother me when I read in a scientific paper that the researchers ‘recruited a balanced sample of male and female subjects’. In a discussion of sex I’d be more likely to refer to ‘the female orgasm’ than ‘the woman’s orgasm’. Conversely I’d be more likely to say ‘women’s underwear’ than ‘female underwear’ (and don’t even get me started on ‘Female Toilet’: when it comes to that phrase I am, unashamedly, a pedant. Sex is a characteristic of toilet users, not toilets themselves.)

But this discussion of the merits of competing terms does not resolve the larger question of whether it’s desirable to use any kind of gender-marking in references to women like Hillary Clinton and Zaha Hadid. Hadid herself had a view on this (one which, interestingly, seems to have changed over time). She’s been quoted as saying:

I used to not like being called a ‘woman architect’: I’m an architect, not just a woman architect. Guys used to tap me on the head and say, ‘You are okay for a girl.’ But I see the incredible amount of need from other women for reassurance that it could be done, so I don’t mind that at all.

It’s not hard to understand why successful women in heavily male-dominated fields so often say, ‘I don’t want to be judged as a woman, I want to be judged on my merits as an astronaut/conductor/ mathematician’. But the reality is that women can’t avoid being judged as women; whatever we say or do, we can’t make the world treat our sex as an irrelevance or a minor detail. And maybe we shouldn’t want it to be treated in that way. Another thing Zaha Hadid said on this subject was:

People ask, ‘what’s it like to be a woman architect?’ I say ‘I don’t know, I’ve not been a man’.

As this answer implies, sex and gender shape every individual’s life-experience: the difference between men and women isn’t that men aren’t affected by their maleness, it’s only that they are rarely asked to ponder its effects. Women, by contrast, are endlessly required to explain how their femaleness influences everything they do.

If Hadid herself declined to play this game, others were happy to play it for her, both during her life and after her death. Here, for instance, is what Bust (an online magazine that bills itself as ‘a cheeky celebration of all things female’) had to say last week:

The world became a little less whimsical today with the loss of Zaha Hadid. The Queen of Curve, who was widely regarded as the most famous living female architect in the world, passed away today at the age of 65.

It’s hard to imagine that future obituaries of male ‘starchitects’ like Richard Rogers and Renzo Piano will use words like ‘whimsical’. I chose to mention these two because they designed (among other things) the somewhat whimsical Pompidou Centre in Paris–while Hadid designed (among other things) the not-so-whimsical Maggie’s Cancer Care Centre in Kirkcaldy. As you’ll see from the illustration, this example of her work demonstrates her skill with straight lines and sharp angles. Nevertheless, she’s ‘The Queen of Curve’. Oddly enough, when men design curved structures, like Norman Foster’s dome over the Reichstag in Berlin, that isn’t seized on as their unique signature, nor do people routinely compare the buildings to female body parts.

‘To gender or not to gender’ remains a tricky question. In language as in life, what we need is a middle way. Women should not be defined entirely by their sex; but nor should we have to disclaim it entirely to be given whatever credit our contributions to the world deserve.

Passive aggressive

In 2014, someone set up a Twitter account called ‘Name the Agent’ as part of a feminist campaign challenging the way the media reported violence against women. Specifically, the campaign criticized the use of the passive voice in news headlines like ‘Woman raped while walking her dog’. This headline fails to mention that a man committed the crime. It presents rape either as something that ‘just happens’ to women, or as something for which women are indirectly responsible–as if the woman was raped because she was walking her dog, and not because a man decided to rape her. The campaign called on the media to abandon the passive in favour of active-voice headlines like ‘Man rapes woman dog-walker’.

Complaints about the passive have a long history. Advice to avoid it has been around for the best part of a century: I imagine many people reading this were taught at school that it was ‘bad style’. Originally the reasons for this judgment had nothing to do with politics: commentators in the 1930s said that active sentences were ‘strong’ while passive sentences were ‘weak’. The connection with politics was made by George Orwell, whose 1946 essay ‘Politics and the English Language’ included ‘never use the passive where you can use the active’ on a list of rules for combatting the politically-motivated abuse of language. This helped to popularize the now-common idea that the passive isn’t just bad style, it’s a tool used by the powerful to conceal unpalatable truths and manipulate public opinion.

The feminist argument that passives are used to conceal men’s responsibility for violence against women belongs to this post-Orwellian tradition. But in this post I’m going to try to explain why I don’t think the argument is convincing–why it’s really not as simple as ‘active good, passive bad’.

Before I go on, let’s just run through some grammatical basics.

Below is a simple active sentence. It puts the agent—the doer of an action—in the grammatical subject position, which in English normally means before the verb.

A man attacked a woman

And here’s the passive voice equivalent:

A woman was attacked by a man

In the passive version the subject is ‘a woman’, the person affected by the action, while the agent, ‘a man’, has been relegated to a ‘by’ phrase after the verb. This ‘by’ phrase is optional. You can remove it and still end up with a grammatical sentence, like this:

A woman was attacked

This is a passive sentence with agent deletion: the attacker has disappeared, leaving the sentence to focus entirely on the woman and what happened to her. Agentless passives are common in news reports and headlines: ‘Woman raped while walking her dog’ is an example.

Agentless passives are also common in legal proceedings, and in that context the feminist argument has some force. Research has shown that men who are accused of sexual violence, and the lawyers who represent those men, very often make strategic use of what the linguist Susan Ehrlich calls ‘the grammar of non-agency’, including agentless passives. In her book Representing Rape, Ehrlich analyses a sexual assault trial in which the defence lawyer asks his client questions like

‘I take it the sweater was removed?’

It’s not hard to see what the lawyer hopes to achieve by choosing an agentless construction that doesn’t specify who removed the sweater. If the court thinks the complainant took off her own clothes, that will support–or at least, not contradict–the defence’s argument that she consented to sex.

As Ehrlich says, it’s only to be expected that defendants and their lawyers will use this strategy. It’s more surprising, and perhaps more worrying, that the same tendency to downplay men’s agency has been observed in the language used by judges. When the researcher Linda Coates and her colleagues analysed the language used in judgments on sexual assault cases in Western Canada, they found many examples of judges using agentless passives like this:

There was advantage taken of a situation that presented itself.

This statement was made in the judgment on a case where a ten-year-old girl had been sexually assaulted by a stranger in her home. The ‘situation’, in other words, was the presence of a child in her own bedroom, and it did not magically ‘present itself’, it was engineered by the defendant. A jury had found the defendant guilty, but the judge chose to minimize the seriousness of his offence by describing it in a way that implied he had no agency at all–as if he merely reacted, as anyone might, to the circumstances in which he (inexplicably) found himself.

The judge’s statement is an egregious example of ‘the grammar of non-agency’. But is the use of the passive the main problem here? I think we can see it isn’t if we recast the sentence in the active voice:

The defendant/Mr X took advantage of a situation that presented itself.

This reformulation names the agent, but it doesn’t solve the problem. The vague wording still glosses over what the defendant actually did, and the sentence still presents him as simply reacting to a situation that was not of his own making.

Naming the agent is not the same thing as holding him responsible for his actions. Conversely, not naming the agent doesn’t have to mean concealing or denying his actions.

We can see this if we go back to the newspaper headline ‘Woman raped while walking her dog’, which was criticized for failing to mention the key fact that the crime was committed by a man. It’s true that the headline doesn’t explicitly describe the perpetrator as a man. But it’s not true that the effect is to obscure his maleness from the reader. The word raped, which does appear in the headline, cues the reader to activate what psychologists call a ‘schema’—a sort of mental template for the kind of event the word is applied to. Part of that schema is the information that rapists are prototypically male. For many English-speakers rapists are male by definition, because the meaning of the word rape in their mental dictionary includes the idea of penetration with a penis. But even if they define the word more broadly, their schema will still incorporate the knowledge that rapists are almost always men. If the suspect in a rape case were female, you can be sure the report would say so, precisely because it would be so unusual.

In practice, therefore, the agent-naming headline ‘man rapes woman dog-walker’ communicates no more information than ‘woman raped while walking her dog’. The difference is only that the first version mentions the attacker’s sex explicitly while the second relies on the reader to infer it.

But if the two versions communicate the same information, why do headline writers so often favour the passive? If that’s not about excusing men and/or blaming women, what is it about?

The answer is, it’s about focus. When you choose between the active and the passive, you’re also choosing what to put in the grammatical subject position. In crude terms, you’re deciding what the sentence is about. And you don’t always want it to be about the agent. For instance, if a high-profile public figure is assassinated, the breaking news headline is more likely to be ‘President shot’ than ‘Gunman shoots president’. The story isn’t about the shooter: what makes it news is the identity of the victim.

In stories like ‘Woman raped while walking her dog’, the main news is simply that a rape has been committed. The report can’t say much about either the attacker or the victim: his identity is not yet known, while hers is legally protected. (That’s probably why the writer added the dog-walking detail—not to imply that the victim put herself at risk, but to enable readers to relate to her as an ordinary person engaged in an everyday activity.) In some circumstances the headline-writer might choose to focus on the attacker–for instance, if he’d been caught and arrested, or if the report concerned the latest attack by a serial offender. But if the attacker is just an unidentified, generic ‘man’, there’s no compelling reason to focus on him. It isn’t news to anyone that rape is committed by men.

So, I don’t think there’s a media conspiracy to deny men’s responsibility for violence by using passive-voice headlines. But as I’ve already pointed out, what actually gets communicated doesn’t depend exclusively on the intentions of the speaker or writer. It also depends on the inferences made by hearers or readers. In theory, a writer’s linguistic choices could affect readers’ interpretations even if that wasn’t the writer’s intention. Recognizing that possibility, a number of researchers have run experiments to investigate whether the grammatical framing of a report makes any difference to readers’ judgments of the case.

The basic procedure involves dividing a sample of research subjects into two groups, presenting one group with an account of sexual violence framed in the active and the other with a matched account in the passive, and then asking subjects to rate (a) the perpetrator’s degree of responsibility, (b) the victim’s degree of responsibility and (c) the degree of harm to the victim. Subjects may also be asked to complete a questionnaire about their attitudes to sexual violence, so researchers can see how their judgments relate to their pre-existing beliefs.

I’ll start with what you might call the good news. These studies suggest that we’re not dealing with a form of Orwellian thought control: readers who don’t already subscribe to rape myths are not susceptible to the influence of language. Their judgments are the same regardless of which report they’ve read. The grammar of a report only makes a difference to the judgments of people who have high RMA scores (RMA stands for ‘rape myth acceptance’. And before you ask, yes, gender does play a role here: men on average have higher RMA scores than women, so it’s mostly men who are susceptible.)

The next question is how grammar affects the perceptions of those subjects who are influenced by it. The answer isn’t clear cut: different studies have found different effects. The first group of researchers to do the experiment found what they’d predicted: subjects who read a passive-voice report judged perpetrators less responsible than those who read the active-voice version. But later studies found the opposite: subjects who were influenced by grammar judged perpetrators more responsible after reading a report in the passive.

This second pattern doesn’t fit with the theory that the passive downplays men’s agency and shields them from blame. To explain why it’s been found in some studies, we need to consider what else you can do with passive sentences.

One researcher who has thought about this is Tamar Holoshitz. She conducted one of the experiments which found that passive reports prompted higher ratings of perpetrator responsibility; she also analysed the language used by prosecutors in domestic violence cases, where she noticed that they often referred to the same act or event using both active and passive sentences. For instance:

The defendant gave her a single blow to the left eye

She was admitted [to hospital] after being hit in the eye, suffering from trauma and an orbital fracture

These two sentences are designed to do different things. The first directs attention to the perpetrator and describes what he did. The second directs attention to the victim and describes the consequences of the assault for her. The active sentence names the agent; the passive sentence names the harm.

Holoshitz argues that prosecutors use both these strategies to maximize their chances of getting a conviction. The first is necessary (to convict a defendant you have to show that he committed the crime he is on trial for), but prosecutors know that on its own it may not be sufficient. On any jury there are likely to be people who think violence against women is acceptable under some conditions (if it was ‘just a slap’, if she was ‘asking for it’, if he just lost control and lashed out without really meaning to, etc.). If you want jurors who think like this to return a guilty verdict, you need to address their belief that some degree of force is acceptable. Naming the agent doesn’t do that (they’re not disputing the claim that he punched her), but naming the harm–presenting an account that emphasizes the effect of his violence on the victim–gives you some chance of blocking the standard excuses (‘this wasn’t just a slap. He put her in hospital. You don’t break someone’s bones without meaning to hurt them’). Holoshitz thinks it’s this emphasis on harm that her experimental subjects were responding to when they attributed more responsibility to perpetrators after reading reports in the passive.

What all this boils down to is that passives can serve more than one purpose. The prosecutors in Holoshitz’s study used the passive strategically to highlight the effects of domestic violence on women; the defence lawyers in Susan Ehrlich’s research used it strategically to downplay the agency of their clients. They used the same grammatical construction, but in different ways to suit their different aims.

What matters for feminist purposes is the aims: we can criticize particular uses of the passive without suggesting it should never be used at all. If we do that, we won’t just catch the cases where it works against the interests of women, we’ll also catch the cases where it can work in women’s favour. Language is a resource; let’s not make it into a straitjacket.

Thanks to Tamar Holoshitz for allowing me to make use of her unpublished thesis ‘More than Words: Passive Voice Use in Courtroom Depictions of Violence Against Women’ (Harvard University, 2010).

Girls called Jack and boys named Sue

It’s official: the most popular British girls’ names of 2014 were Amelia, Olivia, Isla, Emily, Ava, Poppy, Isabella, Jessica, Lily and Sophie. For boys, the top ten names were Oliver, Jack, Harry, Jacob, Charlie, Thomas, George, Oscar, James and William.

The release of the annual lists earlier this week prompted the usual rash of media articles dissecting their significance. Preoccupations included the royal baby effect (‘George’ is on the rise, will ‘Charlotte’ trend next year?), the influence of celebrities (‘Harper’, the name of Victoria and David Beckham’s daughter, has entered the top 100), and of course, no report on British baby names would be complete without a paragraph on the position occupied by ‘Muhammad’.

But one thing that did not attract comment (it never does: it’s so taken for granted that it literally goes without saying) was the sharp gender differentiation to which the two lists bear witness. Of all the social attributes personal names may communicate information about–age, class, ethnicity, gender, religious faith–gender is the one that is communicated most consistently and most reliably. There is far more overlap between Black and white children’s names, and between the names given to children of different social classes, than there is between girls’ and boys’ names. In this year’s top 100 lists there was no overlap at all.

‘Androgynous’ names, which may be given to both boys and girls, do exist (current examples include ‘Cameron’ and ‘Tyler’), but they are marginal. A study which tracked their use in the US state of Illinois between 1916 and 1995 found that they never accounted for more than about 2% of all names. One reason for this was their instability: over time they tend to lose their androgynous quality. In the early 20th century ‘Dana’, ‘Marion’, ‘Stacy’ and ‘Tracy’ were all androgynous; but as they became more popular with the parents of daughters, they fell out of favour with the parents of sons. As a result, they have all become girls’ names. There are no examples of a name moving in the other direction, and this reflects the basic feminist insight that gender isn’t just a difference, it’s a hierarchy. As the researchers explain,

there are issues of contamination such that the advantaged have a greater incentive to avoid having their status confused with the disadvantaged. … There is more to be lost for the advantaged and more to be gained by the disadvantaged when customary markers disappear.

Which is why you’re a lot more likely to meet a girl called Jack than a boy named Sue.

Another headline finding from research on gender and English personal names is that girls’ names show more variation than boys’, and the most popular girls’ names change more rapidly. This too is an effect of the status differential. In the past, one reason for the conservatism of boys’ names was that many boys, and far fewer girls, were named after a relative: men were seen as the carriers of a family’s history, its given names as well as its surname. I don’t know if that’s as true today, but it’s still the case that girls’ names, like their clothes, are more likely than boys’ to be selected for their fashionable or decorative qualities.

The names that appear in this year’s top ten lists for girls and boys are differentiated by some of their linguistic characteristics. For starters, the male names tend to be shorter. Both lists contain five two-syllable names, but the boys’ list also includes three monosyllables, whereas the girls’ list does not include any. Three of the girls’ names have four syllables, whereas the boys max out at three.

Another difference some commentators have pointed out is that the girls’ names are more ‘vowelly’ whereas the boys’ names are more ‘consonanty’. In part that’s a consequence of the point just made about length. A syllable in English has to contain a vowel (or occasionally a consonant with some vowel-like qualities), which may or may not be preceded and/or followed by one or more consonants. It follows that a name containing more syllables will also contain more vowels, and it will probably also have a higher vowel-to-consonant ratio.

But there’s one form of vowelliness which is strongly associated with girls’ names, and doesn’t just reflect their tendency to be longer. Many–including all this year’s top ten–end with an unstressed syllable whose final sound is either –a (in English usually pronounced with the ‘colourless’ sound known as ‘schwa’) or –ie. By contrast, six of the top ten boys’ names end in consonants (or eight, if you speak an r-pronouncing dialect of English).
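If you want to see the ‘vowelly’/‘consonanty’ contrast in rough numbers, here’s a crude letter-counting sketch of my own (spelling is only a loose proxy for sound, so treat it as illustrative rather than as real phonetic analysis; the name lists are the 2014 top tens quoted above):

```python
# Crude sketch: compare vowel-letter ratios and vowel-like final letters
# for the 2014 top-ten girls' and boys' names. Letters are only a rough
# proxy for sounds, so the numbers are illustrative, not phonetics.

GIRLS = ["Amelia", "Olivia", "Isla", "Emily", "Ava",
         "Poppy", "Isabella", "Jessica", "Lily", "Sophie"]
BOYS = ["Oliver", "Jack", "Harry", "Jacob", "Charlie",
        "Thomas", "George", "Oscar", "James", "William"]

VOWELS = set("aeiou")

def vowel_ratio(name):
    """Proportion of vowel letters in the name."""
    n = name.lower()
    return sum(c in VOWELS for c in n) / len(n)

def summarise(names):
    """(average vowel ratio, count of names ending in a vowel or -y)."""
    avg = sum(vowel_ratio(n) for n in names) / len(names)
    vowel_final = sum(n[-1].lower() in VOWELS | {"y"} for n in names)
    return round(avg, 2), vowel_final

print("girls:", summarise(GIRLS))
print("boys:", summarise(BOYS))
```

Even this blunt instrument shows the pattern: the girls’ list has a noticeably higher average vowel ratio, and all ten girls’ names end in a vowel letter or –y, against only three of the boys’ (‘Harry’, ‘Charlie’, ‘George’).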

There are some less obvious differences too. A phonetician colleague of mine* pointed out that the girls’ names are heavy on l-sounds and labials (consonants made with the lips, like p, m and v), whereas the boys’ names are heavy on coronals (made with the front part of the tongue, like s, t and ch). With vowels, the girls’ names tend to contain more high and front ones (like the ee sound in ‘Amelia’, the i in ‘Isabella’ and the e in ‘Emily’ and ‘Jessica’) whereas the vowels in the boys’ names tend to be lower and backer (like the a in ‘Harry’/’Jack’ or the o in ‘Oliver’/’Thomas’/’Oscar’).

We might wonder if these patterns are examples of the kind of sound symbolism that produces what’s known as the ‘kiki/bouba’ effect, after an experiment where people are given two different-shaped figures, one sharp and spiky, the other round and curvy, and asked which one should be called ‘kiki’ and which ‘bouba’. (The great majority choose ‘kiki’ for the spiky one and ‘bouba’ for the curvy one.) Maybe there’s some quasi-natural association between, say, femininity and high front vowels, and masculinity and low back vowels. Or maybe what matters isn’t the actual quality of the sounds so much as the contrast—one set of sounds occurring more frequently in male names and another set in female names.

I don’t want to rule out sound symbolism entirely, but for various reasons I don’t think it’s the main thing that’s going on here. In many cases the most plausible explanation has more to do with a combination of grammar and cultural history.

Many English female names were either borrowed from or modelled on languages in which –a is a grammatically feminine ending, like Latin and its descendants Italian and Spanish (from which we get ‘Amelia’ and ‘Isabella’). Some female names are derived from male ones by the addition of a feminine suffix, and those suffixes may also end in –a (e.g. –ina, -etta and -ella). A subset of the -ie names come from French, where –ie replaced the original –a on names like ‘Julie’ (from Latin ‘Julia’) and ‘Sophie’ (from Greek ‘Sophia’).  French was a prestige language in England from the late middle ages to the 19th century, and as such was the source of many high-class and fashionable names.

Another important source was the Bible, from which we get a cluster of originally Hebrew names which end (when pronounced in English) with the same schwa sound as the Latin/Spanish/Italian ones: they include ‘Deborah’, ‘Rebecca’, ‘Hannah’ and ‘Sarah’ (and a couple of boys’ names, ‘Joshua’ and ‘Noah’).

Collectively, these various imports have led English speakers to associate the –a ending with female names, and indeed to use it in names which are not imports but English inventions. ‘Olivia’ and ‘Jessica’ are examples: they may look Latin or Italian, but both were first used by Shakespeare.

Other –ie (or sometimes –y) names result from the use of that ending to form diminutive or ‘pet’ versions of names, like ‘Debbie’ for ‘Deborah’. (It’s also used in babytalk, as in ‘doggie’ and ‘kitty’.) This diminutive –ie/y form is not confined to girls’ names: it features in several of the most currently popular boys’ names, including ‘Harry’ and ‘Charlie’ from the top ten and ‘Alfie’, ‘Archie’ and ‘Freddie’ from the top twenty. But in the past it was more commonly used for girls. It’s also more common for girls to go on using an –ie diminutive in adulthood. Boys who as children were called ‘Tommy’ or ‘Timmy’ often substitute ‘Tom’ and ‘Tim’ when they reach the stage of finding the –y version childish and perhaps (not unrelatedly) a bit girly.

The popularity of monosyllabic male nicknames (‘Will’, ‘Bob’, ‘Joe’, ‘Jim’, ‘Frank’, ‘Dave’, ‘Steve’ et al) may be an example of a crude kind of sound-symbolism: monosyllables suggest a strong, no-nonsense stance which is opposed to feminine frilliness. Some women exploit this too, rejecting ‘girly’ diminutives like ‘Katie’ and ‘Cathy’ in favour of the monosyllables ‘Kate’ and ‘Cath’.

Wherever the associations come from, research suggests that English-speakers do attribute gendered meanings to certain sound patterns (and also spelling patterns) in personal names. An ingenious study of this phenomenon was done in the 1990s, making use of the African American tradition of giving children unique names. The researchers selected 16 names which had only ever been given to one child, and asked an ethnically mixed sample of people recruited at a shopping mall to say whether they thought the child was male or female. They wanted to know whether a cross-section of Americans could deduce this from the sound and spelling of names which they had never encountered before. It turned out that most people could. Their responses showed a high level of agreement, and in 13 out of 16 cases what they agreed on were the correct answers.

Almost everyone guessed, for instance, that ‘Lamecca’ (three syllables, ends in –a, contains a liquid and a labial) was a girl, while ‘Gerais’ (two syllables, beginning and ending with a coronal) was a boy. In these cases people may have been helped by the partial resemblance of the names to familiar ones like ‘Rebecca’ and ‘Gerald’, but they also did well with more unusual inventions like ‘Olukayod’. Although its length might suggest femaleness, the back vowels and, especially, the final d-sound cued the correct, male interpretation. (Women’s names ending in –d, like ‘Gertrude’ and ‘Winifred’, have fallen out of fashion and are now much rarer than male examples like ‘David’ and ‘Todd’.)

The incorrect answers were also instructive. While most people correctly identified ‘Jorell’ as a male name, ‘Furelle’, also in fact a boy’s name, was consistently misidentified as female—presumably because of the spelling ‘elle’, which is familiar from French feminine forms like ‘Danielle’ and ‘Michelle’. Another name that most people wrongly judged female was ‘Chanti’. The researchers speculated that they interpreted the initial ‘Ch’ as a sh-sound, and associated that with names like ‘Charlotte’ and ‘Cher’, while the –i ending was reminiscent of ‘Heidi’ and ‘Lori’.
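The spelling cues the subjects seem to have relied on can be caricatured as a tiny rule-of-thumb guesser. This is my own toy illustration, not the researchers’ method, and the list of ‘feminine’ endings is an assumption extrapolated from the patterns discussed above; ‘F’ and ‘M’ here stand for how a name is likely to be perceived, not for the sex of its actual bearer:

```python
# Toy rule of thumb: guess how a reader would gender an unfamiliar name
# from its ending alone. A caricature of the cues discussed in the text,
# not the researchers' procedure; the endings list is my own assumption.

FEMININE_ENDINGS = ("a", "ie", "y", "i", "elle")  # -a, diminutive -ie/-y, French -elle

def perceived_gender(name):
    n = name.lower()
    if n.endswith(FEMININE_ENDINGS):
        return "F"
    # consonant-final names (final -d especially) tend to read as male
    return "M"

for name in ["Lamecca", "Gerais", "Olukayod", "Jorell", "Furelle", "Chanti"]:
    print(name, perceived_gender(name))
```

Crude as it is, ending-matching alone reproduces the subjects’ behaviour on these examples: it gets ‘Lamecca’, ‘Gerais’, ‘Olukayod’ and ‘Jorell’ right, and it makes exactly the same mistakes they did, classing ‘Furelle’ and ‘Chanti’ as female.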

This study shows that even unique names are not invented without reference to pre-existing conventions. Their creators are guided both by the overarching convention that names should mark gender in some way, and by the more specific conventions that define what is gender-appropriate in terms of sound, spelling, structure and sometimes meaning (we don’t generally call boys by flower names like ‘Poppy’ and ‘Lily’, for instance).

But in this domain as in others, gender norms are contested rather than monolithic, and there are differences between social groups. There’s evidence, for instance, that mothers with college-level education tend to resist giving their daughters stereotypically feminine names, and their search for alternatives can sometimes set wider trends.

A case in point is the recent popularity of the names ‘Erin’, ‘Lauren’ and ‘Megan’. According to researchers who have examined this trend, the educated mothers who were the first to adopt these (at the time, uncommon) names were drawn to them because of a desire to steer a course between the extremes of hyper-femininity and androgyny. The –n names marked gender clearly, but in an understated way. Ending in a consonant meant they didn’t have the ‘frilly’ feminine connotations of –ie and –a, but nor did they have the ‘tough’ masculine associations of plosives like the –k in ‘Jack’. There was also a historical precedent in feminine –ine names like ‘Christine’ and ‘Caroline’, which meant that although the new names were distinctive, they were not so different as to seem weird.

It’s been suggested that today’s most popular girls’ names may answer a similar need for girls’ names that are neither excessively feminine nor aggressively unfeminine. Pondering the rise of ‘Emily’, ‘Isabelle’ and ‘Amelia’, Pamela Haag comments:

These are all lovely, pretty names—earnest, formal, dignified, and strong. They’re also palpably old-fashioned if not anachronistic. They convey strength with tradition; independence with convention; spunkiness with formal propriety; and rebelliousness, but with a softening, antique patina.

Haag thinks the vogue for these old-fashioned names is a sign of our conservative times. The key point is that they are conventionally feminine without being too frivolously girly: pretty, but also ‘dignified and strong’. This may appeal to women who are feminist up to a point, but do not have the revolutionary ambitions of earlier generations; mothers who, as Haag puts it,

want girl power for their daughters, but they want girl power that is softer, and not so socially objectionable or polarizing.

While writing this post I read a number of pieces addressing the question of what feminists should call their children. Most suggested naming girls after pioneering feminists and other Great Women of History. This approach does throw up one or two names which would be unusual and daring choices (like ‘Boudicca’ and ‘Sojourner’), but mostly it just recycles the conventionally feminine names which women were given in the past. None of the writers suggested inventing new feminist names (one piece was headed ‘18 feminist names you can give your kid without naming them Katniss’), and no one questioned the basic assumption that children should have clearly gendered, distinctively masculine or feminine names.

Of course there’s nothing wrong with commemorating our herstory and honouring our feminist foremothers by passing on their names to a new generation. I’m not suggesting that our goal should be to replace ‘Amelia’ with ‘Katniss’ (though ‘Katniss’ might yet make an appearance: last year 244 British girls were named ‘Arya’, and nine were called ‘Daenerys’). And I’m certainly not in the business of telling anyone what they should or shouldn’t call their child. But I do find it interesting that current patterns of gender differentiation in English personal names are so similar to those reported in research using data that goes back a hundred years. The specifics of our naming choices may be susceptible to fads and fashions, but the underlying principles seem remarkably resistant to change.

*Thanks to Elinor Payne (I’ve simplified some things in the interests of accessibility to non-specialists: that’s my responsibility, not hers).