Editor Predator
A stranger wants to publish my work. She doesn't exist.
I wrote an article about an imaginary city in the Middle East, and an academic journal accepted it for publication. Researching and writing the article took many dogged hours. The journal ran it through the rigorous checks of blind peer review; the blind peers flagged concerns and recommended adjustments, which I dutifully made. After weeks of back-and-forth the article was ready to venture into the world. Publication day was a significant milestone in my life, but the significance was lost on everyone else. Since my article appeared, back in midwinter, it’s had a smattering of downloads and sweet naff all in the way of citations.
That’s how I planned to begin this Substack post, but it doesn’t represent the whole truth – and since the post is going to be about the genuine and the fake, and their rapid, ongoing, double-helical entwinement, I think I should at least begin with the whole truth. My piece about the imaginary Middle Eastern city wasn’t a work of scholarship; it was a short story. There was no gruelling editorial back-and-forth, and no monastic confinement to the archives in its production. I dreamed the story up in the summer of 2022, as I walked to work and back again. I wrote it over a weekend, most of which I spent in my pyjamas, and then polished it obsessively for about twenty-six months until I submitted it, on a whim, last year.
All the other parts are true. The publication is an academic journal: it concentrates on scholarly discussion of the pedagogy of creative writing, but carries a few short stories in each volume. And it was a significant milestone in my life: I’ve published a reasonable amount of academic work, but always dreamed of publishing fiction. And the significance was lost on everyone else, if significance is measured in citations.
In February an email landed in my inbox with my short story’s title as the subject. My stomach swooped: at last, engagement! I desperately wanted my story to be liked, and desperately feared that I had offended or disappointed someone.
After reading your paper, said the sender of the email, I was genuinely impressed by your insightful conclusions. A London-based journal, continued the sender, was placing me under consideration for the status of Invited Author. The journal specialised in the humanities and social sciences (two out of three, then, of the macroscopic divisions of all intellectual enquiry; not bad). It was renowned for its high standards, the sender claimed, as well as the rigorous quality of its peer-review, and its adherence to top specifications.
The sender introduced herself: she was the journal’s managing editor. Her name had an alluring euphony: the given name was the beautiful Hebrew word used in the Bible to mean pause or rest, but her surname – which she shares with a pommel-horse-specialising artistic gymnast from Hertfordshire – sounded fantastically English. Under her signature she provided her headshot, cropped in the circular LinkedIn fashion. She was about to attend a scientific meeting, the editor explained, in Barcelona – so would I kindly provide my WhatsApp details, to facilitate seamless communication?
The way it works is this: suspending their suspicion, the precarious young scholar hungry for professional advancement replies enthusiastically, eager to take up the offer. They submit another article, perhaps the article they’ve been yearning to write for a few years, sensing in the heft and status of Invited Author the chance to publish something daring and contentious, something from the heart. Then, at the last minute, the cheery request comes through from whatever pretend journal they’re dealing with: the small matter – but not so small, to the precarious young scholar – of the publication fee.
Predatory journals have been around for decades; I’ve published enough in my academic career, just, to draw their attention. But what felt new about this particular request was that the editor doesn’t exist. Her name, her picture, and all her words: everything was generated by a chatbot.
All of us are constantly fending off scams, in our various walks of life. Scamming is a much more venerable human practice than sending and receiving emails – older even than the institution of the university. That predatory publications should have started to harness the bullshitting capacities of generative AI is no surprise.
What’s strange about this particular scam, though, is that in part it’s founded on reality. My would-be editor might be a façade, but the journal is real in more than the technical sense: it has the accreditation it claims to have, rubbing shoulders in the paywalled scholarly databases with the most prestigious titles in the humanities. The authors of its articles are real too, and though they’re mostly like me – untenured junior academics, treading water and trying to stay afloat – there are also some established names: inexplicably, a former high-ranking UN official recently published his reflections on the state of the world.
We spend a lot of time preparing for a fully artificial world: that uncanny future which seems (for better or worse) to lie just around the corner, where books and authors, lovers and tutors, have been rendered wholly robotic. But we do this at risk of neglecting the world that already exists, and will go on existing – a world characterised by weird, monstrous hybrids of the fake and the real. The world in which a pirate journal employs real academics for the grunt work, but generates a fake one for the critical task of entrapment; the world in which every professional email is a co-production of human prompt and chatbot execution. That’s the near future. There’s nothing genuinely new about it, it’s pretty tedious and bathetic, and it’s already begun.
But as the sphere of human reality is colonised by the fakery of the robotic, we should do more than just lament the corruption of the real by the fake. We should also ask why this colonisation has been allowed to take place – ask what has happened to human reality, that is, to make its monstrous entwinement with fakery so possible. The answer, I think, is something like ‘industrialisation’.
The idea of the university is an ancient one, but the idea of a modern university – professionalised, accessible, and subject to cross-institutional standards and regulations – is surprisingly recent. Despite starry-eyed nostalgia for the era of gentlemanly scholarship, and the recovering academics who mutter darkly about the horrors they endured, the modern university is a good idea. But the modern university has a dark side, and the dark side is bureaucratisation.
Gone are the days when academic life was completely informal: where professional advancement necessitated a knock on the door, a friendly chat, a favour repaid or a connection deployed. And good riddance. Analysts from eastern Europe, less swaddled from historical consciousness than we are in the complacent west, remind us that informality is the essence of corruption. In the post-gentlemanly university, the post-corrupt society, it no longer matters who you are; it matters what you do – the quality of your teaching and research.
But this raises the question: how can we determine that quality, in a way that balances the fairness of professional standards with the subtle flexibility of something personable? Over time, the creeping bureaucratisation of the university has made that balance impossible; the dial has moved ever further from ‘personable’, and ever closer to ‘professionalisation’. But the notion of ‘professionalisation’ has, as a consequence, grown narrow – so narrow, in fact, that it no longer has much to do with fairness.
In lean times, this means that academia becomes a competitive frenzy, a throng of juniors competing for a handful of senior positions. ‘Publish or perish’ means churning out research at remorseless pace; it makes for a style which is brassy and confrontational (‘EVERYONE who has written about this before ME is WRONG’) and at the same time safe and unoriginal (‘in conclusion, we MUST… think harder about this!’). Swept up in a torrent of its own research, and desperately trying to keep things regular, the academy jettisons quality and starts obsessively quantifying: an article’s worth is determined not by its high estimation in the judgment of a small group of informed peers, but by how the article performs – as if endowed with its own life – in the giant grid, the extra-institutional database. Demanding absolute objectivity, the grid determines an article’s merit by numbers of views, downloads and citations.
As a consequence, research in the humanities begins to disguise itself, assuming the methodical regularity and formal impersonality of science. Science is a towering miracle, the greatest intellectual achievement of recent centuries. But science is not designed to answer every question, and the non-scientific disciplines suffer when they have to put on scientific clothes.
My short story fits especially awkwardly into the quasi-scientific mould. There’s a debate about the virtues and drawbacks of fiction-writing becoming an academic discipline, absorbed into the structures of the university, but I think it’s a red herring. The more interesting incongruity is that of all pieces of humanistic writing – fictional, scholarly and everything in between – in their natural essayistic form, their personal idiosyncrasy, with the stiff data-secreting shells into which they are forced. Living form becomes prefabricated format. The seeds of the havoc currently being wrought in academia by AI – on the supply side, predatory journals; on the demand side, LLM-generated essays and other forms of cheating – weren’t sown with the rollout of ChatGPT. They were sown when the profession began pseudo-scientifically to quantify everything, and neglect all other forms of value: when it industrialised.
I can’t say for certain how the journal in question makes its money. If I’d followed the LLM-generated editor’s lead, messaged ‘her’ while ‘she’ attended her ‘scientific meeting’ in ‘Barcelona’ – and every third word a lie, duer paid to the hearer than the Turk’s tribute – if I had struck up a relationship, submitted work, maybe it would have ended in me transferring a sum, and then perhaps being scammed out of greater sums. Maybe they didn’t want my money, though; maybe they’re just farming me for content, exploiting a junior scholar’s frantic impulse to publish, padding out their ersatz journal with the clout of something real, so that libraries and scholarly databases will buy it up.
It’s easy to cordon off this pirate journal and its practices from the established ones, the ones in which my contemporaries and I dream of being published. The established journals don’t charge fees. But neither do they make you feel particularly good about yourself. One grand old journal has been hoarding an article of mine, the way a lazy spider hoards a fly, for three years; sometimes I enquire about its progress, and feel momentarily what it must have been like to engage with a government official in the Soviet Union. They have no time for people like me; they are busy greasing the wheels of the machine.
There’s a fantasy out there, originating in the postwar years, of academia as a particularly wholesome kind of life: a balanced existence of fulfilling work and quaint comforts, insulated from the hard ways of the world. It reassures people, this fantasy, even when they live very far from it, even when they privately scorn the academic life as beneath them. But the fantasy is well past its expiry date. Academia has become a path as demanding of sharp elbows as any other. You may not pay the proper journals to be published, but you spend large chunks of the postdoc years working for free.
My short story’s a tale of two cities: the unnamed, semi-imaginary Middle Eastern one; and a western city, also unnamed, where the protagonist is a long-term patient at a blood clinic, from whose waiting room he watches news footage of the terrible civil war consuming the Middle Eastern city. He’s travelled there himself, but his memories are growing hazy. He remembers a search for an elusive sanctuary called the Tarrying Garden, but his reminiscence is overtaken by images of civil war, and the two cities – western and eastern, modern and medieval – start to merge. I was thinking in particular of Damascus, where in 2008 I spent a few days: unforgettable but at the same time, in light of the terrible long war that followed, increasingly remote.
By the looks of her AI-generated circular profile picture, my would-be editor comes from a city like Damascus. She’s a Levantine woman, with shoulder-length dark hair and piercing brown eyes, shrewd but warm. She’s dressed with modest elegance, in a blue shirt and grey cardigan. She could be a supporting character from Philip Pullman’s recent Book of Dust trilogy, perhaps descended from the people driven out of the ruined Madinat al-Qamar, the City of the Moon. Her expression communicates a characteristically Pullmanesque combination of humility and fearlessness. Though I honour the past, she seems to say, I am forward-thinking, progressive, courageous.
But in case we get scared of the powerful Levantine woman in the foreground, the AI-generated picture’s background is there to soothe us. In the background everything is venerable; everything is quaint, where ‘quaint’ is understood as synonymous with ‘English’. Leather-bound tomes line the shelves, charmingly irregular. The editor turns to us from her desk, where more leather-bound books are stacked in piles. In reality, rare books like these are kept systematically, handled with immense caution; in the picture, an earthenware coffee cup, coloured a tasteful terracotta, steams beside them. The editor has a sheaf of parchment in front of her and beside it, erect in its stand, a quill. We’re in Dumbledore’s office, but Dumbledore is a chatbot pretending to be a woman, pretending to be in Barcelona, and she wants your phone number.
Boiled down to a set of clichés, England and academia become the same thing: a parody of oldness, an antiquated patina to cast over contemporary hi-tech profiteering. Tradition is understood here not as something living, a great evolving chain that includes and inflects the present; tradition is simply the wallpaper, the fixtures and fittings. It’s cunning and calculated, of course, but it’s also highly reactionary: an aesthetic appeal to that which is old, not for what it can teach us, but only for its legitimising authority. An instance of the same right-wing impulse that underlies the performative masculinity of the toxic influencers, and the cringeworthy aping of Englishness by so-called Anglofuturist podcasters.
The species of bullshit I’m most familiar with involves letting particular knowledge stand for general expertise. Some people have the knack of drawing on a single detail – a precise date, or an impressive fact, a line from Shakespeare or a neatly expressed aphorism about him – in a manner that constructs an instant aura of comprehensive knowledge. Look how effortlessly I retrieved that detail, they say – isn’t it logical to assume I know everything? This is the kind of bullshit of which the gentleman, and the Englishman, have historically been most fond. You hear it a lot in the House of Commons; you also hear it, shock horror, in universities.
Chatbot-generated bullshit is intriguingly different. Rather than the confidence trick of English bullshit, that single authoritative detail, chatbots base their bullshitting strategy on the bureaucratic tactic of saying nothing in many words. But unlike human bureaucrats – purveyors of institutional officialese designed to smokescreen the institution from public access – chatbots desperately want to talk to us. So rather than colourlessly stiff officialese, we have a weird language which is at once insistently personable and creepily devoid of specificity. I personally found your paper particularly compelling, my fake editor told me, because of its meticulous approach. In a convincing human sentence, the soft-focus set-up would snap into resolution with something concrete, an actual detail about my story. Instead we skate from one hazy generality (particularly compelling) to another (meticulous approach). Try as it might, this sentence can’t achieve the personable quality it’s striving for, the specificity that might seal the deal.
It’s very smooth, the way a scar is smooth, or the skin of a doll. The distastefulness of human dishonesty combined with the repulsiveness of machines pretending to be people. Of course, the chatbots will get better and better at bullshitting; undoubtedly, they’re already good enough to deceive lots of us, lots of the time. Whether chatbot-speak will ever be able to rid itself of this uncanny smoothness is another question.
A chatbot improves, in part, thanks to increases in computational power, and for such increases you need ingenious developments in engineering. But chatbots also improve by being fed more data. The moment when I knew for sure that my editor was fake came when I googled her full name, and found that the only match was in a work of fiction, a teenage romance written by an American woman. Somewhere, somehow, an LLM gained access to that author’s text, and now it spews it out, claiming it as its own. Most of the time, the words get jumbled up, rearranged to disguise the essential thievery. Acts of recombination are packaged and promoted as acts of miraculous creation. Here, the chatbot didn’t even bother with recombination – it just seized a human author’s intellectual property. When the AI accelerationists whine and bleat about regulation, this is frequently what they mean: our robots are hungry! Let them eat your words!
Everyone uses chatbots to cut corners, and everyone has to make a personal decision about how much fakery they can tolerate. Generating pro forma email responses is one thing; cheating on your essay might be another. Neither of these is remotely as dishonest or potentially harmful as inventing an identity and trying to scam precarious academics out of their money or time. When we use chatbots in clever corner-cutting ways, though, we instinctively think of ourselves as morally restrained versions of my editor predator: we’re being streetwise and cunning, maybe a little unromantic, but that’s the way of the world. Yet maybe this is the wrong way round. Though it’s possible to use a chatbot on our own terms, as a tool over which we exercise control, without endowing it with quasi-personal agency, it’s also fiendishly difficult, because the people selling us the chatbots want us to think otherwise. As soon as we start to endow the chatbot with life – whatever apparent gains it delivers us – we are perhaps not a benign miniature of the scammer, but a prize example of the scammed.