As discussed by Jaime A. Teixeira da Silva in an article in September’s ‘Science Editor’ (https://www.csescienceeditor.org/article/detection-and-elimination-of-tortured-phrases/), it’s no secret that academic publishers are gradually cutting copyediting resources. However, they are doing so at their peril, given the growing ubiquity of manuscripts containing AI-generated text, whether from non-native English-speaking authors needing help with their writing or from less scrupulous authors trying to slip under the radar of plagiarism software. A copyeditor would flag such nonsense before a paper goes to press, saving all that costly and damaging retraction and correction malarkey.

Sadly, no sudden reversal in the employment rate of copyeditors is likely any time soon, so what can be done to support skeletal human quality-control resources? Weirdly, the solution could be AI – or, more specifically, large language models – which, early evidence has shown, can detect gibberish phrases and substitute the correct terminology in scientific publications.
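
To make ‘gibberish phrases’ concrete: tortured phrases are mangled synonym swaps such as ‘counterfeit consciousness’ for ‘artificial intelligence’ or ‘bosom peril’ for ‘breast cancer’. Purely as an illustration, and not the approach described in the article, the toy Python sketch below flags a handful of documented tortured phrases by simple lookup and suggests the established term; the phrase list and the screen() function are hypothetical stand-ins for the kind of detection and substitution an LLM-assisted screen would attempt at scale.

```python
# Toy illustration only: a lookup table of documented tortured phrases and
# their established equivalents, standing in for what an LLM-based screen
# might flag and correct. Not the method described in the cited article.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "colossal information": "big data",
    "irregular woodland": "random forest",
    "bosom peril": "breast cancer",
}

def screen(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, suggested replacement) pairs found in the text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in TORTURED_PHRASES.items() if bad in lowered]

if __name__ == "__main__":
    sample = (
        "Counterfeit consciousness and profound learning were applied to "
        "detect bosom peril in the screening cohort."
    )
    for bad, good in screen(sample):
        print(f"Flag: '{bad}' -> consider '{good}'")
```

A fixed phrase list can only catch known offenders, of course; the promise of large language models is spotting novel garbling in context, which is exactly where a human copyeditor currently earns their keep.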

Is it the death knell for copyeditors, then? Jaime would rather it weren’t, at least not entirely; as with all things, a balance is needed. I would argue that experienced human copyeditors have a better understanding of context, can look beyond the words to track the ‘knock-on’ effects of errors and ambiguities, and are already well trained. It doesn’t take much imagination to see that this AI snake may eventually eat itself. But then again, imagination never got in the way of a good cost-cutting measure.