There is certainly cause for concern when it comes to using AI in the pursuit of science. For instance, earlier this year, we witnessed the viral sensation of several egregiously bad AI-generated figures published in a peer-reviewed article in Frontiers, a reputable scientific journal. Scientists on social media expressed equal parts shock and ridicule at the images, one of which featured a rat with grotesquely large and bizarre genitals. The paper has since been retracted, but the incident reinforces a growing concern that AI will make published scientific research less trustworthy, even as it increases productivity.
That said, there are also some useful applications of AI in the scientific endeavor. For instance, back in January, the research publisher Science announced that all of its journals would begin using commercial software that automates the process of detecting improperly manipulated images. Perhaps that would have caught the egregious rat genitalia figure, although as Ars Science Editor John Timmer pointed out at the time, the software has limitations. “While it will catch some of the most egregious cases of image manipulation, enterprising fraudsters can easily avoid being caught if they know how the software operates,” he wrote.
Hawks acknowledged on his blog that the use of AI by scientists and scientific journals is likely inevitable, and he even recognized the potential benefits. “I don’t think this is a dystopian future. But not all uses of machine learning are equal,” he wrote. To wit:
[I]t’s bad for anyone to use AI to reduce or replace the scientific input and oversight of people in research—whether that input comes from researchers, editors, reviewers, or readers. It’s stupid for a company to use AI to divert experts’ effort into redundant rounds of proofreading, or to make disseminating scientific work more difficult.
In this case, Elsevier may have been aiming for good but instead hit the exacta of bad and stupid. It’s especially galling that they demand transparency from authors but do not provide transparency about their own processes… [I]t would be a very good idea for authors of recent articles to make sure that they have posted a preprint somewhere, so that their original pre-AI version will be available for readers. As the editors lose access, corrections to published articles may become difficult or impossible.
Nature published an article back in March raising questions about the efficacy of mass resignations as an emerging form of protest after all the editors of the Wiley-published linguistics journal Syntax resigned in February. (Several of their concerns mirror those of the JHE editorial board.) Such moves certainly garner attention, but even former Syntax editor Klaus Abels of University College London told Nature that such mass resignations should aim to move beyond mere protest, focusing instead on establishing new independent nonprofit journals for the academic community that are open access and uphold high academic standards.