"We need transparency."
It's a No
Among the latter group is Springer Nature, arguably the world's foremost scientific journal publisher. Speaking to The Verge, the publishing giant announced a decision to ban listing ChatGPT and other Large Language Models (LLMs) as coauthors on scientific studies — a question the scientific community has been locking horns over for weeks now.
"We felt compelled to clarify our position: for our authors, for our editors, and for ourselves," Magdalena Skipper, editor-in-chief of Springer Nature's Nature, told the Verge.
"This new generation of LLMs tools — including ChatGPT — has really exploded into the community, which is rightly excited and playing with them," she continued, "but [also] using them in ways that go beyond how they can genuinely be used at present."
Importantly, the publisher isn't outlawing LLMs entirely. As long as they properly disclose LLM use, scientists are still allowed to use ChatGPT and similar programs as assistive writing and research tools. They just aren't allowed to give the machine "researcher" status by listing it as a co-author.
"Our policy is quite clear on this: we don't prohibit their use as a tool in writing a paper," Skipper tells the Verge. "What's fundamental is that there is clarity. About how a paper is put together and what [software] is used."
"We need transparency," she added, "as that lies at the very heart of how science should be done and communicated."
We can't argue with that, although it's worth noting that the ethics of incorporating ChatGPT and similar tools into scientific research isn't as simple as making sure the bot is properly credited. These tools are often sneakily wrong, sometimes providing incomplete or flat-out bullshit answers without sources or in-platform fact-checking. And speaking of sources, text generators have also drawn wide criticism for plagiarism, which, unlike regular ol' pre-AI copying, can't be reliably caught with plagiarism-detecting programs.
And yet, some arguments for ChatGPT's use in the field are quite compelling, particularly as an assistive English tool for researchers who don't speak English as a first language.
In any case, it's complicated. And right now, there's no good answer.
"I think we can safely say," Skipper continued, "that outright bans of anything don't work."