OpenAI's viral text generator ChatGPT has made some serious waves over the last couple of months, offering the public access to a chatbot that's arguably a vast improvement over its numerous and deeply flawed predecessors.
In fact, one group of researchers is now so confident in its capabilities that they've included it as a coauthor in a scientific paper, marking yet another inflection point in the rise of AI chatbots and their widespread use.
A not-yet-peer-reviewed paper on ChatGPT's ability to pass the United States Medical Licensing Exam (USMLE) lists 11 researchers affiliated with the healthcare startup Ansible Health — and ChatGPT itself, raising eyebrows amongst experts.
"Adding ChatGPT as an author was definitely an intentional move, and one that we did spend some time thinking through," Jack Po, CEO of Ansible Health, told Futurism.
The move sparked a debate online about AI chatbots playing an active role in current scientific research, despite often being unable to distinguish between truth and fiction.
Some users on social media called the move "deeply stupid," while others lamented the end of an era.
The Ansible Health paper is part of a broader trend. In a report this week, Nature found several more examples of scientists listing ChatGPT as an author, with at least one chalked up to human error.
The move has publishers scrambling to adjust to a new reality in which chatbots are actively contributing to scientific research — to various degrees, that is.
Leadership at the repository bioRxiv, which published Ansible Health's preprint back in December, told Nature that they're still debating the pros and cons of allowing ChatGPT to be listed as an author.
"We need to distinguish the formal role of an author of a scholarly manuscript from the more general notion of an author as the writer of a document," bioRxiv co-founder Richard Sever told the publication.
Po, however, who wasn't listed as an author himself but copied senior author Victor Tseng on emails to Futurism, defended his academic peers' decision to include ChatGPT as an author.
"The reason why we listed it as an author was because we believe it actually contributed intellectually to the content of the paper and not just as a subject for its evaluation," he told us, "just like how we wouldn't normally include human subjects/patients as authors, unless they contributed to the design/evaluation of the study itself, as well as the writing of the paper."
Po was careful to note, however, that ChatGPT didn't provide "the predominant scientific rigor and intellectual contributions."
"Rather, we are saying that it contributed similarly to how we would typically expect a middle author to contribute," he explained, adding that he was taken aback by "some of the reactions online at the moment."
Po went so far as to argue that he would be "shocked" if ChatGPT and other large language models (LLMs) aren't "used in literally every single paper (and knowledge work) in the near future."
But seeing AI chatbots as "authors" still isn't sitting well with publishers.
"An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs," Nature editor-in-chief Magdalena Skipper told Nature's news arm.
"We would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism," Science editor-in-chief Holden Thorp added.
For his part, Po doesn't understand what all the fuss is about.
"I think some of this debate is missing the point and just shows how much angst there is from knowledge workers who are now under (some might argue existential) threat," Po told Futurism, arguing that generative adversarial networks, machine learning frameworks capable of producing entirely new and photorealistic images, have already been around for a decade producing novel input data and making contributions to scientific papers.
The debate over ChatGPT being credited as an author on scientific papers is symptomatic of a considerable push forward for AI-powered tools, and of the reactions that push is provoking.
Do these responses amount to kneejerk reactions — or are they legitimate qualms over algorithms meddling with the affairs of human scientists?
The debate is likely only getting started, and as Nature notes, several papers are set to be published crediting ChatGPT as coauthor in the near future.
But if there's one thing that both Po and scientific publications can agree on, it's the fact that the AI chatbot's feedback will need to be taken with a massive grain of salt.
After all, its knowledge is only as good as the data it was originally trained on.
READ MORE: ChatGPT listed as author on research papers: many scientists disapprove [Nature]
More on ChatGPT: College Student Caught Submitting Paper Using ChatGPT