
Nature's new rules: Writing papers with ChatGPT is OK, but listing it as an author is not

James, from Aofei Temple

Qubits | Official account QbitAI

Faced with ChatGPT, Nature finally couldn't sit still.

This week, the authoritative academic publisher stepped in and ruled on a series of contested questions, such as whether ChatGPT may ghostwrite research articles or be listed as an author.

Specifically, Nature lists two principles:

(1) No large language model (LLM) tool, such as ChatGPT, can be credited as an author of a paper;

(2) If such tools were used in writing a paper, the authors should clearly document that use in the Methods or Acknowledgements section, or in another appropriate part of the paper.
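In practice, such a disclosure might read something like the following (a hypothetical wording for illustration; Nature does not prescribe exact phrasing): "The authors used ChatGPT (OpenAI) to help draft and edit portions of this manuscript. All output was reviewed, corrected, and verified by the authors, who take full responsibility for the content of the paper."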

These requirements have now been added to the Author Submission Guidelines.

Recently, ChatGPT has been making ever deeper inroads into academic research.

In a paper last December on rapamycin's anti-aging applications, ChatGPT was listed as a co-author, stirring controversy in the field. Nor is that article alone: a number of other studies have also listed ChatGPT as an author.

Nature took notice as well: in a survey it conducted in December, 20 percent of the 293 professors and teachers polled said they had noticed or witnessed students using ChatGPT to complete assignments or papers, and many more expressed concern.

Nature's statement this week is aimed precisely at settling that controversy.

Several papers have listed ChatGPT as an author

Launched by OpenAI in late November last year, ChatGPT quickly became the breakout hit of the turn of the year, its performance far exceeding that of earlier large language models (LLMs).

In new media, film and television, software development, and interactive gaming, ChatGPT has been quickly adopted to assist production and improve efficiency.

The academic community is no exception.

According to Nature, at least four papers or preprints have used ChatGPT and listed it as an author.

One is a preprint posted in December 2022 on the medical preprint server medRxiv, examining how ChatGPT performed on the United States Medical Licensing Examination. Although the research was about ChatGPT, ChatGPT also appeared in the author list.

Another paper, published in the journal Nurse Education in Practice on the pros and cons of open AI platforms in nursing education, likewise includes ChatGPT in its author list.

The third paper, from the AI drug-discovery company Insilico Medicine, concerns the drug rapamycin and was published in Oncoscience. It, too, lists ChatGPT as an author.

The fourth article is a little "older": published in June 2022, it explores how capable AI is of writing papers about itself. The AI in its author list is not ChatGPT but GPT-3, which was released earlier.

Although all of these studies are more or less about generative language models, in serious scientific research, having the "research subject" write the paper and putting it in the author list inevitably invites controversy and questioning.

And even if AI is not listed as an author, the use of ChatGPT in academic circles is becoming more and more common.

Alex Zhavoronkov, CEO of the AI drug-discovery company Insilico Medicine, revealed that his organization has published more than 80 papers generated with AI tools.

British professor Mike Sharples has long been concerned about generative AI's impact on academic research. Not long ago he demonstrated on Twitter how to use a large language model to generate an academic paper in ten minutes, describing step by step how he had the AI produce an abstract from a title; that, too, drew plenty of discussion.

It is even more common for students to use ChatGPT to help write essays, generate code, and complete assignments, and some regional school systems in the United States have now banned the tool.

Just last week, a Northern Michigan University student used ChatGPT to write the highest-scoring essay in his class.

Interestingly, the student was caught precisely because the essay was so logically coherent and well structured that the instructor questioned him directly and learned the truth.

△ Northern Michigan University

To be clear, getting caught like this is the exception; more often, generated content has made it genuinely hard for scientists to tell real from fake.

An earlier Nature article noted that after ChatGPT's release, a research team at Northwestern University in Illinois set out to test whether scientists could identify medical paper abstracts generated with the tool.

They found that all of the AI-generated abstracts passed a plagiarism detector, and in human review, 32 percent of them were judged to have been written by real humans.

Identification tools are already under development

Nature's move shows that it takes the problems raised by ChatGPT seriously.

In the latest statement, Nature said the academic community worries that students and researchers may pass off content generated by large language models as their own writing; beyond the risk of plagiarism, the practice can also produce unreliable research conclusions.

With ChatGPT in particular, the Nature team, along with many publishers and platforms, believes the tool cannot take responsibility for the integrity of a scientific paper or its content.

One piece of supporting evidence: the technology outlet CNET was revealed in recent days to have published 77 AI-written pieces, 41 of which contained errors; the site has since issued corrections and said it will pause producing content this way.

It is out of these concerns that Nature introduced the new rules.

How, then, can AI-generated content be told apart?

Nature says ChatGPT's raw output can currently be spotted on careful inspection, especially in specialized scientific work, where the text may contain elementary errors and generic, tedious wording. Nature also said it is developing identification technology of its own.

It is worth mentioning that identification tools have already been built elsewhere.

For example, OpenAI's own GPT-2 Output Detector can identify AI-generated text fairly reliably when given an input of more than 50 tokens.

Another example is GPTZero, a tool built by Princeton student Edward Tian.
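For readers curious what using such a detector looks like in practice, here is a minimal sketch that queries the publicly released GPT-2 output detector checkpoint through the Hugging Face transformers library. The model id and its "Real"/"Fake" labels are assumptions based on that public checkpoint, not details given in this article:

```python
# A minimal sketch of querying the publicly released GPT-2 output detector
# via the Hugging Face `transformers` library. The model id and its
# "Real"/"Fake" labels are assumptions based on that public checkpoint,
# not details taken from this article.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# As the article notes, such detectors need upwards of ~50 tokens of input;
# very short snippets give unreliable scores.
sample_text = (
    "Rapamycin is an mTOR inhibitor that has been studied for its potential "
    "to slow aging in model organisms, and recent work has explored its "
    "broader implications for longevity research."
)

result = detector(sample_text)[0]
print(f"label={result['label']}, score={result['score']:.3f}")
```

GPTZero takes a different approach, scoring passages on statistics such as perplexity, but the basic workflow is the same: feed in a passage, get back a human-versus-machine judgment.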

Not all university professors and teachers take a negative attitude toward AI generation tools, though.

Ethan Mollick, a professor at the University of Pennsylvania's Wharton School, for example, goes the other way and requires students to use ChatGPT for coursework, saying that he is embracing emerging technology tools.

A staff member at the preprint platform medRxiv also said that ChatGPT is not a new kind of problem.

He noted that researchers have previously tried to sneak pets and fictional characters into author lists, and argued that the core of the problem is therefore the need to keep strengthening checks.

Finally, do you use ChatGPT in your scientific work?

Reference Links:

[1]https://www.nature.com/articles/d41586-023-00191-1

[2]https://www.nature.com/articles/d41586-023-00056-7

[3]https://www.nature.com/articles/d41586-023-00204-z

[4]https://www.theverge.com/2023/1/26/23570967/chatgpt-author-scientific-papers-springer-nature-ban

[5]https://www.businessinsider.com/wharton-mba-professor-requires-students-to-use-chatgpt-ai-cheating-2023-1
