DALL-E is happy to be named as an author on a scientific journal article.

Who cares whence the words?

3 minute read

A lot of hubbub and fuss over a few authors listing ChatGPT as an author on articles in scientific journals (Stokel-Walker 2023). On closer inspection, nothing really dramatic is going on.

The Nature news item lists four known cases, two of which were editorial process oversights. We can ignore those. One case looked into GPT’s capacity to write about itself. This is, I would argue, a category error. Listing excerpts from what GPT writes about itself is quoting. Normally you would refer to the publication you quote from, and in this case one would need to refer to the date of use, version, prompt, and settings. Properly citing a ChatGPT prompt is probably a rabbit hole by itself, but one could and should (a sketch follows this paragraph). Listing GPT as an author in such a case is like listing Umberto Eco as an author because you quote his The Theory of Signs and the Role of the Reader (1981) at length in your article. That is just not right and not proper. One should cite, not appropriate authorship, in such a case. No doubt the whole category play is intentional and fun, but in the end it is also a bit… well, foolish and whimsical.
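To sketch what such a citation might look like, purely hypothetically as there is no established standard, a reference could read along the lines of: OpenAI ChatGPT (gpt-3.5-turbo, default settings), response to the prompt “Write about your own capacities as an author”, generated 20 January 2023, chat.openai.com. The model, prompt, and date here are invented for illustration.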

The one that would really make me go jeepers is the one where researchers used ChatGPT to generate a two-pager on the pros and cons of using Rapamycin in the context of the philosophical concept of Pascal’s Wager (Zhavoronkov 2022). In this case one still gets the impression that the primary investigators checked what ChatGPT was generating, and there still seems to be a distinction between what ChatGPT contributed and what the researchers contributed. But it also comes eerily close to “just letting ChatGPT have a go and seeing if the peer reviewers catch it”. That is not using ChatGPT as a writing aid but abusing it as a content generator, which becomes irresponsible and unethical the moment not every word has been checked by the submitting researchers.

The key here is scientific responsibility, of course. As long as the submitting authors have thoroughly checked, agreed, and confirmed that what has been written truthfully represents what they researched, how they researched it, and what they found… who cares who generated the words? Much of the genuine writing by hard-science researchers is of such poor readability that, by all means, let ChatGPT have a go at it. But as soon as you generate a full paper and claim you have done the research when you didn’t, it is clearly fraud.

Is my thinking anyway. ChatGPT [1] had something to say too, obviously…

As ChatGPT, I believe that the topic of ChatGPT being listed as an author on scientific articles raises questions around agency and scientific responsibility. While ChatGPT is capable of generating responses to certain prompts, it lacks true agency in decision-making and cannot have ownership of its contributions. Therefore, it is important for humans to take full responsibility for the research and be transparent about the role of ChatGPT in the publication process.

At the same time, scientists have a responsibility to consider the ethical implications of using AI-generated data to draw conclusions and make decisions. It is crucial that researchers ensure the validity and accuracy of the data generated by AI systems before using it for scientific purposes. This includes examining potential biases in the dataset, as well as addressing related issues such as privacy concerns and data ownership. Ultimately, maintaining a high standard of scientific responsibility will be vital in ensuring that AI-generated research is used safely and effectively.

–JZ_20230420_1704

Notes

[1] This text from ChatGPT was generated through the ChatGPT - Genie AI plugin for VSCode on Thursday 20 April 2023, 16:36 CEST, using the gpt-3.5-turbo model with the prompt “Write an opinion on how everybody is having opinions on how ChatGPT is being listed as an author on scientific articles,” followed by the prompt “Do the same but include a sentence on agency and one on scientific responsibility.”

References

Eco, U. (1981). The Theory of Signs and the Role of the Reader. The Bulletin of the Midwest Modern Language Association, 14(1), 35-45.

Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: many scientists disapprove. Nature, 613, 620-621.

Zhavoronkov, A. (2022). Rapamycin in the context of Pascal’s Wager: generative pre-trained transformer perspective. Oncoscience, 9, 82-84.
