
Tools such as ChatGPT threaten transparent science

“Any AI smart enough to pass a Turing test is smart enough to know to fail it.”

 Ian McDonald, science fiction author

The media blitz around the large language model (LLM) ChatGPT has prompted this commentary from the journal Nature proposing several “ground rules” for its use. LLMs such as GPT-3 (Generative Pre-trained Transformer 3) have grown increasingly human-like in recent years, and humanity collectively hears the “AI’s footsteps” in the language domain. Perhaps the most concerned are the stakeholders in the bastions of academia, with their long-standing traditions of “original thinking”, albeit sometimes without true innovation.

ChatGPT, an LLM developed by OpenAI in San Francisco, has reached a new pinnacle of language capability in artificial intelligence. It is good enough that it may very well replace traditional search engines soon, as its answers are far more detailed and substantive across a wide range of queries. ChatGPT is sophisticated enough to have passed law and medical examinations and even to have generated computer code. It has even been granted authorship on research papers in recognition of its essential role in those projects.

Perhaps with some degree of human hubris (by the way, does AI have hubris?), Nature’s editorial comment (unsigned, one notes) states emphatically, with a tone of self-indulgent paternalism, that the publication world needs to “lay down ground rules about the use of LLMs”. The two rules are:

1) No LLM tool will be accepted as a credited author on a research paper.

2) Researchers using LLM tools should document this use in the methods or acknowledgments sections.

These two rules present something of a paradox: while Nature feels it needs to know whether an LLM was involved, it does not feel the LLM should receive credit as an author or contributor.

Perhaps someday we human clinicians and researchers will give our artificial intelligence partners an equal place in our research domain, learning and teaching together as an intimate dyad with mutual trust. There may even come a day when an LLM will monitor human researchers for ground truth.

Read the full paper here
