Researchers at the Oxford Internet Institute are warning the scientific community about the dangers posed by the tendency of Large Language Models (LLMs) used in chatbots to “hallucinate.” These models are designed to generate helpful and convincing responses without any guarantee of accuracy or alignment with fact, and can therefore pose a direct threat to science and scientific truth.

The paper, published in Nature Human Behaviour, emphasizes that LLMs are not foolproof and should not be treated as a reliable source of information: they can generate false content and present it as accurate.

One reason for this is that LLMs often draw on online sources, which can contain false statements, opinions, and inaccurate information. Because LLMs are designed as helpful, human-sounding agents, users tend to trust them as a human-like source of information. That trust can lead users to accept responses as accurate even when they have no basis in fact or present a biased or partial version of the truth.

The researchers urge the scientific community to use LLMs responsibly and to maintain clear expectations of how they can contribute. They suggest using LLMs as “zero-shot translators”: users provide the model with appropriate data and ask it to transform that data into a conclusion or code, rather than relying on the model itself as a source of knowledge. This makes it easier to verify that the output is factually correct and consistent with the provided input, as sketched below.
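To illustrate the idea, here is a minimal sketch in Python of the “zero-shot translator” pattern, not a prescription from the paper itself: the user supplies the data, the model is asked only to reformat it, and the output is checked mechanically against that same input. The call_llm function and the measurement values are hypothetical placeholders standing in for whatever LLM client and dataset a researcher actually uses.

import json

def call_llm(prompt: str) -> str:
    # Placeholder for a real chat-completion call to your LLM provider.
    # Here it returns a canned, correct translation so the sketch runs end to end.
    return "sample,value\nsample_a,12.4\nsample_b,11.9\nsample_c,13.1"

# The data comes from the user, not from the model's internal "knowledge".
measurements = {"sample_a": 12.4, "sample_b": 11.9, "sample_c": 13.1}

prompt = (
    "Convert the following JSON of sample measurements into a CSV with the "
    "columns 'sample' and 'value'. Use only the data provided:\n"
    + json.dumps(measurements)
)

csv_text = call_llm(prompt)

def verify(csv_text: str, source: dict) -> bool:
    # Because the input is known, verification is mechanical: every output row
    # must correspond to a supplied key/value pair, with nothing invented.
    rows = [line.split(",") for line in csv_text.strip().splitlines()[1:]]
    recovered = {name.strip(): float(value) for name, value in rows}
    return recovered == source

print(verify(csv_text, measurements))  # True if the model acted as a pure translator

The point of the pattern is not the specific transformation but the checkability: since the model never supplies facts of its own, any hallucinated content shows up as a mismatch against the provided input.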

While LLMs will undoubtedly assist with scientific workflows, it is crucial for the scientific community to use them wisely and not blindly rely on their output as accurate information.

In conclusion, while large language models have become an essential tool in fields such as chatbots, natural language processing (NLP), and other AI applications, caution is needed when using them. Researchers at the Oxford Internet Institute warn that their tendency to generate false content and present it as accurate poses a direct threat to science and scientific truth. Users should be aware of these risks, use LLMs responsibly, and maintain clear expectations of how they can contribute.

By Editor
