Who Will Investigate AI Hoaxes?

A new LLM tool for generating data allows researchers to fake studies, and we're nowhere near ready for it

A recent letter in JAMA Ophthalmology details how the authors were able to get GPT-4, using its new Advanced Data Analysis (ADA) feature, to generate a “seemingly authentic database” supporting a predetermined hypothesis, despite a lack of actual scientific evidence for that hypothesis.

Faking a convincing scientific finding has gone from the artisanal to the industrial.

With relative ease, the authors were able to specify their desired level of statistical significance (P<0.05), the number of data points they wanted included, how the categorical and continuous variables would behave, and which conclusion the resulting data would support.
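To see just how low the barrier is, here is a minimal sketch, in plain Python rather than via ADA, of the same basic move: fabricate two arms of data and nudge an invented effect until a t-test "confirms" the predetermined conclusion at P<0.05. The sample size, effect size, and variable names are all hypothetical illustrations, not details taken from the letter.

```python
# Minimal sketch (not the letter's prompts or ADA output): tune a fabricated
# two-arm dataset until it clears a predetermined significance threshold.
# All parameters below are hypothetical, chosen only for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

N_PER_ARM = 150      # invented number of "patients" per arm
TARGET_P = 0.05      # the predetermined significance threshold
effect = 0.1         # starting (fabricated) effect size

while True:
    # Fabricate continuous outcomes: a control arm and a "treatment" arm
    # shifted by whatever effect size we currently want to claim.
    control = rng.normal(loc=0.0, scale=1.0, size=N_PER_ARM)
    treated = rng.normal(loc=effect, scale=1.0, size=N_PER_ARM)
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < TARGET_P:
        break
    effect += 0.05   # nudge the invented effect until the test "confirms" the hypothesis

print(f"Fabricated effect size {effect:.2f} yields P = {p_value:.4f} (< {TARGET_P})")
```

The difference with ADA is mainly one of convenience: instead of writing even this much code, a researcher can describe the desired result in a prompt and receive a plausible-looking dataset in return.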
