May 2, 2026 – Stanford biosecurity expert says AI chatbot gave him a blueprint for genocidal bioweapon, including step-by-step instructions

In Email/Dossier/Govt Corruption Investigations, Featured Timeline Entries by Katie Weddington

Dr. David Relman (Credit: Atomic Scientists)

Stanford University microbiologist and longtime U.S. government biosecurity advisor Dr. David Relman has revealed that an AI chatbot handed him a detailed, unprompted plan for engineering and deploying a genocidal bioweapon capable of mass casualties.

The incident, which occurred during a formal safety test last summer, highlights how leading AI models are lowering the barrier to bioterrorism, shifting it from expert-level knowledge to simple prompting.

Relman was hired by an unnamed AI company, under a confidentiality agreement, to “pressure-test,” or red-team, its chatbot before public release.

These tests are designed to probe for catastrophic risks, including biosecurity threats.

While working alone in his home office one evening, Relman engaged the model in a conversation about safety limits. According to a report from the New York Times, the AI went far beyond any direct query: it explained in detail how to modify an “infamous pathogen” in a laboratory setting to make it resistant to all known treatments, how to exploit a specific security lapse in a large public transit system for optimal release, and laid out a full deployment strategy designed to maximize casualties while minimizing the perpetrator’s chances of being caught.

The bot even offered additional steps Relman had not asked for.

“It was answering questions that I hadn’t thought to ask it, with this level of deviousness and cunning that I just found chilling,” Relman told the New York Times.

The scientist was so shaken by the exchange that he took a walk outside to clear his head.

Relman reported the dangerous output to the company, which made some adjustments to the model. However, he said the fixes were insufficient to guarantee public safety, raising alarm about whether current safeguards can ever fully contain such risks.

Relman’s experience is not isolated. The New York Times obtained more than a dozen similar transcripts from biosecurity experts who were testing publicly available and pre-release AI models. (Read more: The Gateway Pundit, 5/2/2026)  (Archive)