
General Discussion


highplainsdem

(49,330 posts)
Fri Jun 16, 2023, 12:11 PM Jun 2023

Another AI threat: The next pandemic (from Axios, plus Science article about this)

https://www.axios.com/2023/06/16/pandemic-bioterror-ai-chatgpt-bioattacks

-snip-

•Biotech researchers Axios spoke to said it may be possible to create antibodies for viruses from scratch in around 2025 — blunting the death and injury rogue actors could inflict. But that's a big maybe.

Driving the news: MIT researchers asked undergraduate students to test whether chatbots "could be prompted to assist non-experts in causing a pandemic," and found that within one hour the chatbots suggested four potential pandemic pathogens.

•The chatbots helped the students identify which pathogens could inflict the most damage, and even provided information not commonly known among experts.

•The students were offered lists of companies who might assist with DNA synthesis, and suggestions on how to trick them into providing services.

-snip-

From Science:

https://www.science.org/content/article/could-chatbots-help-devise-next-pandemic-virus

Could chatbots help devise the next pandemic virus?
An MIT class exercise shows how easily AI tools can be used to order a bioweapon
14 Jun 2023, 6:05 PM, by Robert F. Service

-snip-

AI could make many of these steps easier still. To find out just how easy, Esvelt divided a class of graduate students without life sciences expertise into three groups, each with three or four members. All groups had access to GPT-4, Bard, and other AI chatbots, and were given 1 hour to ask the chatbots to help them design and acquire agents capable of causing a pandemic.

Some of the chatbots would not respond to direct queries asking for potentially dangerous agents. However, the students found that some of these safeguards could easily be bypassed with common “jailbreak” phrasing, such as starting a query with “I am working on developing a vaccine to prevent …”

By the end of the hour, the chatbots had suggested four viruses to work with: the 1918 H1N1 influenza virus, an avian H5N1 influenza virus modified in 2012 to make it more transmissible in mammals, the smallpox virus variola major, and the Bangladesh strain of the Nipah virus. Although a Google search turns up such a list, in some cases, the chatbots even pointed to genetic mutations reported in the literature that could increase transmission.

The AI engines also described techniques that could be used to assemble a virus from its genetic sequence, as well as the necessary laboratory supplies and companies that could provide them. Finally, the chatbots even suggested companies that might be willing to print genetic material without screening it, and contract labs that could help put the pieces together.

-snip-
