AI Suggests 40,000 New Possible Chemical Weapons In Just Six Hours (theverge.com)
An anonymous reader quotes a report from The Verge: It took less than six hours for a drug-developing AI to invent 40,000 potentially lethal molecules. For a biological arms control conference, researchers put an AI normally used to search for helpful drugs into a kind of "bad actor" mode to show how easily it could be abused. All the researchers had to do was tweak their methodology to seek out, rather than weed out, toxicity. The AI came up with tens of thousands of new substances, some of which are similar to VX, the most potent nerve agent ever developed. Shaken, they published their findings this month in the journal Nature Machine Intelligence. The Verge spoke with Fabio Urbina, lead author of the paper, to learn more about the AI. When asked how easy it would be for someone to replicate the work, Urbina said it would be "fairly easy."
"If you were to Google generative models, you could find a number of put-together one-liner generative models that people have released for free," says Urbina. "And then, if you were to search for toxicity datasets, there's a large number of open-source tox datasets. So if you just combine those two things, and then you know how to code and build machine learning models -- all that requires really is an internet connection and a computer -- then, you could easily replicate what we did. And not just for VX, but for pretty much whatever other open-source toxicity datasets exist."
He added: "Of course, it does require some expertise. [...] Finding a potential drug or potential new toxic molecule is one thing; the next step of synthesis -- actually creating a new molecule in the real world -- would be another barrier."
As for what can be done to prevent this kind of misuse of AI, Urbina noted OpenAI's GPT-3 language model. People can use it for free but need a special access token to do so, which can be revoked at any time to cut off access to the model. "We were thinking something like that could be a useful starting point for potentially sensitive models, such as toxicity models," says Urbina.
"Science is all about open communication, open access, open data sharing. Restrictions are antithetical to that notion. But a step going forward could be to at least responsibly account for who's using your resources."
"If you were to Google generative models, you could find a number of put-together one-liner generative models that people have released for free," says Urbina. "And then, if you were to search for toxicity datasets, there's a large number of open-source tox datasets. So if you just combine those two things, and then you know how to code and build machine learning models -- all that requires really is an internet connection and a computer -- then, you could easily replicate what we did. And not just for VX, but for pretty much whatever other open-source toxicity datasets exist."
He added: "Of course, it does require some expertise. [...] Finding a potential drug or potential new toxic molecule is one thing; the next step of synthesis -- actually creating a new molecule in the real world -- would be another barrier."
As for what can be done to prevent this kind of misuse of AI, Urbina noted OpenAI's GPT-3 language model. People can use it for free but need a special access token to do so, which can be revoked at any time to cut off access to the model. "We were thinking something like that could be a useful starting point for potentially sensitive models, such as toxicity models," says Urbina.
"Science is all about open communication, open access, open data sharing. Restrictions are antithetical to that notion. But a step going forward could be to at least responsibly account for who's using your resources."