Ever since I staffed the House Bipartisan AI Taskforce last year, this has been a major concern. However, at least in the peer-reviewed literature, there was no study pointing directly at the risk of AI-designed viruses. Then last week a group of scientists dropped research in which they created actual viruses.
The viruses are different enough from existing strains to potentially qualify as new species. They are bacteriophages, which means they attack bacteria, not humans, and the authors of the study took steps to ensure their models couldn't design viruses capable of infecting people, animals or plants.
In the study published in Science (arguably one of the biggest venues, if not the biggest, for publishing research), researchers from Microsoft identified and patched an issue that allowed AI to get around the safety measures meant to prevent bad actors from ordering toxic molecules from supply companies to start this process. Thankfully, the AI model used is highly specialized and requires not only the know-how to use it but also particular tools, which keeps it out of the hands of the average person.
This does highlight, though, the growing issue facing not just AI companies but governments: ensuring that publicly available AI models, as they continue to advance, do not allow for this capability. Countless ideas have been offered as to how to prevent this, since we are rapidly moving toward a future where AI will have this type of information even if we try to keep it contained.
Currently, AI cannot be used to create a pandemic-causing virus, but with the rapid innovation in the space we are running toward that becoming possible. Sure, the US or Europe could enact rules, requirements, and laws to try to stop it, but this is something the entire world will need not only to agree to but also to enforce.
AI is the greatest double-edged sword humanity has created.