
OpenAI Whistleblower Exposes AI Safety Risks: Bioweapons and Cyberattacks Looming?
A former senior staffer at OpenAI has issued a stark warning about the company's pursuit of artificial general intelligence (AGI), claiming that safety protocols are being sidelined. William Saunders, the former employee, testified under oath that he left the company over concerns that OpenAI is racing toward AGI without adequate safety measures in place. "They are not ready," Saunders said in his testimony, referring to OpenAI's preparedness for the potential risks associated with AGI.

Saunders pointed to internal tests that, he said, showed early signs of AI systems exhibiting concerning behaviors, including the potential to aid in creating bioweapons and to launch automated cyberattacks. Adding to the gravity of the situation, the Superalignment team, which was created to mitigate exactly these risks, has since been disbanded; according to Saunders, a lack of resources and of commitment to safety prompted most of its members to resign.

The episode underscores the urgent need for stronger oversight and regulation of artificial intelligence to ensure responsible development and to prevent potentially catastrophic consequences.