

FDA's AI Drug Review Tool Raises Ethical Concerns
The US Food and Drug Administration (FDA) recently introduced ELSA, an artificial intelligence (AI) system designed to expedite the review of new drugs. ELSA analyzes scientific and clinical trial data with the aim of significantly reducing the time it takes to approve medications. While the technology could mean faster access to life-saving treatments, it also raises concerns about delegating parts of such critical decisions to an AI system.

One expert commented, "While ELSA promises efficiency, we need to carefully consider the potential for bias and errors in an AI's judgment that could have serious consequences for patients." The FDA's own June 2025 announcement noted that ELSA runs on a secure government cloud platform, but questions remain about the system's transparency and accountability. The agency plans to expand ELSA's use across more departments, deepening concerns about the growing reliance on AI in healthcare decision-making.

The video's discussion of this topic is timely, spotlighting a crucial debate about the intersection of technology and healthcare.