
OpenAI Unveils Framework to Assess and Mitigate AI Risks
A month after CEO Sam Altman's temporary ouster and the company's subsequent restructuring, OpenAI has made an important announcement. Altman had drawn criticism from board members, employees, and investors for prioritizing OpenAI's rapid development. In the newly published "Preparedness Framework", OpenAI acknowledges that the destructive risks of AI have not been adequately studied and introduces measures to fill this gap.
The framework aims to address the risks associated with advanced AI models, particularly those that exceed the capabilities of today's systems. A monitoring and evaluation team, formed in October, will review these "frontier models" and classify their level of risk, from "low" to "critical", across four key areas.
Deployment is limited to models with a risk score of "medium" or below. The first category focuses on cybersecurity, evaluating the model's capacity to enable large-scale cyber attacks. The second assesses its potential to help create harmful materials such as chemical agents, biological organisms (e.g., viruses), or nuclear weapons.
The third category examines the model's persuasive power, considering its ability to influence human behavior. The fourth addresses the model's autonomy, examining whether it can escape the control of its creators. OpenAI's Preparedness Framework is an important step toward managing the risks associated with advanced AI technologies.
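To make the deployment rule concrete, here is a minimal Python sketch of the gating logic as described above: each of the four tracked categories receives a rating from "low" to "critical", and a model is cleared for deployment only if no category exceeds "medium". The names used here (RiskLevel, deployment_allowed, the category labels) are illustrative assumptions, not OpenAI's actual tooling or scoring method.

```python
from enum import IntEnum


class RiskLevel(IntEnum):
    """Risk ratings from the article, ordered from least to most severe."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


# The four tracked areas described in the article (labels are illustrative).
CATEGORIES = ("cybersecurity", "cbrn", "persuasion", "autonomy")


def deployment_allowed(scores: dict[str, RiskLevel]) -> bool:
    """Return True only if every tracked category scores 'medium' or below."""
    missing = set(CATEGORIES) - scores.keys()
    if missing:
        raise ValueError(f"Missing risk scores for: {sorted(missing)}")
    return max(scores[c] for c in CATEGORIES) <= RiskLevel.MEDIUM


# Example: a model rated "high" on persuasion would not be cleared for deployment.
example = {
    "cybersecurity": RiskLevel.LOW,
    "cbrn": RiskLevel.MEDIUM,
    "persuasion": RiskLevel.HIGH,
    "autonomy": RiskLevel.LOW,
}
print(deployment_allowed(example))  # False
```

Treating the highest category rating as the model's overall score mirrors the article's description that a single "high" or "critical" area blocks deployment; OpenAI's actual evaluation process is, of course, more involved than this sketch.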