The Midas Project's new report evaluates follow-through on the AI Seoul Summit commitments signed by sixteen companies. Here's how they did.
“Governments and civil society have a critical role in ensuring that companies aren’t profiting from the reputational benefits of safety promises while failing to follow through on them”
— Jimmy Pham
SAN FRANCISCO, CA, UNITED STATES, February 19, 2025 /EINPresswire.com/ -- At the 2024 AI Seoul Summit, sixteen major AI companies made promises to the governments of the United Kingdom and South Korea, as well as the public at large, to implement “red line” risk evaluation policies for frontier AI models.
However, a new report from The Midas Project has found that the majority of these companies have fallen short of those promises.
The Seoul Commitment Tracker by The Midas Project offers details on the sixteen leading technology organizations that pledged to implement risk evaluation policies by February 10th, 2025. None of the AI companies in the report earned higher than a “B-” rating. Six organizations earned an “F” rating.
“This assessment highlights both progress and remaining gaps in responsible AI development,” said Jimmy Pham, Program Manager at The Midas Project. “As the industry continues to mature, governments and civil society have a critical role in ensuring that companies aren’t profiting from the reputational benefits of safety promises while failing to follow through on them.”
The new report found that ten of the sixteen signatories, including Meta, xAI, and Mistral AI, failed to publicly fulfill every component of the Seoul commitment. Across all companies, including OpenAI and Microsoft, the report found that risk thresholds were vague and non-comprehensive, and that more work is needed to strengthen AI risk management practices. Of the companies graded, Anthropic received the highest rating, a “B-.”
Still, the report found some strengths in the policies that have been released so far. According to the report, many companies, including OpenAI, Google, and Anthropic, specifically address serious risks that AI safety experts have warned about, including misuse in the domains of biological weapons and cybersecurity, as well as risks that arise in AI deployment, such as model autonomy and deceptive alignment.
“The commitments made at the AI Seoul Summit were an extraordinary step forward for responsible AI development. The time has come for each of the signatories to follow through,” said Tyler Johnston, Executive Director at The Midas Project.
For complete details of each organization’s rating, visit the Seoul Commitment Tracker webpage.
The Midas Project is a watchdog nonprofit working to ensure that AI technology benefits everybody, not just the companies developing it. The organization leads strategic initiatives to monitor technology companies, promote transparency in AI development, discourage corner-cutting, and advocate for the responsible development of emerging technologies.