Testing can’t keep up with rapidly advancing AI systems: AI Safety Report

The findings came as enterprises accelerated adoption of general-purpose AI systems and AI agents, often relying on benchmark results, vendor documentation, and limited pilot deployments to assess risk before wider rollout.


Capabilities improved rapidly, but unevenly

Since the previous edition of the report was published in January 2025, general-purpose AI capabilities have continued to improve, particularly in mathematics, coding, and autonomous operation, the report said.

Under structured testing conditions, leading AI systems achieved “gold-medal performance on International Mathematical Olympiad questions.” In software development, AI agents became capable of completing tasks that would take a human programmer about 30 minutes, up from tasks of under 10 minutes a year earlier.
