International AI Safety Report. The International Scientific Report on the Safety of Advanced AI. January 2025
- Liviu Poenaru
- Feb. 13, 2025
KEY INFORMATION CONCERNING LOSS OF CONTROL
• ‘Loss of control’ scenarios are hypothetical future scenarios in which one or more general-purpose AI systems come to operate outside of anyone’s control, with no clear path to regaining control. These scenarios vary in their severity, but some experts give credence to outcomes as severe as the marginalisation or extinction of humanity.
• Expert opinion on the likelihood of loss of control varies greatly. Some consider it implausible, some consider it likely to occur, and some see it as a modest-likelihood risk that warrants attention due to its high severity. Ongoing empirical and mathematical research is gradually advancing these debates.
• Two key requirements for commonly discussed loss of control scenarios are:
a. Markedly increased AI capabilities – First, some future AI systems would need specific capabilities (significantly surpassing those of current systems) that allow them to undermine human control.
b. The use of those capabilities in ways that undermine control – Second, some AI systems would need to employ these 'control-undermining capabilities', either because they were intentionally designed to do so or because technical issues produce unintended behaviour.
• Since the publication of the Interim Report (May 2024), researchers have observed modest progress towards the development of control-undermining capabilities. Relevant capabilities include:
Autonomous planning capabilities associated with AI agents
More advanced programming capabilities
Capabilities useful for undermining human oversight
• Managing potential loss of control could require substantial advance preparation despite existing uncertainties. A key challenge for policymakers is preparing for a risk whose likelihood, nature, and timing remain unusually ambiguous.