AI Futures Project
The Rundown: Former OpenAI researcher Daniel Kokotajlo and the AI Futures Project published “AI 2027,” predicting advancement to superhuman AI within two years, potentially triggering an intelligence explosion with consequences for humanity.
The details:
The report outlines a timeline starting with increasingly capable AI agents in 2025, evolving into superhuman coding systems and then full AGI by 2027.
The paper details two scenarios: one where nations push ahead despite safety concerns, and another where a slowdown enables better safety measures.
The authors project that superintelligence could compress years of technological progress into each week, dominating the global economy by 2029. The scenarios highlight geopolitical risks, AI's deployment into military systems, and the need to understand these systems' internal reasoning.

Kokotajlo left OpenAI in 2024 and led the "Right to Warn" open letter, criticizing leading AI labs for neglecting safety and failing to provide whistleblower protections.
Why it matters: While many dismiss AGI and ASI predictions, this forecast comes from researchers with direct, insider experience at leading AI labs. These scenarios suggest we may have only a brief window to ensure AI remains controllable before it surpasses our abilities — making current safety and policy decisions critically important.