Sam Altman, co-founder and CEO of OpenAI, says the company knows how to build artificial general intelligence, expects AI agents to enter the job market by 2025, and is now focused on achieving artificial superintelligence.
Altman shared his thoughts on the company’s progress in a blog post reflecting on events that challenged OpenAI’s governance model and required structural changes. He elaborated on the period in which he was briefly removed from the organization, calling it an oversight failure that challenged core leadership principles. He also recognized individuals who worked behind the scenes to stabilize operations, noting how these developments reaffirmed the need for a framework that can accommodate evolving technology and high capital requirements.
Skepticism from the sector about AGI in 2025
OpenAI’s claim that it knows how to achieve AGI, traditionally defined, has drawn both criticism and praise. Some experts point to surveys indicating only a 50% chance of high-level machine intelligence arriving closer to the 2050s, Information Week reported. Others question whether breakthroughs in autonomous learning and fully transparent reasoning can occur so quickly. Altman’s timeline for AI agents entering the workforce has sparked debate, though Salesforce is already pushing the concept with its Agentforce product. He states that iterative releases and direct user feedback will improve safety while advancing functionality.
According to Altman, OpenAI’s ambitions extend beyond AGI toward superintelligence, a step that some observers see as inevitable for advanced AI systems. Altman wrote that superintelligent tools could lead to far-reaching discoveries in science and engineering. He believes that research into prudent deployment and coordination can address the ethical considerations involved. Many wonder how quickly these technologies will impact global industries, including the crypto markets, where AI-driven automation and analytics are already making waves with current technology.
Still, skepticism persists around claims of short-term AGI. Critics highlight the challenges of enabling true understanding, cross-domain reasoning, and context awareness. Altman claims that each model update, combined with user interactions, helps refine output and guide safety mechanisms. I personally predicted that AGI would not be realized in 2025 among my other New Year’s predictions, as the gap between current models and AGI appears too wide to bridge in twelve months.
Altman also discussed OpenAI’s shift from pure research to a product-oriented approach. He noted that the original assumptions about capital requirements and theoretical targets needed to be reexamined once user adoption accelerated. He says this iterative approach reflects OpenAI’s commitment to helping society adapt to emerging tools without holding back development.
Altman concluded that OpenAI’s mission remains consistent despite recent turmoil. He believes that superintelligence will ultimately expand human potential and tackle complex challenges. He added that broad adoption and community feedback will help shape practical safeguards, with many in the AI sector keeping a close eye on OpenAI’s progress.