A new set of documents known as The OpenAI Files claims to reveal troubling internal dynamics at OpenAI and could shape how the world approaches artificial general intelligence (AGI) governance in the years ahead.
An Urgent Moment for AI Oversight
The release comes at a critical juncture. OpenAI CEO Sam Altman has stated publicly that AGI (AI systems capable of performing most human jobs) is likely to arrive within just a few years, and in a February 2024 blog post the company said it was “quite plausible that AI systems will outpace human expert skill levels in most domains within the current decade.”
Such predictions have fuelled both investment and anxiety: while the potential productivity gains from AGI are vast, so too are the risks, ranging from misinformation and bias to large-scale unemployment and misuse by malicious actors. Critics argue that the current leading AI companies, including OpenAI, are operating with too little external scrutiny.
That’s where The OpenAI Files come in. Curated by two US-based non-profit watchdog organisations, The Midas Project and the Tech Oversight Project, the archive aims to fill a growing accountability gap by exposing how OpenAI’s trajectory has diverged from its original non-profit mission.
Who’s Behind the Archive?
The Midas Project and the Tech Oversight Project describe themselves as independent technology watchdogs. Both are known for promoting stronger corporate accountability in Big Tech and for campaigning on issues such as data privacy, algorithmic bias, and monopoly power.
Their collaboration on The OpenAI Files resulted in a publicly accessible dossier of internal documents, board communications, statements, and media coverage, alongside over 10,000 words of commentary and contextual analysis. The goal, according to the Midas Project, is to “shed light on the ethical and governance failures at OpenAI that have broader implications for AI safety and democracy.”
Has OpenAI’s Founding Principle Shifted?
The central claim of the archive is that OpenAI has quietly shifted from its founding principle of building AI that benefits all of humanity to what is essentially a commercial structure prioritising investor returns. In 2015, OpenAI began as a non-profit with a mission to ensure AGI would be “used for the benefit of all.” However, critics say that since the company introduced a capped-profit model in 2019 to attract investment, and launched high-profile partnerships such as the one with Microsoft, it has become less transparent and more profit-driven.
The archive also revisits the dramatic 2023 ousting (and rapid reinstatement) of CEO Sam Altman by the OpenAI board. Internal tensions reportedly stemmed from disagreements over safety culture and the pace of development. The board’s lack of explanation at the time, followed by a shake-up that brought in pro-growth allies, raised concerns about whether safety was being sidelined.
One former board member, Helen Toner of Georgetown University’s Center for Security and Emerging Technology, is quoted in the archive alleging that Altman “withheld information” and “gave inaccurate information” to the board, an assertion he denies.
A Playbook for Responsible AI
Despite the retrospective tone of the materials, The OpenAI Files read less like a post-mortem and more like a call to action. The curators argue that this transparency can inform a better governance model for AGI, one that includes:
– Independent oversight of frontier AI companies.
– Binding commitments to public benefit.
– Worker and user representation in decision-making.
– Global cooperation on safety research and risk standards.
The Tech Oversight Project notes: “We need robust regulatory guardrails, but we also need a cultural shift—companies building AGI must be accountable to the public, not just shareholders.”
AI Developers and the Public
If adopted, such reforms would significantly alter how OpenAI and its peers operate. For example, developers may face slower release cycles, stricter testing requirements, and mandatory transparency mechanisms. Companies would also need to re-centre their objectives around the public interest, a commitment OpenAI once championed.
For users and society, these shifts could bring reassurance that powerful AI tools won’t be developed behind closed doors or guided solely by profit. They could also mean better protection against misuse, clearer redress mechanisms, and fairer access to AI-generated benefits such as job creation, medical breakthroughs, or educational access.
As AGI becomes less hypothetical and more imminent, the stakes are getting higher. In its own 2023 governance update, OpenAI acknowledged: “We don’t expect everyone to trust us by default. We plan to earn that trust.” The watchdog groups may agree but also argue that trust must be backed by verifiable commitments, not just promises.
Other Safeguards and Challenges
The archive’s release adds to growing momentum for external safeguards. For example, in the past year, governments and international organisations have stepped up efforts to regulate frontier AI. The UK held the first global AI Safety Summit in 2023, while the EU has finalised its AI Act, a comprehensive legal framework for high-risk systems. In the US, the Biden administration introduced an AI executive order in late 2023 calling for more audits and red-teaming (adversarial testing of system vulnerabilities).
There are also proposals from academics and policy experts for third-party licensing bodies, global AI treaties, and mandatory ethics boards inside AI labs.
That said, change won’t be easy. Major tech firms have pushed back against regulation, warning that overreach could stifle innovation. Critics of The OpenAI Files also point out that the documents reflect selective curation, not an exhaustive or balanced record. OpenAI itself has defended its structure, saying the capped-profit model allows it to raise capital while still pursuing safety goals. “We believe strongly in alignment research and broad benefit,” the company wrote in a recent update, adding that it has made safety “a core focus of our technical agenda.”
Even so, the release has clearly struck a nerve, sparking fresh debate over who should shape the future of AI, and on what terms.
What Does This Mean For Your Business?
The timing of The OpenAI Files places added pressure on AI leaders to re-examine not just their business models but their obligations to society. For OpenAI and others pushing towards AGI, transparency and public accountability are essential, not optional, for maintaining legitimacy in the eyes of governments, users, and regulators alike. The archive offers a detailed and accessible case study in how corporate structure, leadership decisions, and investor influence can shift priorities away from the public interest. Whether companies accept or resist the lessons outlined remains to be seen, but the conversation is clearly changing.
For UK businesses, the implications are wide-reaching. For example, as AI systems become more capable, more embedded, and potentially more autonomous, their influence on supply chains, labour, customer experience, and regulatory exposure will grow. Businesses may welcome AGI’s productivity gains, but only if they feel the technology is being developed responsibly and without hidden risks. Greater clarity on AI safety protocols, decision-making processes, and ethical frameworks could help smaller firms and public sector bodies feel more confident about adoption. It could also influence procurement choices, data handling policies, and the future of work more broadly.
Users, whether individuals or employees, may stand to gain or lose the most. A governance framework focused on ethical leadership and shared benefit could help protect against exploitative uses of AI, ensure wider access to new capabilities, and support democratic oversight as systems grow in complexity and power. That would require sustained effort from policymakers, watchdogs, and AI firms alike, as well as a shift away from the current reliance on self-regulation. The OpenAI Files may not offer all the answers, but they provide a detailed starting point for anyone serious about building a future where AGI development is guided by more than market momentum.