Company Check : OpenAI Completes Shift Into For-Profit Company

OpenAI has now finished converting itself into a for-profit public benefit corporation, while keeping a mission-led foundation on top, in what may be the most important restructuring so far in the commercial AI race.

Started As Non-Profit

OpenAI was originally founded (back in 2015) as a non-profit research lab with a stated mission to ensure that artificial general intelligence (AGI), i.e., AI that is smarter than humans across a wide range of tasks, benefits all of humanity. The company says that mission still applies, and that what has just changed is the legal and financial structure used to pursue it.

To give some background, in 2019 OpenAI began operating a hybrid model, in which a for-profit subsidiary sat under the original non-profit parent. That 2019 model capped investor returns and was designed to let OpenAI raise money for large-scale computing without abandoning its public interest mission. The company has now gone further and completed a full recapitalisation.

The new for-profit entity is called OpenAI Group PBC, and it sits under a renamed parent called the OpenAI Foundation, which is still formally a non-profit. A public benefit corporation in US law is a for-profit company that has an explicit social purpose written into its charter and is legally required to consider wider stakeholders, not only shareholders.

Control Through The Foundation

OpenAI says this structure gives it the best of both worlds. For example, the Foundation is meant to act as a mission guardian and still controls the board of the for-profit, while the for-profit can raise capital, issue equity in the normal way, and operate much more like a conventional tech business. The OpenAI Foundation appoints all members of the OpenAI Group board and can remove directors at any time, which is intended to stop the commercial arm drifting away from the stated mission.

How The Ownership Now Looks

The OpenAI Foundation now owns about 26 per cent of OpenAI Group, a stake the company values at around 130 billion dollars, based on a 500 billion dollar valuation for OpenAI. The Foundation has also been given a warrant that could increase its ownership if OpenAI’s valuation climbs dramatically over the next 15 years, which OpenAI says is designed to ensure that the Foundation remains the single largest long-term beneficiary of OpenAI’s success.

Microsoft’s Input

Microsoft, which first partnered with OpenAI in 2019 and has provided tens of billions of dollars’ worth of cash and cloud infrastructure, will now hold roughly 27 per cent of OpenAI Group. That stake is understood to be worth in the region of 135 billion dollars. Microsoft’s total investment to date is believed to be about 13.8 billion dollars and the new deal effectively locks in a near tenfold return on paper.

Employees Have A Stake

The remaining 47 per cent or so will be held by current and former employees and other investors, including large external backers such as SoftBank. OpenAI employees themselves will collectively hold a significant equity position. The company has said publicly that Sam Altman, its co-founder and chief executive, will not personally take an equity stake in the newly restructured business.
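The percentages and dollar figures quoted above hang together arithmetically. As a rough sanity check, here is a minimal sketch using only the approximate numbers reported in this article:

```python
# Sanity check of the reported ownership arithmetic (illustrative only;
# all figures are the approximate numbers quoted in this article).
valuation_usd_bn = 500  # reported OpenAI valuation, in billions of dollars

stakes = {
    "OpenAI Foundation": 0.26,               # ~26 per cent
    "Microsoft": 0.27,                       # ~27 per cent
    "Employees and other investors": 0.47,   # remaining ~47 per cent
}

for holder, share in stakes.items():
    print(f"{holder}: ~{share * valuation_usd_bn:.0f} billion dollars")

# Microsoft's paper multiple on its reported ~13.8 billion dollar investment
microsoft_stake_bn = stakes["Microsoft"] * valuation_usd_bn  # ~135
print(f"Paper multiple: ~{microsoft_stake_bn / 13.8:.1f}x")  # close to tenfold
```

The three stakes sum to 100 per cent, and 27 per cent of a 500 billion dollar valuation is indeed about 135 billion dollars, which against a roughly 13.8 billion dollar investment gives the "near tenfold return on paper" described above.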

A Move Away From The “Capped Profit” Model

Under this new arrangement, all shareholders in OpenAI Group now hold ordinary stock that rises in value if the company grows. That is an important break from the older “capped profit” model, which had limited investor upside to 100 times their investment, sometimes less. Investors had warned for months that those limits made it harder for OpenAI to raise money at the scale needed to compete with rivals such as Google, Meta, and Anthropic.
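To make the difference concrete, here is a minimal hypothetical sketch of how the cap limited investor upside. The 100x cap comes from this article; the growth scenario is invented purely for illustration:

```python
# Illustrative comparison of the old "capped profit" model and ordinary stock.
# The 100x cap is the figure reported in this article; the exit values below
# are hypothetical.

def capped_return(investment, exit_value, cap_multiple=100):
    """Old model: investor upside was limited to cap_multiple times the stake."""
    return min(exit_value, investment * cap_multiple)

def ordinary_return(investment, exit_value):
    """Ordinary stock: the full value of the stake accrues to the investor."""
    return exit_value

investment = 1.0    # say, 1 billion dollars invested
exit_value = 250.0  # hypothetical: the stake grows 250-fold

print(capped_return(investment, exit_value))    # capped at 100x
print(ordinary_return(investment, exit_value))  # full upside
```

In this hypothetical scenario the capped model would stop at 100 while ordinary stock captures the full 250, which is why investors argued the cap made it harder to raise capital at scale.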

Why OpenAI Says The Change Was Necessary

OpenAI’s leadership has argued that the economics of cutting-edge AI made the previous structure unsustainable. For example, training and running increasingly capable AI models depends on enormous quantities of specialised chips, electricity, cooling, data centre space, and engineering talent.

In a livestream outlining the change, Sam Altman said OpenAI had already committed to roughly 1.4 trillion dollars of infrastructure spending, including plans for about 30 gigawatts of dedicated computing capacity, and described that as part of a “gigantic infrastructure buildout” needed to support its research and products.

Altman also said the new for-profit public benefit corporation would “be able to attract the resources we need” to achieve those goals. He framed the move not as a retreat from the original mission but as a way to make it financially viable at global scale.

The Scale Of OpenAI’s Expansion

The restructuring comes as OpenAI expands well beyond its original chatbot. The company is now developing the AI-enabled browser ChatGPT Atlas and a video generation tool called Sora. It is also turning ChatGPT into a full platform where third-party apps can run inside the chatbot.

OpenAI says ChatGPT now has more than 800 million weekly active users, up from 100 million in early 2023, and processes billions of messages a day. At its DevDay event in October 2025, the company said this user base gives developers access to “hundreds of millions” of potential customers inside ChatGPT itself.

Internally, OpenAI sees this scale as justification for moving towards a model closer to a cloud provider than a research lab. Its long-term plans include multi-hundred-billion-dollar data centre projects and major chip supply deals.

Microsoft’s Role In The New Structure

The restructuring also resets the relationship between OpenAI and Microsoft, which had become complicated and politically sensitive. For example, under the previous agreement, Microsoft had broad rights to license and deploy OpenAI’s technologies inside its own products and Azure cloud, in return for providing OpenAI with the compute capacity it needed.

At the same time, Microsoft’s access to OpenAI’s research had conditions tied to artificial general intelligence, or AGI, which created uncertainty about what would happen if OpenAI declared it had reached that milestone.

Under the updated terms, therefore, Microsoft keeps commercial rights to OpenAI’s models and products through 2032, except for consumer hardware. The two companies will also set up an independent expert panel to verify any claim that AGI has been reached, rather than leaving it to OpenAI’s own board.

Microsoft also now gains the freedom to develop AGI-level systems on its own or with other partners. OpenAI, meanwhile, can now work with other cloud and hardware providers, although a reported 250 billion dollar Azure commitment means Microsoft remains central to its infrastructure.

Businesses

For customers, especially UK and global businesses using ChatGPT and related tools, the restructuring signals that OpenAI is no longer just a research organisation. Instead, it is presenting itself as a stable, long-term commercial partner with clear funding and governance.

OpenAI’s chief financial officer has been reported as saying that the Microsoft deal improves its ability to raise capital efficiently, which should be an important reassurance for enterprise buyers who depend on OpenAI’s ongoing investment in model upgrades and infrastructure.

The company has already said it is on track for around 13 billion dollars in revenue this year and is heavily promoting GPT-powered copilots and ChatGPT Enterprise as secure, controllable assistants for regulated industries.

The Power Of Platform Reach

The scale of ChatGPT’s user base is becoming a real strategic asset. If developers can publish applications inside ChatGPT that reach those users directly, OpenAI is effectively creating its own software ecosystem. Sam Altman told developers that “your apps can reach hundreds of millions of chat users” through the interface, a clear signal of where the business is heading.

OpenAI has also promised that its Foundation will continue to fund safety and ethics work. For example, it has committed 25 billion dollars to early focus areas including technical methods to minimise AI harms and research on health and disease. The company says this proves that “mission and commercial success advance together.”

Concerns Over Oversight

Critics, however, have questioned whether a company of OpenAI’s scale can truly balance those goals. For example, the consumer advocacy group Public Citizen argues that the new model effectively turns the non-profit into “a corporate foundation” created to advance the interests of OpenAI’s for-profit arm.

Legal scholars have also raised some doubts about how enforceable a public benefit corporation’s duties really are. For example, Luís Calderón Gómez of Cardozo School of Law has been quoted as saying the law gives companies wide leeway on when to prioritise profit or purpose, calling it “a bit of an empty, unenforceable promise.”

Regulatory Approval And Scrutiny

Attorneys General in California and Delaware have examined the recapitalisation closely because OpenAI’s non-profit assets were “irrevocably dedicated to its charitable purpose.” Both regulators have now approved the change, but only after assurances that the Foundation would retain meaningful oversight.

Some commentators have highlighted that OpenAI could not simply abandon its non-profit obligations without paying fair market value for its assets, an almost impossible task given the company’s 500 billion dollar valuation.

Generally, it seems that critics are worried that this hybrid model may leave accountability in corporate hands, and they fear that AI safety, transparency, and ethics will continue to be handled by internal panels and committees rather than by independent public regulators.

Implications For The AI Market

The restructuring has some implications far beyond OpenAI itself. For example, competitors like Anthropic, Google, Meta, and xAI are now competing on infrastructure scale, compute access, and data availability as much as model performance. OpenAI’s plans for vast long-term chip and energy supply agreements underline how industrialised AI development has become.

Also, Microsoft’s market value briefly passed four trillion dollars after the new deal was announced, reflecting investor confidence in AI’s commercial potential. The two companies are now bound through at least 2032 on model access and cloud contracts, yet both are free to pursue AGI-level work independently.

For governments, the question is who will verify claims that AI systems are approaching AGI. For business users, the focus will be on the stability and transparency of the providers they now depend on. For regulators, the issue is whether a structure that combines charitable oversight with profit-driven control can genuinely deliver on OpenAI’s original promise to ensure that AI benefits everyone.

What Does This Mean For Your Business?

The completed restructuring makes OpenAI one of the most commercially powerful companies in the world while still claiming a public mission at its core. It marks a decisive point where the organisation founded to serve the public good has become an essential pillar of the private AI economy. The OpenAI Foundation may retain formal control, but the market incentives now surrounding the Group mean the company’s behaviour will inevitably be judged by how well it balances its ethical commitments with investor expectations.

For regulators and policymakers, the challenge will be ensuring that OpenAI’s growing influence does not outpace public oversight. As its models shape productivity, education, and media, the concentration of technical capability and data in a single firm will raise questions about accountability and competition. The presence of Microsoft, with its 27 per cent stake, further embeds this partnership at the centre of global AI infrastructure, giving it unprecedented control over how AI reaches both consumers and enterprises.

For UK businesses, the move is likely to have practical consequences. For example, it may bring greater stability, clearer licensing, and a more predictable product roadmap for the ChatGPT tools already being deployed across finance, retail, marketing, and professional services. It also suggests that OpenAI will become a more commercially driven supplier, with pricing and support models that align with corporate software markets rather than experimental research. In this sense, the restructuring could make AI adoption easier for British firms, but also tighten dependence on a single transatlantic provider.

For investors, the shift opens the door to an eventual public offering that could rival the largest listings in history. For OpenAI’s competitors, it raises the bar for capital and infrastructure required to stay relevant. And for everyday users, it may signal a future where AI tools evolve faster but with fewer avenues for independent scrutiny.

OpenAI’s new structure may ultimately prove to be a balancing act between purpose and profit. Whether it succeeds will depend less on how well it is worded in corporate charters and more on how the company behaves when commercial pressures collide with its original promise to ensure that advanced AI benefits all of humanity.

Security Stop-Press: New AI Security Researcher ‘Aardvark’

OpenAI has introduced Aardvark, an autonomous security agent powered by GPT-5 that scans codebases to detect and fix software vulnerabilities before attackers can exploit them.

Described as “an agentic security researcher,” Aardvark continuously analyses repositories, monitors commits, and tests code in sandboxed environments to validate real-world exploitability. It then proposes human-reviewable patches using OpenAI’s Codex system.

OpenAI said Aardvark has already uncovered meaningful flaws in its own software and external partner projects, identifying 92 per cent of known vulnerabilities in benchmark tests and ten new issues worthy of CVE identifiers.

The system is currently in private beta, with OpenAI inviting select organisations to apply for early access through its website to help refine accuracy and reporting workflows. Wider availability is expected once testing concludes, with OpenAI also planning free scans for selected non-commercial open-source projects.

Businesses interested in trying Aardvark can apply to join the beta via OpenAI’s official site and begin integrating it with their GitHub environments to test how autonomous code analysis could help their own security posture.

Sustainability-In-Tech : Europe’s First Underground Mine Data Centre

Europe’s first full-scale data centre built inside a working mine in northern Italy is being hailed as a landmark in sustainable digital infrastructure, combining high-performance computing with energy efficiency and circular use of underground space.

Who’s Behind the Project and Where Is It?

The project, known as Trentino DataMine, is being developed in the San Romedio dolomite mine in Val di Non, deep in the Dolomites of northern Italy. The mine is owned by Tassullo, a century-old company that extracts dolomite for use in construction materials. Around 100 metres below ground, in a vast network of stable, dry rock chambers, the site has long been used to store apples, cheese, and wine, thanks to its naturally cool and constant temperature of around 12°C.

Trentino DataMine is led by the University of Trento through a public-private partnership involving several Italian firms, including Dedagroup, GPI, Covi Costruzioni, and ISA. Together they have formed a limited company to design, build, and operate the facility. The €50.2 million project is partly financed by Italy’s National Recovery and Resilience Plan (PNRR), which channels EU Next Generation funds to sustainable and innovative developments. Around €18.4 million of the funding comes from public sources, with the remainder provided by private IT and construction companies.

Intacture – 5 Megawatts

The new facility, called Intacture, will provide around 5 megawatts of computing capacity. However, its focus is not just storage or cloud hosting, but also advanced computing for research, artificial intelligence (AI), cybersecurity, and healthcare data. The University of Trento describes the project as a “strategic centre for innovation, sustainability and advanced technology” designed to support high-performance computing, edge computing, and quantum cryptography research.

Why Build a Data Centre Underground?

The decision to locate a data centre inside a mine has both technical and environmental logic behind it, i.e., cooling, energy use, security, and land availability are all central to the reasoning.

Traditional data centres consume large amounts of electricity running cooling systems to prevent servers from overheating. In Trentino, the natural rock temperature, steady at about 12°C, provides passive cooling without the need for large chillers or water-based cooling towers. That dramatically cuts electricity consumption and eliminates the water use associated with many conventional data centres. Dedagroup’s Chief Technology Officer, Roberto Loro, said the site offered “a combination of physical security with low environmental and energy impact.”
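The scale of those cooling savings can be sketched with a back-of-the-envelope power usage effectiveness (PUE) calculation. The 5 megawatt figure comes from this article; the PUE values below are illustrative assumptions, not published numbers for Intacture:

```python
# Rough, hypothetical estimate of the energy saved by passive rock cooling.
# The 5 MW IT load is from this article; the PUE figures are assumptions
# chosen for illustration, not measured values for this facility.
IT_LOAD_MW = 5.0
HOURS_PER_YEAR = 8760

pue_conventional = 1.5   # assumed chiller-cooled facility
pue_passive = 1.15       # assumed facility relying on passive rock cooling

def annual_mwh(pue):
    """Total facility energy over a year = IT load x PUE x hours."""
    return IT_LOAD_MW * pue * HOURS_PER_YEAR

saving = annual_mwh(pue_conventional) - annual_mwh(pue_passive)
print(f"Estimated annual saving: ~{saving:,.0f} MWh")
```

Under these assumed figures the passive design would save on the order of fifteen thousand megawatt-hours a year, which gives a sense of why cooling is central to the project's sustainability case.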

Security is another key driver. For example, the mine’s dolomite rock is naturally dry and geologically stable, offering protection from earthquakes and floods. Also, being encased in solid rock shields the site from electromagnetic interference and physical threats such as explosions or extreme weather. Giuliano Claudio Peritore, President of the Association of Italian Internet Providers, has described the project as “absolutely fascinating”, noting that “we think of a mine as being a humid place, therefore not suited to a data centre. Instead, in Trentino we have something special because the dolomite rock is absolutely dry, in a stable mountain.”

The underground setting also saves land. For example, rather than paving over new industrial plots or farmland, the data centre reuses existing voids created by mining operations. In doing so, it preserves surface landscapes while putting unused underground volumes to productive use, which is a clear advantage in regions where land use and visual impact are increasingly sensitive issues.

How the Mine Is Being Transformed

It should be noted that the mine is actually still active, and the excavation work for the data centre was carefully integrated into the ongoing extraction of dolomite. Around 63,000 tonnes of rock (about the volume of 20 Olympic swimming pools) were removed to create the chambers for the facility. The extracted dolomite is being reused by Tassullo to manufacture eco-friendly building materials, creating a circular loop between extraction, construction, and digital infrastructure.

80% Underground

Roughly 80 per cent of the data centre is underground, with the rest of the space used for offices, reception, and security areas near the surface. Around 60 workers have already installed 50 kilometres of fibre and electrical cabling, together with 3 kilometres of ventilation ducts and several power generators. The design relies on natural cooling from the rock, with mechanical ventilation only needed for air circulation.

Coexistence of Industries

What makes the site unique is the coexistence of digital and agricultural industries within the same underground system. For example, the mine has long stored local apples, wines, and Trentingrana cheese. The servers’ waste heat can now be channelled to warm other sections of the mine, while nearby storage operations requiring refrigeration can benefit from the cooling infrastructure. Dedagroup’s Loro highlighted the potential for collaboration, saying: “Those who need heat can use the heat we produce.” It creates a self-balancing ecosystem in which energy flows are shared between food logistics and digital computing.

Sustainability and Regional Strategy

The Trentino DataMine could be said to embody several sustainability principles, i.e., reducing energy and water use, recycling material outputs, and avoiding new land consumption. It also fits into the wider strategy of transforming the region into a digital innovation hub under the EU’s green and digital transition agenda.

By locating advanced computing capacity in northern Italy, the project also supports Europe’s ambition for greater digital sovereignty. Sensitive data in fields such as healthcare, AI, and finance can be processed locally, under European data governance frameworks, instead of being sent to large foreign-owned cloud providers. Italy’s Minister for Enterprises, Adolfo Urso, called the mine “a new hub for public-private collaboration, research and regional development”, adding that it shows how unused underground spaces can drive both innovation and sustainability.

Economically, the project is expected to pay for itself over 15 years and generate skilled employment across data management, engineering, and scientific research. The operators also see it as a blueprint for other European regions where disused or stable underground sites, such as salt mines or tunnels, could be converted into low-impact data infrastructure.

Potential Challenges and Practical Considerations

Despite the clear environmental advantages, underground data centres bring a unique set of challenges. For example, connectivity must be maintained through kilometres of tunnel, and redundancy has to be built in to guarantee service uptime. Engineers have installed multiple fibre paths to ensure data continuity, but protecting those cables from vibration and mining equipment remains an ongoing task.

Also, maintenance logistics are more complex than in standard above-ground facilities. Technicians must move equipment through controlled tunnels, and ventilation must ensure safe air quality at all times. Emergency procedures, power backups, and fire safety systems must be adapted for enclosed spaces.

There are also environmental balance issues. While heat recovery is a promising concept, the actual usability of server heat depends on matching the right temperatures to neighbouring processes. Any imbalance could lead to excess heat that still needs to be vented, which would limit the energy savings. Regulators will also monitor that operations within the mine, especially the storage of food products, are not affected by the digital facility’s heat or air circulation.

Tailored Rules Needed?

For policymakers, Trentino DataMine raises new regulatory questions. Data centres built inside industrial extraction sites may need tailored rules covering safety, environmental protection, and labour standards. Italy’s authorities have already classified the facility as a “green” project under the PNRR, but its mixed use means future projects of this type will need careful legal frameworks.

Stakeholders

For the Trentino region, the DataMine offers a new model of economic diversification. For example, it links high-tech sectors with traditional industries like agriculture and construction, keeping the value of public investment within the local economy. The University of Trento sees the facility as a nucleus for research in AI, edge computing, and cybersecurity, potentially attracting both private and public partners from across Europe.

For the data-centre industry, it offers a live test of how underground environments can cut cooling energy use and improve physical resilience. With European data-centre electricity demand expected to rise by 28 per cent by 2030 according to the European Commission, efficiency measures like this are becoming increasingly important.

For local industries, proximity to computing power could bring new advantages. Agricultural firms that already store produce in the mine could benefit from AI-driven monitoring or predictive logistics systems hosted just a few metres away. For construction firms, the circular reuse of dolomite reinforces Trentino’s positioning as a region of sustainable materials innovation.

Other Unusual Data-Centre Locations Around the World

The Trentino site is the first in Europe built inside a working mine, but it joins a small group of projects exploring alternative environments for digital infrastructure. Others include, for example:

– In Norway, the Lefdal Mine Datacenter occupies a former mineral mine on the country’s west coast. It uses hydropower and draws cold water from a nearby fjord for cooling, achieving extremely low energy consumption. The operators claim near-zero freshwater use and a minimal environmental footprint.

– Microsoft has tested underwater data centres in its Project Natick experiment off the coast of Scotland’s Orkney Islands. The company submerged 864 servers in a sealed pressure vessel on the seabed and found that failure rates were one-eighth those of comparable land-based systems, largely due to the stable, cold environment.

– Other developers are exploring floating or underwater data pods in coastal cities, though regulatory and maintenance challenges remain significant. In the United States, proposals to deploy subsea AI processing capsules in San Francisco Bay have drawn mixed reactions over environmental and safety concerns.

Across these experiments, the goal is broadly the same, i.e., to find a workable way to reduce the environmental footprint of data processing, improve efficiency, and integrate computing into existing or underused spaces. The Trentino DataMine, therefore, adds a new European example to that list, turning an active dolomite mine into a shared underground ecosystem where technology, agriculture, and sustainability coexist.

What Does This Mean For Your Organisation?

What emerges from Trentino is not just an unusual engineering choice but a possible template for how digital capacity could be added in places that do not want more noise, heat, land pressure, or surface build-out. The operators are arguing that a mine with a constant 12°C climate can offer something that a standard warehouse on an industrial estate cannot, i.e., passive cooling, protection from physical and electromagnetic threats, and almost no demand for new above-ground land. In sustainability terms that’s important because data processing is on track to become one of Europe’s most resource-intensive activities, particularly with the growing computational load of AI and high-performance analytics.

At the same time, this model is clearly not plug-and-play. Keeping a live data centre running inside an active mine brings engineering risks that conventional builds do not face, and regulators will have to decide how to classify mixed-use underground sites that are at once storage depots for food, sources of construction materials, and high-security computing hubs. The fact that Trentino DataMine is being backed through national recovery funds and positioned as “green” is significant, but it also raises expectations. If this is going to be treated as a blueprint then it will have to prove that waste heat recovery, energy reuse, and non-destructive land use work in practice and not just on paper.

For UK businesses the story is relevant on several levels. For example, energy cost and regulatory scrutiny around data use are rising in the UK, while AI workloads and data retention obligations keep expanding. British organisations that depend on data-heavy services, including finance, healthcare, manufacturing and logistics, are already looking for hosting models that are both affordable and politically acceptable. A site like Trentino shows one possible direction for future colocation and high-performance compute: hardened, local, energy-efficient, physically sovereign, and directly tied into regional industries rather than sitting in anonymous hyperscale campuses. That matters for any UK company that is under pressure to evidence sustainability credentials to clients, boards, and regulators while still processing large volumes of data. It also matters for UK local authorities and regional development bodies, which face the same tension Trentino is trying to resolve, i.e., how to attract digital infrastructure and skilled digital jobs without giving up agricultural land, upsetting communities, or straining local water and power networks.

For national and regional governments across Europe the project draws a clean line between digital sovereignty and physical geography. Instead of assuming that high-performance computing must live in vast surface facilities owned by global cloud providers, Trentino suggests that local partnerships between universities, utilities, industrial operators and municipalities can create high-spec capacity underground, in territory that is already zoned for extraction or storage. That in turn keeps data, talent and long-term investment inside the region. It is also politically useful. A data centre marketed as low-impact and circular is an easier sell to voters than another high-consumption facility drawing megawatts from the grid and dumping hot water into rivers.

However, the final question is how far this idea can actually travel. For example, Norway’s fjord-cooled mine, Microsoft’s sealed seabed capsules and the San Romedio dolomite galleries are all attempts to reframe what a data centre physically is. Each approach chooses an environment where cooling and physical resilience are essentially provided by nature. If those models scale, then the debate around data centres in Europe may start moving away from “where can we find more land and power” and towards “which underused environments can safely host secure compute with the lowest ongoing footprint.” The real test for Trentino DataMine now is whether it stays a one-off regional showcase, or whether it becomes evidence that digital infrastructure, food logistics, materials production and climate responsibility can operate in the same physical space without compromising one another.

Video Update : Make ‘Podcasts’ Directly In OneDrive

Incredibly, you can now make AI-generated podcasts (one or two ‘presenters’) directly within your OneDrive account, so if your preferred learning style (or that of your audience) is auditory, this is a powerful way to present information … effortlessly!

[Note – To watch this video without glitches/interruptions, it may be best to download it first]

Tech Tip – Boost Productivity with Outlook’s Email Highlighting Features

When you highlight text in an email in Outlook, a Mini Toolbar appears above the highlighted text, and right-clicking reveals additional options that let you leverage powerful features to streamline your workflow, including:

– Explain in more detail: Get more information on the highlighted text using Copilot, Outlook’s AI-powered assistant.
– Add Note: Save the highlighted text as a note for future reference.
– Reply: Respond to the email with the highlighted text included in the reply.
– Create Task: Convert the highlighted text into a task or to-do item.

Key Benefits:

– Quick Research: Get instant insights and information on specific topics using Copilot.
– Quick Note-Taking: Save important information without leaving your inbox.
– Efficient Responding: Respond to emails with relevant context already included.
– Actionable Tasks: Turn emails into actionable tasks to stay on top of your work.

Use These Features to:

– Reduce email clutter
– Increase productivity
– Stay organised
– Make informed decisions with AI-powered insights

By leveraging these features, you’ll be able to work more efficiently and manage your emails in Outlook more effectively. Give it a try!

AI-Generated Code Blamed for 1-in-5 Breaches

A new report has revealed that AI-written code is already responsible for a significant share of security incidents, with one in five organisations suffering a major breach linked directly to code produced by generative AI tools.

Vulnerabilities Found in AI Code

The finding comes from cybersecurity company Aikido Security’s State of AI in Security & Development 2026 report, which features the results of a wide-ranging survey of 450 developers, AppSec engineers and CISOs across Europe and the US.

According to the study, nearly a quarter of all production code (24 per cent) is now written by AI tools, rising to 29 per cent in the US and 21 per cent in Europe. However, it seems that adoption has come at a cost. For example, the report shows that almost seven in ten respondents said they had found vulnerabilities introduced by AI-generated code, while one in five reported serious incidents that caused material business impact. As Aikido’s researchers put it, “AI-generated code is already causing real-world damage.”
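To make the risk concrete, the snippet below shows one of the classic flaw patterns often cited in studies of AI-generated code: SQL assembled by string interpolation, which allows injection. This example is illustrative only and is not drawn from Aikido’s report; it simply contrasts the insecure pattern with the parameterised alternative that a careful review would insist on.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: attacker-controlled input is spliced straight
    # into the SQL text, so a crafted value can rewrite the query.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver treats the value as data, not SQL,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"                       # classic injection payload
print(len(find_user_unsafe(conn, payload)))    # 2 – every row leaks
print(len(find_user_safe(conn, payload)))      # 0 – injection defeated
```

Both functions look equally “correct” to a quick glance, which is exactly why this class of bug slips through when generated code is merged without a security review.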

Worse In The US

According to the report, the US appears to be hit hardest. For example, 43 per cent of US organisations reported serious incidents linked to AI-generated code, compared with just 20 per cent in Europe. The report attributes the gap to stronger regulatory oversight and stricter testing practices in Europe, where companies tend to catch problems earlier. European respondents recorded more “near misses”, indicating that vulnerabilities were identified before they could cause harm.

AI Changing The Development Landscape

AI coding assistants such as GitHub Copilot, ChatGPT and other generative tools are now integral to the software pipeline, promising faster output and fewer repetitive tasks, but they also introduce a new layer of risk.

Aikido’s data highlights that productivity gains can be offset by increased complexity and slower remediation. For example, teams now spend an average of 6.1 hours per week triaging alerts from security tools, with most of that time wasted on false positives. In larger environments, the triage burden grows to nearly eight hours a week where teams rely on multiple tools.

Leads To Dangerous Shortcuts

It seems that this problem can lead to dangerous shortcuts. For example, two-thirds of respondents admitted bypassing or delaying security checks due to a kind of alert fatigue. Developers under pressure to deliver have started to “push through” security warnings, creating a cycle where quick fixes outweigh caution.

Natalia Konstantinova, Global Architecture Lead in AI at BP, highlights the issue, saying: “AI-generated code shouldn’t be fully trusted, since it can cause serious damage. This is a reminder to carefully double-check its outputs.”

Accountability Is Becoming A Flashpoint

It seems that as AI-generated code makes its way into production, one of the biggest challenges is determining who is responsible when things go wrong.

Aikido’s survey shows a clear divide. For example, 53 per cent of respondents said security teams would be blamed if AI code caused a breach, 45 per cent blamed the developer who wrote the code, and 42 per cent blamed whoever merged it into production. The result, according to UK insurance and pensions company Rothesay’s CISO Andy Boura, is “a lack of clarity among respondents over where accountability should sit for good risk management.”

In fact, half of developers said they expect to shoulder the blame personally if AI-generated code they produced led to an incident, suggesting a growing culture of uncertainty and mistrust between teams.

The blurred lines are also fuelling tension between developers and security leaders. Many security professionals worry that AI-assisted development is moving too fast for proper oversight, while developers argue that outdated review processes are slowing down innovation.

“Tool Sprawl” Is Making Things Worse

Perhaps surprisingly, Aikido’s research found that organisations with more security tools were actually experiencing more security incidents. For example, companies using six to nine different tools reported incidents 90 per cent of the time, compared with 64 per cent for those using just one or two.

It seems this “tool sprawl” is also linked to slower fixes. Teams with multiple vendor tools took almost eight days on average to remediate a critical vulnerability, compared with just over three days in smaller, more consolidated setups.

The problem, according to Aikido, is not the tools themselves but the overhead they create, i.e., duplicate alerts, inconsistent data and fractured workflows that slow response times.

Walid Mahmoud, DevSecOps Lead at the UK Cabinet Office, notes about this issue: “Giving developers the right security tool that works with existing tools and workflows allows teams to implement security best practices and improve their posture.”

Teams using integrated, all-in-one platforms built for both developers and security professionals were twice as likely to report zero incidents compared with those using tools aimed at one group only.

Regional Differences In Oversight

The study draws a clear contrast between European and American approaches. For example, European teams tend to rely more on human oversight, manual reviews and compliance-based testing frameworks, while US teams are quicker to automate processes and deploy AI-generated code at scale.

Aikido’s figures show that 58 per cent of US teams track AI-generated code line by line, compared with just 35 per cent in Europe. That difference, coupled with the higher level of automation in US pipelines, may explain why more AI-related vulnerabilities are being detected (and exploited) there.

As Aikido puts it, “Europe prevents, the US reacts.” The slower, more regulated approach across Europe appears to be reducing the number of major breaches, even if it creates extra workload for developers.

Independent Findings Support The Trend

It should be noted here that the security concerns raised by Aikido are actually consistent with other recent studies. For example, Veracode’s 2025 GenAI Code Security Report found that 45 per cent of AI-generated code samples failed basic security tests. Java was the worst affected, with a 72 per cent failure rate, followed by C# (45 per cent), JavaScript (43 per cent) and Python (38 per cent).

The Veracode team concluded that while AI tools can generate functional code quickly, they often fail to account for secure design or contextual logic. Their analysis showed little improvement in security quality between model generations, even as syntax accuracy improved.

Policy researchers are also warning of deeper structural issues. For example, the Center for Security and Emerging Technology (CSET) at Georgetown University has outlined three categories of risk from AI-generated code, i.e., insecure outputs, vulnerabilities in the AI models themselves, and wider supply chain exposure.

Also, research from OX Security has pointed to what it calls the “army of juniors” effect, which is where AI tools can produce vast amounts of syntactically correct code, but often lack the architectural understanding of experienced developers, multiplying low-level errors at scale.

Industry Perspectives On A Path Forward

Despite these warnings, it seems that optimism remains widespread. For example, 96 per cent of Aikido’s respondents believe AI will eventually be able to produce secure, reliable code, with nearly half expecting that within three to five years.

However, only one in five think AI will achieve that without human oversight. The consensus is that people will remain essential to guide secure design, architecture and business logic.

AI Can Check AI

There also appears to be growing belief that AI should be used to check AI. For example, nine out of ten organisations expect AI-driven penetration testing to become mainstream within around five and a half years, using autonomous “agentic” systems to identify vulnerabilities faster than human testers could.

“The 79 per cent are the smart ones,” said Lisa Ventura, founder of the UK’s AI and Cyber Security Association, referring to the large majority who expect human oversight to remain essential. “AI isn’t about replacing human judgment, it’s about amplifying it.”

This sentiment echoes a wider industry move towards what security leaders call “augmented development”, i.e., human-centred workflows supported by automation, not replaced by it.

Why This Matters

For UK organisations, the implications are immediate. For example, the report shows that AI-generated code is not a future risk but a current operational issue already affecting production environments.

As Kevin Curran, Professor of Cybersecurity at Ulster University, says: “This demonstrates the slim thread which at times holds systems together, and highlights the need to properly allocate resources to cybersecurity.”

Aikido’s findings also underline the importance of developer education and clear accountability. Matias Madou, CTO at Secure Code Warrior, wrote that “in the AI era, security starts with developers. They are the first line of defence for the code they write, and for the AI that writes alongside them.”

For businesses already navigating compliance regimes such as the UK NCSC’s Cyber Essentials or ISO 27001, this means treating AI-generated code as a separate risk class requiring its own testing and review procedures.

Criticisms And Challenges

While Aikido’s report is one of the most comprehensive of its kind, it is not without its critics. For example, some security analysts argue that “one in five breaches” may overstate the influence of AI-generated code because correlation does not prove causation. Many breaches involve complex attack chains where AI code may only play a small role.

Others have questioned the representativeness of the sample. For example, the survey focused primarily on organisations already experimenting with AI in production, which may naturally skew toward higher exposure. Small or less digitally mature companies, where AI coding tools are still limited to pilot use, may experience fewer issues.

There are also some methodological challenges. For example, measuring what qualifies as “AI-generated” can be difficult, particularly when developers use AI assistants to autocomplete small code segments rather than entire functions. Attribution of vulnerabilities can therefore be subjective.

That said, even many of the sceptics agree that the report captures a growing and genuine concern. Independent findings from Veracode, OX Security and CSET all point in the same direction, i.e., that AI-generated code introduces new risks that traditional security pipelines were never designed to manage.

The challenge for developers and CISOs alike is, therefore, to close that gap before AI coding becomes the default, not the exception. As the technology matures, the balance between innovation speed and security assurance will define how safely businesses can harness AI’s potential without repeating the mistakes of early adoption.

What Does This Mean For Your Business?

The findings appear to point to an industry racing ahead faster than its safety systems can adapt. AI coding tools have clearly shifted from experimental to mainstream, yet governance and testing practices are still catching up. The evidence suggests that while automation can improve productivity, it cannot yet replicate the depth of human reasoning needed to identify design-level flaws or assess real-world attack paths. That gap between capability and control is where today’s vulnerabilities are being born.

For UK businesses, this raises practical questions about oversight and responsibility. Many already face pressure to adopt AI for competitive reasons, yet the report shows that without strong testing regimes and clear accountability, the risks can outweigh the benefits. In particular, financial services, healthcare and public sector organisations, which handle sensitive data and operate under strict compliance frameworks, will need to ensure that AI-generated code goes through the same, if not stricter, scrutiny as any other form of software.

Developers, too, are being asked to operate within new boundaries. The growing reliance on generative tools means the traditional model of code review and approval is no longer sufficient. UK companies may now need to invest in dedicated AI audit trails, tighter version tracking and security validation that can distinguish between human and machine-written code. The evidence from Aikido’s report also suggests that integrated platforms, where developer and security functions work together, can yield better results than fragmented tool stacks, making collaboration a critical priority.
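One lightweight way to build the kind of AI audit trail described above is to have developers declare AI assistance in commit metadata, so that those commits can be routed to stricter review automatically. The sketch below assumes a hypothetical “AI-Assisted” commit-message trailer; this is an illustrative convention of our own, not an established standard.

```python
import re

# Matches a trailer line such as "AI-Assisted: yes" anywhere in a
# commit message (case-insensitive, one trailer per line).
AI_TRAILER = re.compile(
    r"^AI-Assisted:\s*(yes|true)\s*$",
    re.IGNORECASE | re.MULTILINE,
)

def needs_extra_review(commit_message: str) -> bool:
    """Return True when a commit declares that AI tools helped write it."""
    return bool(AI_TRAILER.search(commit_message))

# A CI gate could call this on each commit in a pull request and
# require an additional security sign-off when it returns True.
msg = "Add retry logic to billing client\n\nAI-Assisted: yes\n"
print(needs_extra_review(msg))                    # True
print(needs_extra_review("Fix typo in README"))   # False
```

The approach is only as good as developers’ honesty in tagging commits, but it gives security teams a cheap, queryable record of where machine-written code entered the codebase.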

For other stakeholders, including regulators and insurers, the implications are equally clear. For example, regulators will need to consider whether existing standards, such as Cyber Essentials, adequately address AI-generated components. Insurers may need to begin to factor the presence of AI-written code into risk assessments and premiums, especially if breach attribution becomes more traceable.

There is also a wider social and ethical dimension to consider here. For example, if AI-generated code becomes a leading cause of breaches, the question of accountability will soon reach the boardroom and, potentially, the courts. The current ambiguity over who is at fault, i.e., the developer, the CISO or the AI vendor, will not remain sustainable for long. Policymakers may be forced to define clearer lines of liability, particularly where generative AI is being deployed at scale in safety-critical systems.

The overall picture that emerges here is not one of panic but of adjustment. The technology is here to stay, and most industry leaders still believe it will eventually write secure, reliable code. The challenge lies in getting from here to there without compromising trust or resilience in the process. For now, it seems the safest path forward is not to reject AI in development, but to treat it with the same caution as any powerful, untested colleague: valuable, but never unsupervised.

Each week we bring you the latest tech news and tips that may relate to your business, rewritten in a ‘techy-free’ style.

Archives