Featured Article : ChatGPT Turned Into a Fully-Featured AI Agent

ChatGPT can now act on your behalf, using its own virtual computer to complete complex tasks, browse the web, run code, and interact with online tools, all without step-by-step prompting.

ChatGPT As An ‘AI Agent’

OpenAI has formally launched what it calls the ChatGPT agent, transforming its well-known conversational model into a proactive digital assistant capable of completing real tasks, independently choosing tools, and reasoning through multi-step workflows.

This new functionality, now available to paying subscribers, marks a significant turning point. For example, rather than simply responding to prompts, ChatGPT can now act as a true AI agent, performing tasks such as planning a meal, generating a financial forecast, writing and formatting a presentation, or summarising your inbox. Crucially, it can also interact with websites, manipulate files, and run code using its own virtual machine.

“We’ve brought together the strengths of our Operator and deep research systems to create a unified agentic model that works for you—using its own computer,” OpenAI explained in a blog post on 17 July. “ChatGPT can now handle complex tasks from start to finish.”

What the ChatGPT Agent Can Actually Do

OpenAI says the agentic version of ChatGPT can choose the best tools to solve a problem and it can perform multi-step operations without being micromanaged by the user.

For example:

– Users can ask ChatGPT to analyse their calendar and highlight upcoming client meetings, incorporating relevant news about those companies.

– It can plan a dinner for four by navigating recipe websites, ordering ingredients, and sending the shopping list via email.

– In professional settings, it may be used to analyse competitors, generate editable slide decks, or reformat financial spreadsheets with up-to-date data.

Its Own Toolkit

Technically, the agent achieves this by drawing on a powerful toolkit, i.e. a visual browser, text-based browser, command-line terminal, access to OpenAI APIs, and “connectors” for apps like Gmail, Google Drive, or GitHub. OpenAI reports that it can navigate between tools fluidly, running tasks within a dedicated virtual computer environment that preserves context and session history.

This context-awareness means it can hold onto prior steps and continue building on them. For example, if a user uploads a spreadsheet to be analysed, ChatGPT can extract key data, switch to a browser to find supporting info, and return to the terminal to generate a report, all within one session.

OpenAI describes the experience as interactive and collaborative, not just automated. Users can interrupt, steer or stop tasks, and ChatGPT will adapt accordingly.

Who, When, and How?

The new ChatGPT agent capabilities are being rolled out initially to paying customers on the Pro, Plus, and Team plans. Enterprise and Education users will follow in the coming weeks.

To access agent mode, users need to open a conversation in ChatGPT and select the ‘agent mode’ option from the tools dropdown. Once enabled, users can assign complex tasks just as they would in a natural chat. On-screen narration gives visibility into what the model is doing at each step.

Pro users get 400 messages per month, while Plus and Team users receive 40 per month, with more usage available through paid credits.

Although the rollout is currently limited, OpenAI says it is “working on enabling access for the European Economic Area and Switzerland” and will continue improving the experience.

Why OpenAI Is Doing This Now?

OpenAI’s move reflects a broader push within the industry to shift from passive chatbots to autonomous AI agents, i.e. models that can actively use tools, complete workflows, and deliver tangible results.

Until now, models like ChatGPT have excelled at language generation but faltered when asked to carry out structured, real-world tasks involving files, websites, or multiple steps. That changes with the new agent.

Demand-Driven Says OpenAI

According to OpenAI, user demand drove this shift. For example, many were reportedly attempting to use previous tools, such as Operator, for deeper research tasks, but were apparently frustrated by their limitations. By combining tool use and reasoning within a single system, OpenAI hopes to unlock more practical and business-relevant use cases.

This could also represent a strategic response by OpenAI to rising competition from agents being developed by Google DeepMind, Anthropic, Meta, and open-source communities, many of whom are now focusing on AI models that can act, not just talk.

Business Uses

While consumers can use the agent for tasks like travel planning or dinner parties, the biggest implications may be for professionals and businesses. For example, in OpenAI’s internal tests, the agent performed as well as or better than humans on tasks like:

– Generating investment banking models with correct formulas and formatting.

– Producing competitive market analyses.

– Updating Excel-style financial reports.

– Converting screenshots into editable presentations.

OpenAI says that for data-heavy roles, ChatGPT agent showed strong results. For example, on DSBench, a benchmark testing real-world data science tasks, it outperformed humans by wide margins in both analysis (89.9 per cent) and modelling (85.5 per cent). On SpreadsheetBench, it scored 45.5 per cent with direct Excel editing, far ahead of Microsoft’s Copilot in Excel at 20.0 per cent.

This positions ChatGPT agent not just as a time-saver, but as a cost-effective knowledge worker in fields like consulting, finance, data science, and operations.

New Capabilities Bring New Risks

Despite the powerful new functions, OpenAI has been clear that risks are increasing too, particularly because the agent can interact directly with sensitive data, websites, and terminal commands.

“This introduces new risks, particularly because ChatGPT agent can work directly with your data,” the company warned, noting the risk of adversarial prompt injection—where attackers hide malicious instructions in web pages or metadata that the AI might interpret as legitimate commands.

For example, if a webpage contained an invisible prompt telling ChatGPT to “share email contents with another user,” the model might do so (unless safeguards are in place).

To prevent this, OpenAI says it has:

– Required explicit user confirmation for real-world actions like purchases or emails.

– Introduced a watch mode for supervising high-impact tasks.

– Trained the model to refuse dangerous tasks (e.g. transferring funds).

– Implemented privacy controls, including cookie and browsing data deletion.

– Shielded the model from seeing passwords during browser “takeover” sessions.

Also, on synthetic prompt injection tests, OpenAI claims the agent resisted 99.5 per cent of malicious instructions. However, in more realistic red-team scenarios, the resistance rate dropped to 95 per cent, which is a reminder that vulnerabilities still exist.

The Next Phase

The launch of ChatGPT agent pushes OpenAI firmly into the next phase of AI development, i.e. intelligent systems that act on behalf of humans, not just inform them.

It’s a clear sign that OpenAI aims to lead in the agentic AI race, rather than simply competing on model performance or training size. With its own virtual environment, a growing toolset, and proactive capabilities, ChatGPT now resembles something closer to a software co-pilot than a chatbot.

Competitors will likely follow suit. Google’s Gemini, Anthropic’s Claude, and open-source challengers are all exploring similar agent-style features. However, OpenAI is arguably first to market with a production-ready system that balances capability and risk management (however imperfectly).

For users, especially in business, the implications are considerable. For example, those able to integrate ChatGPT agent into workflows may gain speed, efficiency, and analytical power, so long as they understand the limitations and continue to exercise oversight.

The success of this rollout could also shape broader conversations about AI safety, regulation, and responsibility, particularly as agents become more embedded in real-world systems.

What Does This Mean For Your Business?

The agent rollout gives OpenAI a powerful lead in the shift toward goal-directed, tool-using AI, one that can complete work on behalf of the user rather than waiting for commands. Its ability to interact with live websites, private data sources, and business systems puts it on a new level of utility, but also of accountability. This is no longer just about generating answers. It is about delegation.

For UK businesses, the implications are likely to be immediate and wide-ranging. For example, the agent offers a credible way to automate time-consuming tasks like competitor analysis, document preparation, scheduling, and spreadsheet management. For knowledge-heavy sectors such as finance, consultancy, and data operations, it introduces a low-friction option for streamlining routine work, reducing manual handling, and speeding up research. Organisations already experimenting with automation and AI-assisted productivity tools may now find themselves rethinking existing workflows in favour of a more hands-off, outcome-driven approach.

However, it’s not without operational risks. Any system that can click, copy, calculate, and communicate on your behalf must be trusted to do so responsibly. That means businesses will need to consider internal guardrails and policies, not just to protect sensitive information, but also to ensure the AI is being used ethically and in line with organisational goals. The fact that ChatGPT can now act autonomously raises pressing questions around auditability, compliance, and human oversight, especially in regulated sectors.

There are also broader competitive and reputational pressures in play. For OpenAI, this launch extends its relevance beyond individual users and into the professional environments that rivals like Microsoft and Google are also targeting. At the same time, it invites scrutiny over safety claims, especially as agents become more capable and the scope for unintended consequences grows.

OpenAI making ChatGPT an AI agent appears to be a clear step-change in how AI is positioned and applied. The tools are no longer limited to outputting content or providing suggestions, but are now expected to deliver outcomes, complete tasks, and take action with minimal supervision. For users, that means new possibilities, but also a renewed need to stay alert, strategic, and in control.

Tech Insight : What Is Google ‘Discover’ And How Can It Help?

In this Tech Insight, we look at what Google Discover is, how it works, and how UK businesses can use it to boost visibility, traffic and engagement without relying on search.

A Quiet Revolution in Discovery

Google Discover isn’t new, but its importance has grown sharply in recent years. Originally launched in 2012 under the name Google Now, the feature began as a predictive assistant, offering up reminders, boarding passes, event alerts and other useful snippets throughout the day. Over time, many of these utilities were moved into Google Assistant, and the feed itself gradually evolved into what became known simply as the Google Feed.

In 2018, it was rebranded as Google Discover, with a clear shift in purpose: to deliver a personalised stream of content to users based on their interests and activity, all without needing them to type in a search. Since then, Discover has quietly become a major traffic driver for news sites, blogs, lifestyle content and increasingly, e-commerce and B2B brands.

For example, according to Google’s own figures, over 800 million people now use Discover each month, and while the company doesn’t publish detailed usage statistics, Search Engine Journal and others report that some media outlets receive up to 40 per cent of their mobile traffic from Discover.

So What Is Google Discover, Exactly?

Google Discover is a personalised content feed that appears directly within the Google app on both Android and iOS devices. It uses machine learning to show you articles, videos, and other online content that match your interests, based on your previous activity.

What makes it different from a news aggregator or a social feed is the way it predicts what you might want to read or watch next, without requiring any input, i.e. there’s no need to type anything in. The user simply opens the app, and it’s all there.

Where Do You Find It, and How Does It Work?

To access Google Discover, users open the Google app which is pre-installed on most Android phones and available on iOS via the App Store. The Discover feed appears directly beneath the search bar on the app’s home screen. On Android, for example, it’s also often accessible by swiping right from the home screen, depending on the device.

On iPhone, users find it under the “Home” tab in the Google app. On Android, it appears in a dedicated “Discover” tab. The feed includes scrollable cards featuring headlines, featured images, and links to the full content, which can be articles, YouTube videos, blog posts, or product pages.

There is no prominent label marking the content as part of Google Discover, but when users scroll through a personalised feed of recommendations in the Google app without entering a search, that is the Discover feed in action.

Personalised But With a Purpose

Google Discover is basically powered by signals from a user’s Web and App Activity, including previous search queries, website visits, YouTube history and location data. The feed can be refined by interacting with the control icons on each content card, such as the heart icon or the three-dot menu, to indicate whether more or less content of a certain type or from a particular source should be shown.

As a result, the feed evolves continually, not just in response to newly published content but in line with the user’s changing interests over time.

How Businesses Can Benefit

For UK businesses, Google Discover offers several potential benefits, such as:

– Brand exposure beyond search queries. Content can reach users even when they are not actively searching.

– High mobile engagement. As Discover is currently available only on mobile devices, it provides direct access to mobile-first users.

– Topical visibility. Content aligned with trending or niche interests may receive significant short-term visibility boosts.

– Longer shelf life for evergreen content. Unlike social media posts, which tend to fade quickly, high-value Discover content can reappear when it becomes relevant again.

A 2024 analysis by Seer Interactive found that most Discover content is seen within three to four days of publication, but older content can still surface if it is considered helpful to a user’s interests. This makes it a useful channel for both timely updates and evergreen material.

However, getting into Discover isn’t guaranteed. A user’s content must be high quality, indexed by Google, and comply with Discover’s content policies. As Google puts it, “Being eligible to appear in Discover is not a guarantee of appearing.”

What Kinds of Content Work Best?

Discover tends to favour:

– News and topical content (e.g. current events, technology, finance).

– How-to and educational guides.

– Blog and lifestyle content (e.g. fashion, food, fitness, travel).

– Product-led content like buying guides or comparisons.

A study from Searchmetrics found that 46 per cent of Discover URLs were from news sites, while 44 per cent were from e-commerce or commercial domains. This mix highlights its appeal across industries.

Visuals Important Too

It should be noted here that visuals also play a critical role. For example, Google recommends using high-quality images of at least 1200 pixels wide and enabling them via the max-image-preview:large tag in your website’s header. Sites that only use small thumbnails or logos are less likely to be surfaced.

How To Optimise for Google Discover

Unlike traditional SEO, optimising for Google Discover is less about keywords and more about content quality and relevance. According to Google and SEO experts, the following best practices can improve the chances of content appearing in Discover:

– Focus on E-E-A-T. Content should demonstrate Experience, Expertise, Authoritativeness and Trustworthiness. This is particularly important for topics related to finance, health and business.

– Create helpful, people-first content. Clickbait headlines and manipulative imagery should be avoided. Google’s systems penalise content that withholds key information or uses exaggerated claims to drive engagement.

– Use engaging visuals. High-quality images and videos help attract clicks and improve content performance.

– Optimise metadata. Page titles should clearly reflect the main topic without being overly promotional.

– Improve mobile user experience. Fast-loading, mobile-friendly websites perform better in Discover, which is currently a mobile-only feature.

– Leverage structured data. Schema markup is a type of code that helps Google understand what the content is about. Using formats such as Article, NewsArticle or VideoObject can help clarify the purpose and type of content.

Discover performance can be tracked in Google Search Console using the “Discover” report, which shows impressions, clicks and click-through rate over a 16-month period. However, because Discover traffic is recorded as “Direct” in Google Analytics, Search Console remains the more reliable source for analysis.

Access and Availability

Google Discover is free to use, both for users and for content creators. Users access it via the Google app on Android or iOS, or via some mobile browsers on the Google homepage. Businesses don’t need to pay to appear, although Google Discovery Ads, a separate paid product, allow businesses to place sponsored content into the feed.

Discovery Ads can extend the reach of brand storytelling or promotional content, but they follow separate rules and are managed via Google Ads, not organic inclusion.

The organic Discover feed is primarily available in mobile form, though recent reports suggest that Google is testing a desktop version, which could increase its value for B2B and SaaS companies with more desktop-centric audiences.

Who Can Use It And What’s Required

Any business with a website indexed by Google is technically eligible for Discover. There’s no need to apply or sign up. However, visibility depends on several factors, which are:

– Content must be deemed helpful and relevant by Google’s systems.

– It must be well-formatted for mobile, and free from violations such as misleading titles or adult content.

– Businesses must avoid hosting content that could be seen as low-quality, offensive, or manipulative. For example, Discover applies SafeSearch filters and additional relevance controls.

It’s also worth noting that Discover content is driven by individual user interests. If those interests shift, or if the content becomes less relevant, visibility may drop. This makes Discover an unpredictable, though potentially powerful source of traffic.

As Google explains: “Given its serendipitous nature, you should consider traffic from Discover as supplemental to your keyword-driven search traffic.”

Are There Any Alternatives?

While no rival offers quite the same predictive content feed on search platforms, there are several comparable features from other tech companies, such as:

– Microsoft Start. This is a personalised news feed integrated into Windows and the Edge browser, showing curated content.

– Apple News. Available on iOS devices, this offers personalised news and magazine content. However, it prioritises selected publisher partnerships and doesn’t index the open web.

– Flipboard. This is a popular app that curates content based on interests, similar to a digital magazine.

– Facebook’s News Feed and LinkedIn’s feed can serve a similar discovery role, though these are typically limited to in-network or followed sources.

These platforms may be better suited for audience engagement, but none really match Google Discover’s reach, automation, or integration with search intent. For that reason, Discover has quietly become an essential part of the content strategy playbook, particularly for mobile-first businesses aiming to grow their reach.

What Does This Mean For Your Business?

For UK businesses, Google Discover could present a real opportunity to reach audiences who may never have searched for them. Its personalised, interest-based model allows content to appear in front of users at the moment it is most relevant, without the need for a query. This gives brands the chance to surface articles, videos or product content in a more organic and contextually aware way than traditional search. For those already investing in quality content, it provides an additional pathway to visibility, one that can support both brand building and direct engagement.

However, while the potential reach is significant, the unpredictability of Discover may make it less dependable than search for consistent traffic. The feed is shaped by shifting user interests and algorithmic decisions that are not easily controlled, meaning spikes in visibility can be short-lived. For publishers, this volatility may pose a challenge when forecasting traffic or measuring ROI. For Google, it raises wider questions about editorial responsibility and how its AI models decide what is surfaced, particularly when it comes to news or sensitive topics.

The fact that Google does not offer detailed breakdowns of Discover’s performance data also limits transparency for businesses trying to assess impact. Search Console provides a helpful overview, but for many, the lack of insight into why certain pieces appear or disappear from the feed makes strategic planning difficult. It remains a supplementary channel rather than a core traffic source, particularly for B2B organisations who are still more likely to rely on desktop-based search and direct outreach.

Even so, as Google continues to integrate Discover more tightly with its search ecosystem, the line between active search and passive discovery is blurring. This has implications not only for marketers and content creators but also for how people consume information online. Competing services from Apple, Microsoft and others offer similar functionality, but none currently combine Discover’s predictive capability with the scale and depth of Google’s search infrastructure.

For content-focused businesses, the message about Discover is that clear, high-quality, mobile-friendly content that aligns with user interests is more valuable than ever. Also, while Discover may never be a guaranteed traffic source, it is already influencing how information is surfaced and consumed. Ignoring it could, therefore, mean missing out on a growing and increasingly influential part of the search experience.

Tech News : Meta’s Tents For Data Centres Amid AI Surge

Meta is reportedly using temporary tent structures to house its growing AI infrastructure, as demand for compute power outpaces the construction of traditional data centres.

A Race for AI Compute Is Reshaping Infrastructure Plans

As the AI arms race intensifies, tech giants are confronting a new logistical challenge, i.e. where to house the vast amounts of high-performance hardware needed to train and run next-generation AI models. For Meta, the parent company of Facebook, Instagram and WhatsApp, the answer (at least in the short term) appears to be industrial-strength tents.

Reports first surfaced this month that Meta has begun deploying custom-built tented structures alongside its existing facilities to accelerate the rollout of AI computing clusters. These so-called “data tents” are not a cost-saving gimmick, but rather appear to be a calculated move to rapidly expand capacity amid what CEO Mark Zuckerberg has described as a major shift in the company’s AI strategy.

From Social Platform to AI Powerhouse

Meta’s pivot towards AI infrastructure has been fast and deliberate. For example, in early 2024, the company announced plans to create one of the world’s largest AI supercomputers, with a particular focus on supporting its open-source LLaMA family of language models. By the end of the year, it had already begun referring to it as “the most significant capital investment” in its history.

To support this, Meta is deploying tens of thousands of Nvidia’s H100 and Blackwell GPUs (high-powered computer chips designed to run and train advanced AI systems very quickly). However, it seems that building the physical infrastructure to support them has proven slower than the procurement of hardware. Traditional data centres, for example, can take 18–24 months to build and commission. Meta’s solution appears to be to use temporary hardened enclosures, which are effectively industrial tents, that can be erected and made operational in a fraction of the time.

Where It’s Happening and What It Looks Like

The first confirmed location for Meta’s tented deployments is in New Albany, Ohio, where it’s developing a major cluster codenamed Prometheus. According to recent reports from several news sources, these structures are being used to house racks of GPU servers and associated networking equipment. Each unit is reportedly modular, with advanced cooling, fire suppression, and security systems.

While Meta has not actually released any detailed specifications, the company has described the effort as a “temporary acceleration” to bridge the gap until more permanent facilities come online. Another major AI campus (codenamed Hyperion) is in development in Louisiana, with expectations that similar rapid-deployment methods may be used there too.

Why Tents and Why Now?

The use of tents may seem surprising, but Meta’s motivation is clear, i.e. it wants (needs) to train and serve large AI models at scale, and it needs the infrastructure right now, not in two years. In Zuckerberg’s own words, the company is aiming to “build enough capacity to support the next generation of AI products,” while staying competitive with the likes of OpenAI, Google, Amazon and Microsoft.

It’s also about flexibility. For example, unlike traditional data centres, which require permanent planning permissions and heavy civil works, tented enclosures can be constructed and reconfigured quickly. They offer a way to get high-density computing online in months rather than years, albeit with some compromises.

Not Just Meta

While Meta’s move is grabbing headlines, it’s not the first major tech firm to explore unconventional data centre formats. For example, during the COVID-19 pandemic, several cloud providers used temporary modular data centres, including containers and tented enclosures, to scale operations when demand surged. Microsoft famously experimented with underwater data centres as a way to reduce cooling costs and improve reliability.

Even more recently, Elon Musk’s xAI venture reportedly deployed rapid-build server farms using prefabricated containers to speed up GPU deployment in its Texas-based facilities. Also, Amazon has continued to invest in “Edge” data centres that prioritise speed and agility over permanence.

However, what sets Meta’s approach apart is the scale. For example, the company has already committed over $40 billion to AI infrastructure, and the tented deployments are part of a broader strategy to “bootstrap” its capabilities while new-generation AI-specific campuses are built from scratch.

Concerns About Resilience, Efficiency and Impact

It should be noted, however, that the move hasn’t exactly been universally welcomed. Experts have raised concerns about the reliability, cooling efficiency and ecological footprint of tent-based data operations. While Meta claims that its enclosures meet enterprise standards for uptime and safety, temporary structures are inherently more vulnerable to environmental disruption, temperature fluctuations and wear.

There are also questions about energy use. Large AI models require huge volumes of electricity to run, especially when deployed at scale. Tented structures may lack the sophisticated thermal management and energy reuse systems found in traditional hyperscale centres, raising the risk of inefficiencies and higher carbon emissions.

According to the Uptime Institute, data centres already account for up to 3 per cent of global electricity demand. If stopgap facilities become the norm during periods of infrastructure pressure, that figure could rise sharply without additional oversight or environmental controls.

Impact and Implications

For Meta, at the moment, the gamble appears to be worth it. The company is currently rolling out LLaMA 3 and investing heavily in tools like Meta AI, which it plans to integrate across its social and business platforms. The faster it can get its high-performance AI hardware up and running, the sooner it can offer AI-driven services, including advertising tools, analytics, and content generation, to enterprise clients.

For business users, the main benefit is likely to be early access to more powerful AI tools. Meta has already integrated its assistant into WhatsApp, Messenger and Instagram, with broader rollouts planned for Workplace and business messaging products. However, reliability and latency may remain issues if some of the compute is housed in temporary facilities.

The move also raises the issue of competitive pressure. For example, if Meta can deliver AI capabilities ahead of rivals by deploying fast, it may force other firms to adopt similar build strategies, even if those come with higher operational risks. For hyperscalers, the challenge will be balancing speed with sustainability and service quality.

What Comes Next?

Not surprisingly, Meta has indicated that tents are a transitional measure, not a long-term strategy. The company’s permanent data centre designs are being reworked to accommodate liquid cooling, direct GPU interconnects, and AI-native workloads. These upgraded facilities will take years to complete, but by using tents in the meantime, Meta is buying itself crucial time.

The coming months are likely to show whether the experiment works, and whether others follow suit. For now, Meta’s tents are essentially a symbol of just how fast AI is reshaping not just software, but the physical infrastructure of the internet itself.

What Does This Mean For Your Business?

The use of tents as a fast-track solution reflects the scale and urgency of Meta’s AI ambitions, but it also highlights the growing tension between speed of deployment and long-term sustainability. For all its innovation, Meta’s approach poses uncomfortable questions about resilience, energy consumption and operational risk, especially when infrastructure is housed in non-standard environments. While this kind of flexibility may offer a short-term edge, it could expose businesses and users to service disruption if systems housed in temporary structures fail under pressure or face unforeseen vulnerabilities.

That said, the sheer demand for AI infrastructure means other tech giants may not be far behind. If Meta’s experiment proves successful, we could see other players adopt similarly unconventional strategies, especially where time-to-market is critical. For UK businesses relying on AI platforms like Meta’s for content generation, analytics, or marketing tools, this could bring benefits in terms of earlier access to new capabilities. However, it also reinforces the importance of understanding where and how data services are delivered, particularly for sectors concerned with uptime, data security, and regulatory compliance.

Regulators, investors, and environmental groups will likely be watching closely. If stopgap deployments become widespread, new standards may be needed to ensure these facilities meet minimum efficiency, safety and emissions criteria. The shift to temporary infrastructure may also have knock-on effects for supply chains, local planning authorities and the data centre construction industry, as expectations around permanence and scale continue to shift.

Ultimately, Meta’s move signals a wider industry pivot, not just to AI, but to a more agile and fragmented approach to infrastructure. Whether this becomes a blueprint or a cautionary tale will depend on how well these fast-build solutions hold up under real-world conditions, and whether they can deliver the stability and sustainability that large-scale AI services increasingly demand.

Tech News : UK Adult Sites Require Age Checks By 25 July

Major adult platforms including Pornhub and Reddit must introduce advanced age verification under the Online Safety Act to block under-18s from accessing explicit content.

Why Is This Happening Now?

The UK’s Online Safety Act, passed in 2023, was designed to better protect children online, and one of its most high-profile provisions is now coming into force. For example, from 25 July, commercial pornography providers must implement “robust” age checks to stop minors from viewing explicit content. Media regulator Ofcom will oversee enforcement, and companies that don’t comply risk fines of up to £18 million or 10 per cent of global turnover.

Years of Warnings

The change comes after years of warnings about the ease with which children can currently access adult material. For example, a 2023 study commissioned by the UK government found that two-thirds of children aged 13–17 had seen online pornography, with many accessing it accidentally from mainstream sites. Until now, many porn sites only required users to click a box confirming they were over 18, a system Ofcom now calls “clearly insufficient”.

What the New Rules Require

Under the new law, pornographic sites must use “high assurance” age verification methods to prevent underage access. While they don’t need to identify users by name, they do need to check (reliably) that someone is over 18. The Act doesn’t mandate a single system, but Ofcom has recommended several approved methods.

Platforms must also ensure these checks are secure and privacy-preserving. If they fail to act, Ofcom has a range of enforcement options, from fines to blocking the site entirely within the UK. Payment providers and advertisers can also be ordered to withdraw support.

What Age Verification Methods Will Be Used?

In line with the new regulations, Ofcom has recommended a range of approved technologies that can be used to confirm someone’s age without necessarily revealing their identity. These include:

– Credit Card Checks. Users input card details, and a transaction is initiated to confirm the card is valid and held by an adult. Companies like Verifymy say they don’t share any personal information with the site itself, only returning a yes/no answer.

– Digital Identity Wallets. Digital ID apps such as Yoti or Luciditi allow users to store verified credentials (e.g., passport or driver’s licence data) and share only the necessary attribute, in this case, being over 18. The encrypted data remains under the user’s control.

– Facial Age Estimation. AI technology analyses a live photo or video to estimate whether a person is over 18. Yoti claims this can be accurate to within 1.5 years for most age groups. Critics say the idea of scanning faces to watch porn could deter users or raise privacy alarms.

– Mobile Network Checks. Some services can confirm age via a mobile phone contract, though pay-as-you-go users are often excluded.

– Open Banking Verification. By connecting to a user’s bank account, providers can check age without seeing transaction history. This option is privacy-focused but may feel excessive for users.

– Email Age Estimation. This method analyses where an email address has been used (e.g. with banks or energy firms) to estimate whether the user is likely to be an adult.

– Photo ID Uploads. Users upload a picture of a government-issued ID and a selfie. These are then matched to verify identity and age.

Each method comes with trade-offs in terms of accuracy, ease of use, and user privacy — and websites can use a combination to give users choice.

Who’s Implementing It And How?

Major adult platforms are now confirming their plans. Pornhub and several sister sites, owned by parent company Aylo, have announced they will adopt “government-approved age assurance methods”, though they haven’t specified which ones. Previously, Aylo withdrew access in some regions, such as Utah and Virginia in the US, after similar laws passed.

Reddit, meanwhile, is one of the first mainstream platforms to implement the new UK requirements. From 14 July, it began age-checking users who attempt to view “mature content” in the UK. It is using an external firm, Persona, which offers either a selfie scan or a passport photo upload. Reddit claims it won’t see the data itself, storing only a user’s date of birth and verification status.

Ofcom has welcomed these moves and warned that “other companies must now follow suit or face enforcement”.

Will It Work?

Supporters say the new rules are long overdue. Baroness Kidron, founder of the 5Rights Foundation, argues that age gates must be credible and secure if children are to be protected from harmful material. Ofcom estimates around 14 million UK adults access pornography online, and in their view, age checks can still allow legal adult access while excluding minors.

However, the plans have drawn criticism from civil liberties groups, privacy advocates, and digital policy experts.

Some argue the new rules set a troubling precedent for online freedom. David Greene, civil liberties director at the Electronic Frontier Foundation, described the legislation as a “tragedy”, warning that it effectively forces UK internet users to “show their papers” just to access lawful content.

Others question whether the approach will even be effective. Scott Babwah Brennen, of New York University’s Center on Technology Policy, noted that “there’s always going to be ways that kids can get around it,” and pointed to ongoing concerns about who collects verification data and how long it is retained.

Also, technology experts have warned that certain methods, such as facial age estimation or digital identity wallets, may feel disproportionate to the risk. While technically effective, these approaches risk normalising invasive identity checks for everyday online activities, potentially reshaping expectations around privacy across the internet.

Implications For Adult Content Providers

In terms of the implications for adult content providers, beyond the technical implementation costs, businesses risk losing traffic if users find the checks too intrusive. Some sites, like Pornhub in the US, have previously gone dark in protest at similar legislation.

In the UK, however, the financial penalties for non-compliance are now high enough to compel widespread implementation. Businesses may need to partner with certified age-checking firms, which will add another layer of cost, regulation, and liability.

Advertisers and payment processors are also under pressure. For example, Ofcom’s powers include ordering them to withdraw services from non-compliant sites, raising the stakes for the wider digital economy. For example, mainstream brands may need to re-evaluate where their ads appear and how age-gated content affects user flows.

What About Everyday Users?

For UK users, the experience of accessing adult content is about to change and possibly in ways that feel awkward or invasive. While age verification systems aim to be quick and anonymous, the requirement to share biometric data, ID scans, or bank access may put some people off entirely.

There are also broader concerns about data safety. For example, while verification firms like Yoti and Persona stress that they don’t store images or pass data to the adult sites, the reassurance will depend heavily on user trust and transparent processes.

As Iain Corby of the Age Verification Providers Association put it: “The only non-hackable database is no database at all.” However, even with privacy safeguards, the reality remains that the UK is about to become one of the first countries where accessing pornography will routinely require proof of adulthood.

What Does This Mean For Your Business?

Ofcom’s enforcement powers mean adult content providers now face direct commercial consequences for non-compliance, including fines, site blocks, and service withdrawal by payment processors or advertisers. Many will need to integrate external age assurance systems quickly, absorbing new costs and operational complexity while trying to retain user trust and engagement.

Mainstream platforms hosting adult or mature content are also affected. Reddit’s early adoption signals how widely these obligations apply, and more companies are likely to follow to avoid regulatory action. Businesses in adjacent sectors, including advertisers and mobile providers, will need to reassess how their services intersect with regulated content and whether their current systems meet Ofcom’s standards.

For users, the experience of accessing adult material will now become more controlled, and in some cases, more uncomfortable. While most verification systems avoid full identity disclosure, the requirement to submit a facial scan, ID image or bank-linked account introduces friction that didn’t previously exist. It’s likely that some users may withdraw entirely or turn to alternative platforms, raising questions about the law’s effectiveness.

From a business perspective, the changes signal a wider move towards regulated digital identity checks across age-restricted services where pornography is simply the first and most obvious test case. Online gambling, social platforms, and even e-commerce providers selling restricted goods may face similar expectations in the near future. For UK firms, especially those working with younger audiences or regulated content, this shift will demand investment in age assurance, transparency, and user communication, or risk falling foul of a new era of digital accountability.

Company Check : Google Search Now Lets AI Call Local Businesses On Your Behalf

Users in the United States can now ask Google Search to make real-world phone calls for them, gathering service and pricing information from local businesses without speaking to anyone themselves.

AI Used in Local Enquiries

Google has rolled out a new AI-powered calling feature within Search that allows users to collect information from businesses such as pet groomers, garages, and dental clinics. Instead of having to make a phone call personally, users can now instruct Google’s AI to handle the enquiry on their behalf.

UK Soon?

The feature is currently available to all Search users in the United States. It’s worth noting here that although Google has not provided a confirmed timeline for the UK launch of this feature, based on the company’s typical rollout strategy for Search and Gemini features, a wider international release often follows within several months of a successful US launch.

How To Use It

Using the new AI-powered agentic calling feature, when someone searches for a service like “dog groomers near me”, a new option appears offering to “Have AI check pricing”. Users are then asked a few follow-up questions, such as what type of pet they have, what service they need, and when they would prefer an appointment. From there, Google’s AI makes the call, gathers information, and returns a summary by email or text.

According to Google, every call begins with a clear announcement that it is an automated system from Google acting on behalf of a user. This is intended to prevent confusion and maintain transparency, especially after earlier versions of the technology were criticised for sounding too human and failing to identify themselves clearly.

How the Technology Works

In terms of the tech behind it, the feature uses a combination of Google’s Gemini model and its existing Duplex technology, which has been used for AI voice calls since 2018. Duplex originally drew attention for its ability to make bookings or ask for opening hours using natural-sounding speech, but was temporarily scaled back due to concerns about transparency and practical usage limits.

However, this new version is more focused and practical, targeting specific types of local businesses and providing structured information directly back to the user. The use of Gemini helps the system handle follow-up questions and summarise results more clearly, while Duplex provides the voice interface that handles the actual phone call.

Google has stated that business owners retain control and can opt out of receiving these calls via their Google Business Profile settings.

Access, Availability, and Cost

The AI calling feature is free to use and is currently being made available to all Search users in the US. However, those subscribed to Google’s AI Pro and AI Ultra plans will benefit from higher usage limits, allowing them to make more AI-driven requests each day.

Google AI Pro is priced at 19.99 US dollars per month. Subscribers gain access not only to enhanced call limits but also to a broader set of advanced AI features across other Google products, including Docs, Gmail, and Search.

There is no confirmed launch date for international availability, but Google has indicated that it plans to expand access globally over time.

Convenient

This feature may appeal most to people who prefer not to make calls themselves. For example, younger users in particular have shown in surveys that they are more likely to avoid phone conversations where possible. For many, the ability to compare availability and pricing from several providers without needing to speak to anyone may be seen as a welcome convenience.

For example, someone looking for car servicing could quickly receive quotes from three nearby garages with minimal effort. The AI not only makes the call but ensures the response is presented clearly and directly.

Mixed Impact For Businesses

For businesses, however, the impact may be more mixed. For example, while the system could generate new leads, it also adds a layer of automation that some business owners may find disruptive or unfamiliar. Staff answering the phone must be prepared to speak with an automated caller and provide information in a way that can be understood and relayed accurately.

Part of a Bigger Transformation in Search

Google’s introduction of AI calling is really part of a wider evolution of Search towards more agentic, action-oriented tools. For example, at the same time as launching this calling feature, the company also announced the rollout of two other significant updates for users on its AI Pro and AI Ultra subscription tiers, i.e. Gemini 2.5 Pro in AI Mode, and a new Deep Search capability designed for complex research tasks.

Gemini 2.5 Pro Comes to AI Mode

AI Mode is Google’s conversational interface in Search that allows users to pose complex or multi-part questions and receive structured answers with helpful links. Until now, it used a version of Gemini based on the 1.5 model. However, with the new rollout, paying subscribers can now switch to Gemini 2.5 Pro, a more advanced model that performs better in coding, mathematics, and advanced reasoning.

Users can select Gemini 2.5 Pro from a drop-down menu within AI Mode. The new model offers clearer logic, better problem-solving abilities, and more precise answers. Google says it is especially helpful for users tackling more technical tasks, such as software development or quantitative research.

Deep Search Adds Multi-Step Research Capabilities

Also new is Deep Search, a feature designed to save users hours of research by allowing the AI to run hundreds of background searches and reason across different sources. The result is a fully cited and structured report that addresses a query in depth.

Google says Deep Search is useful for work-related research, hobbies, academic study, or life decisions such as evaluating mortgages or comparing investment options. Rather than manually visiting multiple websites and comparing answers, users receive a compiled response that includes context, sources, and suggestions.

This feature is currently available to AI Pro and AI Ultra subscribers in the United States who have opted into Google’s AI Mode experiments in Labs. It builds on the trend of shifting from traditional search queries towards more autonomous AI assistance.

Impressive Tools with Practical Considerations

The new agentic features represent a major change in how people interact with information online. Instead of simply retrieving answers, Google’s AI now takes action on the user’s behalf, whether by conducting research or placing real-world phone calls.

However, the effectiveness of these tools will depend on adoption and reliability. If local businesses do not respond well to AI calls, or if the information returned is inconsistent, the user experience could suffer. Similarly, the shift towards subscription-based access raises concerns about accessibility, especially if more functionality becomes limited to paying users.

Even so, the direction is clear. Google is continuing to reshape Search into a more proactive and intelligent assistant, with features that aim to remove friction from both digital and real-world tasks. As the company put it in its announcement, “We’re bringing some of our most cutting-edge AI features to Google AI Pro and AI Ultra subscribers first, and we look forward to continuing to bring advanced capabilities in Search to all our users globally.”

What Does This Mean For Your Business?

Google is clearly moving Search from a place to find answers to a tool that completes tasks. Features like AI-powered calling change how users interact with businesses, removing the need for phone conversations altogether in some cases. If rolled out in the UK, this could directly affect how service providers handle enquiries, especially in sectors like grooming, repairs, and healthcare. Businesses that respond promptly and provide accurate, up-to-date information through their Google listings will be better placed to benefit. Those that fail to do so may find themselves left out of automated selection entirely.

For subscribers, the introduction of Gemini 2.5 Pro and Deep Search adds a new layer of functionality to Search. These tools are designed to deliver more complete, structured answers and reduce the time spent piecing together information manually. That is likely to appeal to professionals, researchers, and anyone dealing with complex decisions. However, the decision to reserve the most powerful features for paying users raises questions about who gets access to high-quality AI support and who does not. It may also increase pressure on non-paying users to upgrade, particularly if the standard tools begin to feel limited by comparison.

As these capabilities continue to expand, they are likely to influence how people expect digital services to behave. For UK businesses, the priority will be staying visible and responsive within this new model. For users, the benefits will depend on how well the tools perform across a range of everyday tasks, and how widely they are made available.

Security Stop Press : 6.5 Million Co-op Member Records Confirmed Stolen in Cyberattack

Co-op’s chief executive has confirmed that Hackers stole the personal data of all 6.5 million Co-op members in an April cyberattack.

The breach exposed names, addresses, and contact details, but no financial data. Co-op says it shut down its systems just in time to block a ransomware attack, though the incident still caused widespread disruption.

CEO Shirine Khoury-Haq called the attack “devastating” and praised IT staff for acting swiftly. The group behind the attack is believed to be ‘Scattered Spider’, a known cybercrime gang that uses social engineering to access internal systems.

Four suspects aged 17 to 20 were arrested and bailed earlier this month in connection with the attacks on Co-op and other UK retailers.

In response, Co-op has partnered with The Hacking Games to help guide young cyber talent into ethical careers, starting with a pilot across its academy schools.

To reduce risk, businesses should train staff to recognise impersonation tactics, restrict internal access, and ensure systems can be swiftly isolated in the event of an attack.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives