Microsoft Tests Copilot Update That Opens Web Links Inside The App
Microsoft is testing a new Copilot feature in Windows that opens web links directly inside the Copilot app rather than launching the user’s browser, allowing the assistant to display web content alongside AI conversations.
A New Way To Browse With Copilot
The change is part of an update to the Copilot app for Windows that is currently rolling out to users in the Windows Insider programme.
Under the update, when a user clicks a web link during a Copilot conversation, the page opens in a side pane next to the chat window instead of launching a separate browser window. The aim is to allow users to view web content while continuing their conversation with the AI assistant without losing context.
Microsoft said the feature is designed to make it easier to move between information sources and AI assistance during everyday tasks. In a blog post announcing the change, the company explained that when a link is opened, “Copilot opens the content in a side pane next to your conversation instead of a separate browser window, so you don’t lose context.”
Context Across Multiple Tabs
The feature also allows Copilot to work across several web pages opened during a conversation.
With user permission, the assistant can access the context of the tabs opened within that session. This allows Copilot to summarise information across pages, answer questions about multiple sources and help draft text based on what the user is reading.
Microsoft said this capability is intended to support tasks such as research, writing and document preparation, where users often need to combine information from several web pages.
Tabs opened during a Copilot conversation are saved alongside the chat history, allowing users to return to them later when reopening that conversation.
Microsoft explained that the feature allows users to “ask clarifying questions, summarise information across tabs, or ask Copilot’s help in drafting exactly the right words needed for the task.”
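Microsoft has not published how this works internally, but the general pattern of grounding an assistant in several open pages at once can be sketched in a few lines. In the example below, the tab URLs, the crude text extraction and the way the combined prompt would be passed to a model are all illustrative assumptions rather than Copilot's actual implementation.

```python
# Illustrative sketch only: how an assistant might pool the text of several
# open tabs into one prompt so it can answer cross-page questions.
from html.parser import HTMLParser
import urllib.request

class TextExtractor(HTMLParser):
    """Collects visible page text, ignoring script and style blocks."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def page_text(url: str, limit: int = 2000) -> str:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)[:limit]

# Tabs opened during the "conversation" (example URLs, not real Copilot state).
tabs = ["https://example.com", "https://example.org"]
question = "Summarise the key points across these pages."

context = "\n\n".join(f"[Tab {i+1}: {u}]\n{page_text(u)}" for i, u in enumerate(tabs))
prompt = f"{context}\n\nUser question: {question}"

# In a real assistant this prompt would be sent to a language model; here we
# simply show the combined context that makes cross-tab answers possible.
print(prompt[:500])
```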
Optional Synchronisation Features
The update also introduces optional synchronisation features designed to make the Copilot interface behave more like a browsing environment.
If users choose to enable it, passwords and form data can be synchronised so that websites accessed through the Copilot side pane work more smoothly during tasks such as logging in or completing forms.
Microsoft says this functionality is optional and requires user permission. However, the possibility of synchronising sensitive information inside the Copilot interface may raise questions for some users following recent debates around AI assistants and personal data handling.
How The Technology Works
Technically, the browsing capability appears to rely on Microsoft’s WebView2 framework, which allows developers to embed a Chromium-based browser engine directly inside Windows applications.
This approach enables the Copilot app to display full web pages without launching a separate browser program.
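As a rough illustration of the embedded-browser pattern (and not Microsoft's own code), the cross-platform pywebview package can open a page in an app-hosted pane in a few lines; on Windows it typically renders with the same WebView2 / Edge Chromium engine when that runtime is available.

```python
# Minimal sketch: embedding a web view inside a desktop app so a page opens
# in an app pane rather than the user's default browser.
# Requires the third-party pywebview package (pip install pywebview).
import webview

# Open a page in an embedded pane, roughly analogous to a side-pane view.
window = webview.create_window(
    title="Side-pane style embedded page",
    url="https://example.com",
    width=480,
    height=720,
)

if __name__ == "__main__":
    webview.start()  # blocks until the embedded window is closed
```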
Embedding browsing functionality directly inside an AI assistant also allows Copilot to analyse the information displayed on those pages and respond to questions about it within the same interface.
From Microsoft’s perspective, this integration helps turn Copilot into a more complete productivity environment where web research, reading and writing tasks can all happen in one place.
Concerns From Browser Vendors
The update has also raised questions among some browser vendors and technology observers.
Traditionally, clicking a web link in Windows opens the user’s default browser, along with their preferred settings, extensions and security configurations.
Opening links directly inside the Copilot app could bypass that behaviour by keeping users inside Microsoft’s own application environment. Critics argue that such changes could affect competition among browser providers, although the feature is still in preview and may evolve before a full release.
At the moment, Microsoft has not provided detailed clarification about how the feature will interact with users’ default browser settings once the update becomes widely available.
Insider Testing Phase
The feature is currently limited to Windows Insider builds and is being rolled out gradually across Insider channels.
According to Microsoft, the update is part of a broader effort to improve the Copilot app by making it faster, more reliable and more closely aligned with the latest Copilot features available on the web.
The update also brings some capabilities from Copilot.com into the Windows app, including features such as Podcasts and Study and Learn mode, while other elements may be temporarily removed while the company refines the experience.
As with many Insider previews, the company says the design may change before the updated Copilot app becomes generally available to all Windows users.
What Does This Mean For Your Business?
For organisations using Windows and AI tools such as Copilot, the update highlights how rapidly AI assistants are evolving from simple chat interfaces into integrated productivity environments.
Embedding web browsing directly inside an AI assistant could streamline tasks such as research, writing, analysis and document preparation, particularly when employees need to combine information from multiple online sources.
However, it also introduces new questions about browser behaviour, data access and security policies. Businesses may need to review how AI tools interact with web content, authentication systems and sensitive information, especially if features such as password synchronisation are enabled.
As AI assistants become more tightly integrated with everyday computing environments, organisations will increasingly need to balance productivity benefits with governance, security and compliance considerations.
Company Check : Why Meta Is Being Sued Over AI Smart Glasses
Meta is facing a class action lawsuit in the United States over allegations that its AI-powered smart glasses collected and reviewed sensitive footage in ways users did not reasonably expect, raising new questions about privacy, transparency and the human labour behind modern AI systems.
Meta’s Ray-Ban Smart Glasses
The product at the centre of the controversy is Meta’s Ray-Ban smart glasses, developed in partnership with eyewear manufacturer EssilorLuxottica.
The glasses look similar to ordinary frames but include built-in cameras, microphones and an AI assistant that can take photos, record video, answer questions and analyse what the wearer is looking at. Users activate the assistant with a voice command such as “Hey Meta”.
The system works by sending captured data such as images, voice queries and video to Meta’s cloud infrastructure, where AI models interpret the information and generate responses.
These smart glasses are part of a growing category of wearable AI products designed to act as hands-free digital assistants integrated into everyday life.
What (Allegedly) Happened?
The legal case was filed in the US by two consumers who claim Meta misled customers about how the glasses handle personal data.
The lawsuit argues that Meta marketed the devices using statements such as “designed for privacy, controlled by you” and “built for your privacy”. According to the complaint, these claims gave users the impression that recordings captured through the glasses would remain private.
However, the case alleges that footage collected through the devices could be reviewed by human contractors involved in training Meta’s AI systems.
The complaint also names Luxottica of America, Meta’s manufacturing partner, and claims the companies violated consumer protection laws through misleading marketing.
The Investigation That Triggered The Case
The lawsuit follows an investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten.
Journalists interviewed workers at a Nairobi-based outsourcing company contracted to review data captured through the glasses. These workers act as data annotators, labelling images, video and transcripts so that AI systems can better understand real-world environments.
According to the investigation, the review queue sometimes included extremely private material captured by the glasses.
Workers said they encountered footage showing people undressing, using the toilet or engaging in intimate moments, alongside everyday scenes from homes and workplaces.
One worker described the scale of the material by saying: “We see everything – from living rooms to naked bodies.”
The Role Of Human Review
Human review is a common part of how AI systems are trained and improved.
When users interact with an AI assistant, some of those interactions may be reviewed by humans to check that the system is producing accurate results and responding appropriately.
Meta’s own AI terms state: “In some cases Meta will review your interactions with AIs… and this review may be automated or manual (human).”
According to the company, this process helps improve how the glasses interpret images, recognise objects and answer questions about the environment.
However, critics argue that users may not fully realise that recordings captured through wearable devices could enter a review pipeline involving human contractors.
Why Regulators Are Now Involved
The revelations have drawn the attention of regulators here in the UK.
The UK’s Information Commissioner’s Office confirmed it is contacting Meta after the claims emerged. The regulator described the allegations as “concerning” and said organisations developing products that process personal data must clearly explain how that data is used.
A spokesperson said devices that collect personal data should “put users in control and provide appropriate transparency”, particularly where the data may be used to train artificial intelligence systems.
The issue also raises questions about international data transfers. The workers reviewing the footage were employed by a subcontractor in Kenya, meaning data could potentially be processed outside the jurisdictions where the glasses are sold.
What Meta Says
Meta says that media captured by the glasses normally stays on the user’s device unless it is shared with Meta services.
The company also says it uses filtering techniques, including face blurring, to reduce the risk of identifying individuals in reviewed material.
In a statement, the company said contractors may sometimes review content shared with Meta AI in order to improve the experience provided by the glasses.
Meta has also pointed to its privacy policies and terms of service, which describe the possibility of automated or human review of interactions with its AI systems.
Why The Issue Matters
The controversy highlights a broader challenge facing many AI-powered products.
While these systems are marketed as automated technology, they often depend on large networks of human workers who label and review data in order to train AI models.
This hidden workforce is essential to machine learning systems, yet the role they play is often invisible to consumers.
The Meta case also raises questions about how transparent companies should be when marketing devices that capture audio, video and environmental data throughout daily life.
As wearable AI becomes more common, the line between personal devices and surveillance technology may become harder to define.
What Does This Mean For Your Business?
For organisations adopting AI-enabled devices or platforms, the case highlights the importance of understanding how data is collected, processed and reviewed behind the scenes.
AI tools frequently rely on human review processes, particularly during training and quality assurance. Businesses deploying such technologies must consider whether users, customers or employees fully understand how their data might be used.
The case also demonstrates how privacy expectations can quickly become legal disputes when marketing claims appear to conflict with how systems actually operate.
For technology companies, the issue reinforces the need for clear communication about data practices. For organisations adopting AI tools, it underlines the importance of governance, transparency and careful evaluation of how AI systems handle sensitive information.
Security Stop-Press : Data Brokers Selling AI Chat Transcripts
Researchers warn that private conversations with AI chatbots may be ending up in commercial databases sold by data brokers.
It’s been reported that AI visibility researcher Lee S Dryburgh found that some browser extensions marketed as free VPNs or ad blockers can intercept traffic to services such as ChatGPT, Gemini, Claude and DeepSeek, capturing both prompts and responses before they reach the chatbot provider.
Dryburgh’s analysis reportedly uncovered about 490 prompts from more than 435 users across sensitive topics including medical issues, financial problems and immigration questions, with some conversations containing identifiable personal details.
For businesses, the lesson is to avoid entering confidential information into public AI chat tools and to restrict untrusted browser extensions, which can capture sensitive data before it even reaches the AI service.
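One practical control is to audit installed browser extensions for overly broad host permissions, since an extension that can read every site can also read chatbot traffic. The sketch below assumes default Chrome profile locations, which may differ on your machine or for other browsers.

```python
# Sketch of a quick local audit: flag extensions whose manifests request
# broad host access. The profile paths are common defaults, not guarantees.
import json
import os
import pathlib

CANDIDATE_DIRS = [
    pathlib.Path(os.environ.get("LOCALAPPDATA", "")) / "Google/Chrome/User Data/Default/Extensions",
    pathlib.Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",
    pathlib.Path.home() / ".config/google-chrome/Default/Extensions",
]

# Permission patterns that allow an extension to read any site the user visits.
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

for base in CANDIDATE_DIRS:
    if not base.is_dir():
        continue
    # Extensions are stored as <extension-id>/<version>/manifest.json
    for manifest in base.glob("*/*/manifest.json"):
        data = json.loads(manifest.read_text(encoding="utf-8-sig", errors="ignore"))
        perms = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        if perms & BROAD:
            print(f"{data.get('name', manifest.parent)} can read all sites: {sorted(perms & BROAD)}")
```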
Sustainability-in-Tech : UK Data Centre Cuts AI Power Use By 40 Per Cent
A UK data centre has demonstrated that artificial intelligence infrastructure can reduce its electricity consumption by up to 40 per cent in response to grid signals without interrupting critical computing workloads.
A UK-First Trial Of Flexible AI Infrastructure
The demonstration took place at Nebius’s “AI Factory” data centre near London and was conducted in partnership with National Grid, Emerald AI, the Electric Power Research Institute (EPRI), and NVIDIA. The project was designed to test whether high-performance AI infrastructure could act as a flexible energy asset rather than a fixed electricity load.
Over five days in December 2025, a cluster of NVIDIA Blackwell Ultra GPUs was subjected to more than 200 simulated grid events. These signals instructed the facility to adjust its electricity consumption under different conditions, including scenarios where the system had little or no advance warning.
According to the project’s white paper, the cluster achieved full compliance with all requested power targets and ramp-rate requirements while maintaining normal operation of key workloads. National Grid Partners described the results as evidence that high-performance AI infrastructure can operate as “a power-flexible, grid-responsive asset without disrupting mission-critical workloads.”
How The System Reduced Power Demand
The trial involved a 130 kW compute cluster running realistic AI training workloads based on open models such as Llama, Qwen and GPT-OSS. The cluster was deliberately kept busy throughout the experiment in order to simulate real production conditions.
Rather than switching servers off, the system reduced electricity consumption by dynamically managing how GPU workloads were scheduled and executed. Lower-priority tasks could be paused, delayed or temporarily slowed, allowing the cluster’s power draw to fall when grid operators requested a reduction.
This approach relies on the nature of many AI workloads. Model training and fine-tuning often run for long periods and include natural pause points, known as checkpoints, where processing can be safely interrupted without losing progress.
By contrast, latency-sensitive tasks such as inference can continue running normally while background training workloads absorb most of the power adjustments.
The orchestration software coordinating this behaviour was provided by US-based AI infrastructure company Emerald AI. Its platform interprets grid signals and automatically adjusts computing workloads so that the data centre can respond quickly to changes in electricity demand.
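Neither Emerald AI nor Nebius has published its scheduling logic, but the basic idea of shedding lower-priority work to meet a grid-requested power cap can be sketched as follows. The job names, power figures and the "pause at the next checkpoint" step are illustrative assumptions, not the actual platform.

```python
# Illustrative sketch only: an orchestrator that pauses lower-priority GPU
# jobs until the cluster's estimated power draw fits under a grid-requested cap.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int      # lower number = more important (e.g. live inference)
    power_kw: float    # estimated draw while running
    running: bool = True

def apply_power_cap(jobs: list[Job], cap_kw: float) -> float:
    """Pause the least important running jobs until total draw <= cap_kw."""
    for job in sorted(jobs, key=lambda j: j.priority, reverse=True):
        total = sum(j.power_kw for j in jobs if j.running)
        if total <= cap_kw:
            break
        if job.running and job.priority > 0:  # never pause priority-0 inference
            # Training jobs can stop at a checkpoint and resume later
            # without losing progress.
            job.running = False
            print(f"Pausing {job.name} at next checkpoint (-{job.power_kw} kW)")
    return sum(j.power_kw for j in jobs if j.running)

cluster = [
    Job("chat-inference", priority=0, power_kw=20),
    Job("llama-finetune", priority=2, power_kw=60),
    Job("batch-eval", priority=3, power_kw=50),
]

# Grid event: curtail a 130 kW cluster to roughly 60 per cent of full draw.
remaining = apply_power_cap(cluster, cap_kw=80)
print(f"Estimated draw after curtailment: {remaining} kW")
```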
Testing Real-World Grid Events
Some of the simulated grid signals included immediate reduction requests with no ramp-down period, forcing the system to respond rapidly. Others provided advance warning and allowed the cluster to gradually reduce its consumption.
The trial also modelled real electricity demand patterns. One scenario simulated the well-known “TV pickup” effect in the UK, where millions of households switch on kettles during the half-time break of major football matches or television programmes.
These sudden surges can add around one gigawatt of demand to the grid within minutes. During the simulation, the AI cluster automatically reduced its power consumption as demand increased, demonstrating how data centres could help stabilise electricity networks during peak usage.
National Grid Partners president Steve Smith said the results challenge assumptions about the impact of AI infrastructure on electricity systems. As he explained, “as the UK’s digital economy accelerates, there’s concern that datacentres could add pressure to an already constrained system. This trial proves the opposite can be true.”
He added that the results suggest high-performance computing facilities “don’t have to place additional strain on the grid,” but could instead contribute to more flexible and responsive electricity systems.
Why AI Power Demand Is Becoming A Major Issue
The experiment took place against the backdrop of rapidly growing electricity demand from AI computing. Training large AI models requires enormous GPU clusters operating continuously, and global data centre power consumption is expected to rise significantly as AI adoption expands.
Grid operators are increasingly concerned that new data centres could strain already constrained electricity systems. In the UK, demand for grid connections has grown rapidly in recent years as developers race to build AI infrastructure.
Traditional data centres are usually treated as “firm loads”, meaning the electricity system must assume they will draw their full power requirements at all times. The London trial explored an alternative model in which data centres act as flexible loads that can temporarily reduce consumption during periods of grid stress.
If implemented at scale, this approach could make it easier for electricity networks to accommodate the growth of AI infrastructure while maintaining grid stability.
What Does This Mean For Your Business?
For businesses building or using AI infrastructure, the trial highlights a possible change in how data centres can interact with energy systems.
AI computing has often been criticised for its high energy consumption, particularly as demand for generative AI services continues to grow. The London trial suggests that AI infrastructure may also offer new tools for managing electricity demand more intelligently.
Flexible computing loads could allow data centres to reduce power consumption during peak demand periods or when renewable energy supply is limited. This could help organisations balance sustainability goals with the growing need for high-performance computing.
However, the model also introduces new operational considerations. Running AI infrastructure as a flexible grid resource requires sophisticated workload management systems capable of pausing or rescheduling non-critical tasks without affecting service levels.
As AI becomes more deeply integrated into business operations, the ability to manage computing workloads in ways that support both performance and energy resilience may become an important part of future data centre strategy.
Video Update : Check Out ChatGPT’s NEW Voice Mode
ChatGPT’s newest voice-mode capability is awesome and we think it’ll change the way people interact with their tech. Check out what you can do with it in this video, which walks through three examples.
[Note: To watch this video without glitches or interruptions, it may be best to download it first.]
Tech Tip : Check Which Apps Have Access To Your Google Or Microsoft Account
Many users grant third-party apps access to their Google or Microsoft account and then forget about them, so regularly reviewing and removing unused connections is a simple way to reduce unnecessary access to your business data.
Why This Matters
Applications often request permission to access email, files, calendars or contacts. Over time these connections accumulate as people try new tools, browser extensions or productivity apps.
Some of these integrations are legitimate and useful. Others may be unused or unnecessary, yet still retain permission to access data.
Regularly reviewing connected apps helps reduce risk and improves account security.
How To Check Connected Apps In Google
– Go to https://myaccount.google.com/security
– Scroll to the section labelled Your connections to third-party apps and services.
– Select See all connections.
– Review each app listed.
– Remove access for anything you no longer recognise or use.
How To Check Connected Apps In Microsoft
– Go to https://account.live.com/consent/Manage
– Sign in with your Microsoft account if prompted.
– Review the list of applications that have been granted permission to access your account.
– Select the app you want to review.
– Choose Remove these permissions if the app is no longer required.
If you are using a work or school account, some permissions may be managed by your organisation’s administrator.
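For administrators of work or school tenants, the same review can be scripted against Microsoft Graph. The sketch below assumes you already hold a Graph access token with directory read permission (for example Directory.Read.All); it lists delegated permission grants so that unused or over-broad app consents can be identified and then revoked.

```python
# Sketch for tenant administrators: list delegated permission grants via
# Microsoft Graph. The access token is a placeholder you must obtain through
# your own authentication flow.
import requests

ACCESS_TOKEN = "<graph-access-token>"  # placeholder, not a real token
url = "https://graph.microsoft.com/v1.0/oauth2PermissionGrants"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

while url:
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    for grant in data.get("value", []):
        # clientId identifies the app that was granted access;
        # scope shows what it is allowed to do on users' behalf.
        print(grant["clientId"], grant.get("consentType"), grant.get("scope"))
    url = data.get("@odata.nextLink")  # follow paging if more results exist
```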
What To Look For
When reviewing access, consider:
– Unused productivity tools.
– Old browser extensions.
– Trial software you no longer use.
– Unknown or unfamiliar applications.
Removing these connections reduces the number of pathways that could potentially access your account.
A Practical Approach
Set a reminder to review connected apps every few months. Removing unused integrations keeps your account cleaner and reduces unnecessary exposure to third-party services.