Featured Article : ChatGPT Now Records & Can Access Your Files
ChatGPT now includes meeting recording, cloud integration and deep research tools, marking its biggest push yet into everyday business workflows.
Features For Everyday Business Use
With over 3 million enterprise-focused customers now using ChatGPT (up from 2 million earlier this year), OpenAI appears intent on securing its place in the core workflows of modern businesses. With this in mind, it has released a set of new business-focused features designed to embed ChatGPT more deeply into the platforms, files and meetings professionals already depend on.
Update
The latest ChatGPT feature update, designed specifically for paid users across business and education plans, introduces three key capabilities that shift ChatGPT from a smart chatbot to a practical everyday work assistant. These are:
1. Cloud connectors, which let users query documents in platforms like Google Drive or SharePoint.
2. Meeting recording and transcription, available directly inside the ChatGPT (macOS) app.
3. Deep research tools that aggregate and cite information from a variety of business apps and data sources.
It seems that each one has been designed with a view to reducing friction, eliminating app-switching, and (hopefully) helping users access, understand and act upon information more efficiently.
Search Across Your Own Files With Cloud Connectors
One of the most immediately useful additions, ‘Cloud connectors’, means users can connect ChatGPT to leading cloud services. Supported platforms include Google Drive, Microsoft OneDrive, Microsoft SharePoint, Dropbox, and Box.
Once connected, ChatGPT can access stored files such as PDFs, Word documents, presentations and spreadsheets, and use that content to respond to user queries. The functionality supports both simple search (“Find last week’s planning document”) and more complex analysis (“Summarise our Q2 sales figures from uploaded reports”).
The connectors operate with full respect for organisational access permissions, i.e. only content which the user is allowed to access is returned, and all files are previewed directly inside the chat for faster referencing.
Who Can Use it?
Cloud connectors are available to Team, Enterprise, and Edu users globally. Pro and Plus users can access them too, except in the UK, Switzerland and the European Economic Area, where availability is restricted for now due to data privacy regulations.
Meeting Recording Gives Structured Notes from Live Conversations
ChatGPT now also includes ‘Record Mode’, the ability to record and transcribe meetings or voice notes, available through its macOS desktop app for Team users. The tool turns spoken content into structured, searchable summaries, complete with key points, time-stamped citations, and suggested action items.
How?
After a recording is made, the output is saved as a canvas document, which can then be edited, expanded, or turned into emails, project plans or even code. It also becomes part of the user’s searchable knowledge base within ChatGPT.
For example, a team lead could ask: “What did we agree during Monday’s planning meeting?”
ChatGPT would respond with a time-stamped summary pulled from the transcript, thereby saving the need to rewatch the recording or chase colleagues for notes.
Limitations and Availability
Record Mode is only available to users on Team plans using the macOS desktop app. OpenAI says recording sessions can be up to two hours long, and transcripts follow the workspace’s retention policies. Rollout to Enterprise and Edu users is planned, but there’s currently no browser-based option, and speaker diarisation (i.e. who said what) is not yet supported.
Deep Research Connectors For Insights Across Apps
The new ‘Deep Research’ mode allows ChatGPT to produce detailed, cited outputs by pulling together information from internal tools, cloud documents and the web. Rather than simply responding to queries in chat, this mode builds more structured research reports that are tailored to a given task.
Supported connectors include:
– GitHub and Linear (engineering and development).
– HubSpot (CRM and marketing).
– Google Drive, Gmail and Calendar.
– Microsoft Outlook, SharePoint, Teams and OneDrive.
Typical use cases could include reviewing recent project work, summarising customer conversations, or combining internal product documents with external market insights.
Users can even export the result as a professionally formatted PDF, with tables, links and citations included.
Who Gets Access?
Deep Research is available to Pro, Plus, Team, Enterprise and Edu users, excluding the UK, EEA and Switzerland. There’s no Free tier access, and setup varies by platform: some connectors require user authentication, while others are pre-approved by admins.
Model Context Protocol (MCP)
For businesses with custom internal systems or industry-specific data, OpenAI now supports the Model Context Protocol (MCP). This allows technical teams to build their own connectors that link ChatGPT to virtually any structured data source.
For example, these custom connectors can retrieve internal information, such as customer records, billing data or support tickets, and allow ChatGPT to query it as part of a deep research task. The results are combined with public data and other connected apps to create cohesive reports.
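For illustration, a custom connector is essentially a small MCP server that exposes internal data as tools ChatGPT can call. The sketch below is a minimal, hypothetical example using the open-source MCP Python SDK; the connector name, the get_support_tickets tool and its stubbed data are assumptions made for illustration rather than OpenAI’s or any vendor’s actual schema, and a real deployment would run on a remote server behind the organisation’s own authentication.

```python
# Minimal, hypothetical sketch of a custom MCP connector using the
# open-source MCP Python SDK (pip install mcp). The tool name, fields
# and data below are illustrative assumptions, not a real schema.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-support-data")

@mcp.tool()
def get_support_tickets(customer_id: str) -> list[dict]:
    """Return open support tickets for a customer (stubbed example data)."""
    # A real connector would query the organisation's ticketing system here,
    # applying its own access controls before returning anything.
    return [
        {"id": "T-1001", "customer": customer_id, "status": "open",
         "summary": "Billing discrepancy on last invoice"},
    ]

if __name__ == "__main__":
    # ChatGPT reaches approved connectors via a remote server deployed by
    # admins; running the server locally here is purely for illustration.
    mcp.run()
```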
Access and Setup
It should be noted that MCP is only available for Pro, Team, Enterprise and Edu customers. Admins are responsible for deploying MCP connectors via a remote server. Once approved, they become available across the entire workspace.
This feature may be particularly useful for large organisations looking to integrate ChatGPT into existing business intelligence systems or build AI-powered internal knowledge tools.
Practical Use Examples
Some examples of how these features could be used to support everyday business tasks include:
– A sales manager using HubSpot data to analyse deal close rates across regions.
– A product owner recording a team call and using ChatGPT to generate a roadmap summary.
– An analyst asking ChatGPT to pull data from Dropbox and Google Drive to create a performance report.
– A developer linking GitHub to summarise pull requests or past sprint changes.
In each case, using these new features, ChatGPT can act as a kind of AI research assistant that’s able to pull from multiple sources, remember context, and suggest outputs tailored to the task.
What About Security, Privacy and Control?
With these new features, it seems that OpenAI has taken some steps to address enterprise concerns around data usage and privacy. For example, OpenAI is keen to point out that:
– Data from connectors and Record Mode is not used to train models for Team, Enterprise and Edu users.
– Audio recordings are deleted immediately after transcription.
– All connector access is opt-in and user-authenticated, and connectors only search files that users have permission to view.
– Admins can restrict or disable access to specific tools through workspace settings.
However, for users on the Free, Plus or Pro plans, OpenAI may use data from connectors to train its models if the “Improve the model for everyone” setting is enabled. Businesses on these plans may need to check this setting to ensure it aligns with their data policies.
A Step Ahead of the Competition?
This move looks like an attempt by OpenAI to position ChatGPT as a serious contender in the growing race for AI-powered productivity. While Microsoft’s Copilot and Google’s Gemini already integrate tightly with their own ecosystems, ChatGPT offers something different: broad compatibility with multiple tools, deep natural language understanding, and cross-platform flexibility.
Smaller players like Notion, ClickUp and Zoom have also added AI-powered summaries or transcription features in recent months, but it seems that OpenAI’s latest update offers a more expansive set of capabilities in one interface, provided companies are willing to integrate their workflows.
There are also signs that OpenAI may expand these features further. For example, the company’s documentation notes that more connectors are in development, and that browser support for Record Mode and broader language transcription are both on the roadmap.
What Does This Mean For Your Business?
It looks as though businesses currently using ChatGPT on a Team or Enterprise plan could stand to gain some immediate, practical benefits from this update. For example, being able to search internal documents, capture meetings with actionable summaries, and generate reports from connected tools could help teams cut down on duplication, reduce time spent switching between apps, and improve the speed and quality of decisions. For knowledge-heavy sectors such as finance, legal services, software development or consultancy, these tools offer a way to bring routine research and documentation tasks under one roof.
However, the regional limitations are hard to ignore. For example, it seems that businesses in the UK, the EEA and Switzerland on Pro and Plus plans are currently excluded from using many of the new connectors and deep research features. While this is due to data privacy rules, it still creates inconsistency for organisations with teams in multiple countries, and may affect uptake in regulated industries unless a clearer roadmap for availability is published.
For others in the AI productivity space, the implications are also significant. For example, OpenAI’s approach of building connections into widely used tools like Google Drive, Outlook and HubSpot allows ChatGPT to operate more flexibly across mixed tech environments than many rivals. Microsoft and Google still have the advantage of full-stack integration, but this update increases the pressure on them to improve openness and compatibility. Smaller platforms like Notion, Zoom and ClickUp, which have been quick to adopt AI features, may struggle to match the breadth of this offering unless they build similar connector frameworks.
It seems, therefore, that what happens next may come down to usability and trust. If OpenAI can make these features accessible without excessive setup, and if organisations are confident in the way their data is handled, ChatGPT could become far more than a clever chatbot. It could start to take on the role of an always-available assistant and, crucially, one that understands the context, connects the dots, and can work quietly behind the scenes to keep teams informed, aligned and productive.
Tech Insight : Microsoft Deleting Saved Passwords From Auth App
Microsoft is warning users that saved passwords will soon be deleted from its Authenticator app, as it phases out the feature in favour of Edge and passkeys.
Major Changes Coming to Microsoft Authenticator
Millions of Microsoft users are being urged to take action ahead of a planned overhaul to the Microsoft Authenticator app. Microsoft says that from August 2025, the app will no longer store or provide access to saved passwords. The change is apparently part of Microsoft’s wider push towards a passwordless future and will directly impact individuals and businesses who rely on Authenticator to manage credentials.
The phased retirement of password and autofill functionality in the app begins this month (June 2025) and ends with permanent deletion in August.
Improved Security and Streamlining
Microsoft says the move is intended to improve account security and streamline its identity tools, but critics have raised concerns about user disruption and the company’s growing dependence on its Edge browser.
What Exactly Is Happening And When?
According to Microsoft’s official support documentation, the changes will roll out in three key stages:
– From June 2025, users will no longer be able to add or import new passwords into the Authenticator app. The app will still autofill existing saved passwords for a short time.
– During July 2025, the autofill feature will be fully disabled, and any stored payment information will be deleted from user devices.
– From August 2025, all previously saved passwords will be permanently inaccessible in the Authenticator app. Any passwords generated through the app but not saved will also be lost.
Microsoft is, therefore, urging users to export their passwords before the August deadline or risk losing them permanently.
Microsoft’s Password Problem
At the heart of the decision is the fundamental issue that passwords are no longer seen as secure. Microsoft’s own internal data suggests the scale of the threat has worsened. In a blog post published last December, the company said it was blocking an average of 7,000 password attacks per second, nearly double the rate from the previous year. Phishing campaigns, brute-force attacks, and credential stuffing continue to rise.
As the blog noted, “Bad actors know [passwords are dying], which is why they’re desperately accelerating password-related attacks while they still can.”
It should be noted here that Microsoft is not alone in this assessment. For example, data from the FIDO Alliance shows that over 35 per cent of people have had at least one online account compromised due to password vulnerabilities. Meanwhile, 54 per cent of those familiar with passkeys say they’re more convenient than passwords, and 53 per cent say they’re more secure.
It seems that Microsoft sees this moment as an opportunity to transition users to more modern authentication methods, particularly passkeys, i.e. cryptographic credentials typically unlocked with biometrics such as a fingerprint or facial recognition, which are less vulnerable to traditional forms of hacking.
A Nudge Towards Microsoft Edge
In practical terms, Microsoft is also consolidating its password management services under its Edge browser. Users who still want Microsoft to handle their credentials are being directed to switch to Edge, where passwords, addresses, and other autofill data can be securely stored in their Microsoft account.
A new splash screen in the Authenticator app now encourages users to “Turn on Edge” for this purpose. Microsoft notes that passwords are synced with the user’s Microsoft account and can be accessed by signing into Edge, where they are stored under Settings > Passwords.
This change isn’t just about security. It’s clear that this move is also designed to help strengthen Microsoft’s long-standing campaign to increase adoption of its browser. As part of this push, password autofill services are no longer available through Authenticator in Chrome, Safari, or other third-party browsers. Users who don’t want to use Edge are advised to export their passwords and switch to an alternative password manager such as Google Password Manager or iCloud Keychain.
What About Passkeys and 2FA?
Although password storage is being removed, the Microsoft Authenticator app itself isn’t going anywhere. It will continue to support two-factor authentication (2FA), including time-based one-time passwords (TOTP) and biometric logins.
More importantly, Authenticator will remain central to Microsoft’s passkey system. If users have already enabled passkeys for their Microsoft account, they must keep Authenticator enabled as their designated passkey provider. Disabling the app may break access to those accounts.
Passkeys “Superior”
Microsoft says passkeys offer a “superior user experience” by enabling faster logins that are resistant to phishing and replay attacks. But the technology is still in early stages, and many websites and systems, especially in the enterprise world, have yet to adopt it widely.
What Users Need To Do
For individual users, the priority is clear, i.e. export any saved passwords from Authenticator before 1 August 2025. Microsoft warns that any unsaved credentials will be lost, and payment details stored in the app will be deleted by July.
To keep using Microsoft’s ecosystem, users can set Microsoft Edge as their autofill provider on iOS or Android. Those wanting to move to a different platform must export their credentials, then import them into the new tool.
More Complex For Business Users
However, as may be expected, it seems that business users, especially those in IT administration roles, face more complexity. This is because many organisations use Authenticator not only for employee 2FA, but also as a password vault for accessing internal systems and client accounts. The removal of this functionality could lead to operational disruption if not properly managed.
Enterprises will, therefore, need to review whether Edge is suitable across their environments, or whether to transition to third-party tools like Keeper, 1Password, LastPass, or Bitwarden, and others, many of which offer team vaults and admin controls.
Microsoft has published step-by-step guides for exporting credentials from the app and importing them into Edge. However, the company also warns that exported passwords are not encrypted, so users should delete the exported file immediately after importing it to avoid exposing sensitive information.
Criticism and Concerns
Despite the security rationale given by Microsoft, the move hasn’t gone without criticism. For example, some users see it as an aggressive tactic to push people towards Microsoft Edge. Others are concerned about losing the flexibility that came with Authenticator’s cross-browser compatibility.
The change also comes at a time when Microsoft has faced growing scrutiny over its handling of security. Recent phishing campaigns targeting Microsoft accounts have used Google Apps Script to host realistic-looking fake login pages, tricking users into entering credentials. By removing password storage and advocating for passkeys, Microsoft is positioning itself as proactive, but some argue the change is reactive to recent threats.
Also, many IT professionals, including managed service providers (MSPs), have expressed reservations about using browsers to store sensitive information such as passwords. While Microsoft maintains that Edge is a secure, enterprise-grade browser with built-in defences like Defender SmartScreen and Password Monitor, it remains the case that most security-conscious businesses recommend dedicated password managers instead.
Some MSPs, for example, point users towards platforms like Keeper, which offer stronger access control, audit trails, and encryption options tailored for business environments. Even mainstream alternatives like LastPass (once widely used) have lost trust following a high-profile security breach in 2022, which saw attackers steal encrypted vault data. This has left many in the industry sceptical of relying solely on browser-integrated tools for credential storage.
As a result, it seems that IT teams now face a more difficult decision. Microsoft’s advice to migrate to Edge may be convenient, but it is unlikely to satisfy organisations with strict compliance policies, high-value systems, or users working across multiple platforms. For many, this change serves as a prompt to reassess their overall password and identity management strategy—and not simply swap one tool for another.
Also, it should be noted that, quite simply, not all users or organisations are ready for a passwordless future. Adoption of passkeys remains patchy, and migrating authentication systems requires time, budget, and user training. For small businesses or non-technical users, these changes may be frustratingly complex.
Microsoft appears to be aware of these challenges but remains committed to the transition. As the company put it, “The password era is ending”—and with password-based attacks continuing to rise, the shift may be less about convenience and more about survival.
What Does This Mean For Your Business?
The next few months may be critical for users and organisations who rely on Microsoft Authenticator for password storage. While the company has made its intentions clear and set out a defined timeline, the practical implications are not quite so straightforward. Users will need to act quickly to export their credentials, and those choosing to remain within Microsoft’s ecosystem will need to familiarise themselves with Edge’s autofill features. For many, this will simply be a matter of adjustment. However, for others, particularly in business environments where systems, devices and browsers vary, the change raises more complex operational and security considerations.
For businesses, the impact could be significant. Many will now be forced to re-evaluate how they manage shared logins, administrative access and compliance-sensitive credentials. Microsoft’s preference for its own browser may not align with existing IT policies, particularly in organisations where Chrome or Safari is the standard. Also, while Microsoft promotes Edge as a secure alternative, longstanding guidance from many managed service providers in the UK still discourages storing passwords in any browser. Instead, tools like Keeper (there are other tools), favoured by many MSPs for their advanced controls and business-grade encryption, are often recommended as more robust alternatives.
At the same time, Microsoft’s strategy seems to reflect a wider shift that is now shaping the security landscape. Passwords have long been a weak point, and with attack volumes rising year on year, the company’s decision to pivot towards passkeys is consistent with broader industry trends. However, the reality is that many businesses, especially smaller ones, are not yet equipped to make this leap. Compatibility gaps, legacy systems, and limited resources all present barriers to adoption. Without careful planning and communication, the risk is that essential authentication processes could be disrupted or improperly migrated.
What’s clear in all this is that Microsoft is pushing ahead regardless. By retiring password storage from Authenticator and tying remaining functionality to Edge and passkeys, the company is accelerating a shift that many see as inevitable. Whether this benefits users in the short term may depend less on Microsoft’s vision and more on how quickly organisations can respond, adapt and put the right alternatives in place. For now, IT teams will need to weigh the convenience of Microsoft’s path against the operational demands and risks that come with changing how people log in.
Tech News : Autofocus Glasses Now & Printed Kidneys Soon
Two European startups have developed groundbreaking tools, including real-time autofocus glasses that adjust to where you look and a bioprinted wound patch for pets that could lead to printable human organs.
Smart Glasses That Adapt As You Look
Finland-based IXI has developed what it claims are the world’s first real-time autofocus prescription glasses. Rather than adding smart features like cameras or displays, the company focuses on correcting vision more naturally.
Founded in 2021 by imaging and optics specialists Niko Eiden and Ville Miettinen, IXI recently raised $36.5 million in Series A funding. Backers include Amazon’s Alexa Fund and several major European venture firms. The money will support the final development of the company’s first commercial product, IXI Adaptive Eyewear.
How It Works
The system uses a low-power eye-tracking sensor and liquid crystal lenses. When the user shifts focus, the glasses detect eye movement and adjust the lens shape in real-time, typically within 0.2 seconds. The liquid crystals change how they bend light, automatically matching the user’s focal distance.
All the electronics fit within a standard frame, and battery life is expected to last two days, with the lenses reverting to a fixed prescription mode if the power runs out.
Aimed at Replacing Progressive Lenses
IXI has targeted the new glasses at those with presbyopia, which affects the eye’s ability to focus on nearby objects and typically appears from age 40. Whereas traditional progressive lenses have narrow reading zones and peripheral distortion, IXI says its dynamic lens offers full-field clarity without these trade-offs.
The Next Phase In Eyewear Development?
A key part of IXI’s pitch appears to be that dynamic lenses represent the next logical phase in eyewear development, not just an upgrade to existing products. “Whether it’s us or another company, somebody will crack it,” said CEO Niko Eiden. “The step from static to dynamic lenses is a natural evolution.”
Market and Availability
The global eyewear market is currently worth more than $200 billion and growing at 8–9 per cent a year. IXI, therefore, hopes to launch a consumer-level product targeting older professionals and others frustrated with current lens limitations. No pricing or release date has yet been confirmed, though live demonstrations are expected later in 2025.
What Needs to Improve
Despite the obvious advantages of the new lenses, key technical challenges remain. For example, IXI must deliver all-day comfort, eliminate lens haziness, and meet medical-grade optical clarity standards. Transparent electronics, battery miniaturisation and long-term durability are also all active areas of R&D.
There’s also the threat of competitors emerging. For example, French startup Laclarée and Japan’s Elcyo are also working on autofocus eyewear, though neither has launched a product. Tech giants like Meta and Apple are investing in glasses too, yet their focus remains on augmented reality rather than vision correction. IXI is aiming to fill the gap in between.
Bioprinted Patches for Pets Could Lead to Human Organs
Meanwhile, Lithuanian startup Vital3D is tackling tissue regeneration. Its first commercial product, VitalHeal, is a laser-printed wound patch designed for dogs. The patch embeds growth factors and features microscopic pores that allow air circulation while blocking bacteria.
Vital3D says the technology could eventually be used to bioprint transplantable human organs, but that this goal is likely still 10 to 15 years away. The company has deliberately started with simpler applications in veterinary care as a stepping stone towards more complex human use.
Faster Healing, Lower Risk
The patch is designed to seal wounds, maintain pressure, and accelerate healing. Vital3D claims it can reduce recovery time from 10–12 weeks to just 4–6 weeks. Infection rates could fall from 30 per cent to below 10 per cent, and the number of vet visits required may drop significantly.
The retail price is €300 per patch (€150 wholesale). While more expensive than standard dressings, it could cut overall treatment costs by reducing surgery time and complications.
A Technology Platform, Not Just a Patch
Vital3D uses Two-Photon Polymerisation, a high-resolution laser-based printing technique. Its patented FemtoBrush system allows the laser beam to dynamically change shape during printing, enabling both fine detail and larger structural areas in the same build. The system can print features as small as one micron, within a build volume of 50 x 50 x 100 mm.
The company’s long-term goal is to print implantable human kidneys. For now, the priority is building the core platform by addressing major challenges such as creating vascular networks and supporting cell differentiation. Dog trials are scheduled to begin this year in Lithuania and the UK.
Competing Efforts and Future Outlook
Vital3D is one of several companies working on bioprinted tissue, though it is taking a notably commercial route into the market.
Other startups in this space include US-based Trestle Biotherapeutics, which is developing kidney tissue for research and transplantation, and Sweden’s CELLINK, which builds bio-inks and printing systems for soft tissue reconstruction.
It’s worth noting, however, that Vital3D stands out for its commercial-first strategy, using veterinary applications to test and refine the technology before moving to human use. The company is targeting a €76.5 million addressable market for its pet wound patch across the EU and US, with plans to sell 100,000 units by 2028.
What Does This Mean For Your Business?
For UK opticians and eyewear providers, IXI’s autofocus glasses may introduce a new product category with implications for prescriptions, training and aftercare. As consumer expectations shift towards seamless, tech-enabled vision correction, businesses will, therefore, need to adapt quickly.
In healthcare, Vital3D’s approach may offer a model for phased innovation. For example, by starting with pets and progressing gradually towards human treatment, the company reduces clinical risk while building regulatory and commercial experience. This could be especially relevant for UK medtech startups navigating approval pathways for advanced therapeutic devices.
Both companies face significant hurdles around manufacturing, regulation and market adoption. However, their step-by-step focus on solving practical problems, rather than pushing hype, suggests a more sustainable route to impact. Whether in optometry or regenerative medicine, UK stakeholders may find that these technologies offer useful templates for how to bring complex ideas to market with real-world application in mind.
Tech News : Google Users Can Run AI Offline On Phones
Google has (albeit quietly) released a new app that allows users to run powerful AI tools directly on their phones without needing a Wi-Fi or data connection.
Edge Gallery
The new app, called Google AI Edge Gallery, lets users download and run generative AI models locally on Android devices. These models can perform a wide range of tasks, e.g. from answering questions and summarising text to generating images or writing code, all without sending any data to the cloud (no connection needed).
Models Sourced From Hugging Face
The models are sourced from Hugging Face, a leading open AI model platform, and are processed entirely on the user’s device using Google’s LiteRT runtime and on-device ML tools. Users can switch between models, view real-time performance metrics, and even test their own models if they meet the compatibility requirements.
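As a loose illustration of what “processed entirely on the user’s device” means in practice, the sketch below runs a LiteRT/TensorFlow Lite model locally in Python with no network calls involved. This is not the Edge Gallery app’s own code (the app runs models natively on Android), and the model filename is a hypothetical placeholder.

```python
# Loose illustration of local (on-device) inference with a LiteRT /
# TensorFlow Lite model in Python. Not the Edge Gallery app's own code;
# the model path is a hypothetical placeholder.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="gemma-3n-example.tflite")  # placeholder file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's expected shape, run inference
# locally, and read the output -- no data leaves the device at any point.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```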
The app’s key functions include:
– AI Chat for multi-turn conversations.
– Prompt Lab for rewriting, summarising, or generating code.
– Ask Image for asking questions about photos.
– A model selection interface with performance benchmarks.
Why?
The move aligns with Google’s growing focus on edge computing, where tasks are carried out on local devices rather than in the cloud. This approach offers key benefits around speed, accessibility, and data privacy.
For example, by letting models run locally, users don’t have to rely on internet connections or send sensitive data to external servers. Google says the app is designed to support developers, tech-savvy users, and organisations that want reliable AI tools even in low-connectivity environments.
The release follows Google’s AI-heavy announcements at Google I/O 2025, where it unveiled new AI features across Android, Gemini, and its Pixel devices.
What Are the Benefits?
Running AI locally offers several practical and privacy-related advantages, such as:
– Offline functionality. Users can run models anywhere, without Wi-Fi or mobile data.
– Faster response times. On-device processing reduces delays caused by network latency.
– Improved privacy. Data stays on the device, which may reassure users handling sensitive information.
– Developer control. Developers can experiment with different models, observe performance, and build edge-native apps.
For example, a field engineer working in a remote area could use the app to summarise technical notes without needing a signal, while a journalist might analyse an image on their phone without uploading sensitive material to the cloud.
Who Can Use It, When, and How?
The AI Edge Gallery app is currently available to download on Android devices via GitHub. It is labelled as an experimental Alpha release, with an iOS version confirmed to be in development, although Google has not yet announced when or where it will be released.
It should be noted that the app is not available on the Play Store. Instead, users must download the app manually from GitHub. Installation requires sideloading the APK file, and Google has published a setup guide on the app’s Project Wiki.
The app is free to use under an Apache 2.0 open-source licence, which allows personal, educational, and commercial use without restriction.
However, it’s worth noting that performance could vary depending on the device’s hardware, and Google advises that newer phones with more RAM and faster processors are likely to be able to run larger models more effectively.
Showcasing Google’s Own Models
This release appears to signal a subtle but strategic shift in Google’s AI rollout strategy. For example, while the company has traditionally focused on cloud-based AI, this app shows it is now investing in local, device-first AI infrastructure.
It may also help Google showcase the performance of its own models, e.g. Gemma 3n, a lightweight model optimised for mobile, and reinforce its presence in the developer community by integrating tightly with Hugging Face and offering flexibility in model choice.
If successful, Google’s AI Edge Gallery could form the basis for deeper AI integration into Android itself, particularly as competitors also move towards local AI capabilities.
What’s In It for Business Users?
For UK business users, the app could prove useful in several scenarios. For example:
– On-site professionals, such as surveyors, logistics workers, or service engineers, could use it in low-connectivity areas to analyse documents, photos, or text.
– Small teams could use offline AI for copywriting, coding, or productivity tasks without incurring cloud service fees or risking data exposure.
– Privacy-conscious sectors such as legal, healthcare, and defence may appreciate the enhanced data control that on-device processing allows.
Although the current app is more developer-focused than enterprise-ready, it gives a strong preview of what local AI could bring to business tools in the near future.
How Does It Compare with Competitors?
The launch is likely to put a bit of pressure on rivals such as Apple, Meta, and OpenAI, all of which are working on or teasing local AI models.
Apple is expected to unveil its own on-device AI model support at WWDC 2025, while Meta recently previewed mobile-ready versions of its LLaMA models. However, most models from OpenAI (including GPT-4) still rely on cloud access, making Google’s offering stand out for now.
Hugging Face has also been expanding its mobile AI support and is likely to benefit from this integration, particularly among Android developers. By giving developers a user-friendly testing ground for their models, Google is most likely hoping to strengthen its own ecosystem while supporting the wider open AI community.
Limitations
As always with new tech, the app has its limitations despite its promise. Performance, for example, is highly device-dependent, and some models may run slowly (or fail entirely) on older hardware; image captioning models may take several seconds to process a request unless used on a high-end device.
Also, the user interface is functional but not consumer-ready, and installation via GitHub may deter less technical users. This is, therefore, clearly a tool for early adopters and developers rather than general smartphone users (at least for now).
There are also concerns around misuse. While offline AI increases privacy, it also makes it harder to monitor how models are being used. Without cloud oversight, some experts warn it could be harder to enforce content safety or ethical guidelines.
As one developer on GitHub noted: “It’s amazing tech—but what happens when powerful tools are completely disconnected from accountability mechanisms?” That question may become more pressing as local AI becomes more powerful and widely available.
Regulatory Implications Still Unclear
Because the models run locally, data protection laws such as the UK GDPR may not apply in the same way as with cloud-based AI. However, this could raise new questions around model bias, hallucination, and responsibility for outcomes when the tools are used offline.
No formal regulatory guidance has yet been issued in the UK or EU for edge AI use cases of this kind, though industry observers expect the issue to grow in importance as adoption increases.
What Does This Mean For Your Business?
If AI Edge Gallery gains traction beyond the developer community, it could mark the start of a broader move toward decentralised AI usage, giving users more autonomy over their data, tools and workflows. For UK businesses, the ability to run models offline opens up new possibilities for mobile productivity, secure client interactions, and operational resilience in connectivity-limited environments. From a practical standpoint, sectors such as construction, logistics, healthcare, and professional services could all find value in locally executed AI that reduces both costs and compliance risk.
For Google, the app serves multiple strategic purposes. For example, it allows the company to showcase its own AI models in real-world use, gather feedback from early adopters, and strengthen ties with the open-source community through its Hugging Face integration and permissive licensing. At the same time, it positions Google to lead in a space where rivals are only just beginning to move, thereby putting pressure on Apple, Meta and others to accelerate their local AI offerings.
However, running AI models offline complicates questions of oversight, safety and accountability. Without cloud-based controls, there is little to stop misuse, and no guarantee that outputs will meet any quality or ethical standard. For regulators and policymakers, this raises difficult issues around liability and governance that have yet to be addressed in formal legislation.
The wider AI market may also need to reckon with the fragmentation introduced by local deployment. Device specs, model compatibility, and uneven performance could all impact usability, potentially reinforcing digital divides. And while the app is free, it still assumes a baseline of technical knowledge that may put it out of reach for less experienced users.
AI Edge Gallery, therefore, essentially reflects a shift towards placing AI tools directly into the hands of users, no longer tethered to distant servers or platform-controlled APIs. For those in business, development, or digital infrastructure, that shift could prove both empowering and disruptive, depending on how the ecosystem evolves.
Company Check : Meta & Yandex Covert Tracking Concerns
Meta and Russian search firm Yandex used hidden background scripts to monitor Android users’ web activity without consent, bypassing incognito mode and browser protections, researchers say.
Hidden Tracking System Uncovered
A new joint investigation has revealed that Meta and Yandex have been covertly collecting the private web browsing data of Android users by exploiting local communication loopholes between mobile apps and browsers. The technique reportedly allowed both companies to bypass standard privacy protections, without the knowledge or consent of users.
The findings were published by an international research team led by Radboud University in the Netherlands and IMDEA Networks Institute in Spain. The group included privacy experts Gunes Acar, Narseo Vallina-Rodriguez, Tim Vlummens (KU Leuven), and others. Their research revealed that Android apps owned by Meta (including Facebook and Instagram) and Yandex (including Yandex Maps, Browser, Navi, and Search) were silently listening on fixed local ports to receive web tracking data via local network connections, thereby effectively joining app-based user identities with users’ browsing habits.
According to the researchers, this practice undermines the technical safeguards built into both Android and modern web browsers, including incognito browsing, cookie restrictions, and third-party tracking protections.
How the Tracking Worked in Practice
Under Android’s permission model, any app granted the “INTERNET” permission (which includes nearly all social media and mapping apps) can start a local server inside the app. Meta and Yandex are reported to have used this ability to set up background listeners on local ports (e.g. via TCP sockets or WebRTC channels).
When users visited websites embedded with Meta Pixel or Yandex Metrica tracking scripts, those scripts could secretly send data to these background ports on the same device. This meant the apps could intercept identifiers and browsing metadata from the websites, despite no direct interaction from the user, and tie them to a logged-in app profile. The researchers say this technique effectively broke down the wall between mobile app usage and private web browsing, two areas users generally expect to remain separate.
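To make the mechanism concrete, below is a deliberately simplified sketch in Python of the kind of localhost “bridge” the researchers describe. In reality the listener was a native Meta or Yandex Android app and the sender was a JavaScript tracking script running in the browser (using raw sockets or WebRTC rather than a plain connection like this), and the port number and payload fields here are purely illustrative assumptions.

```python
# Deliberately simplified sketch of the localhost "bridge" described by the
# researchers. In reality the listener was a native Android app (e.g. Facebook,
# Instagram or Yandex apps) and the sender was JavaScript served by Meta Pixel /
# Yandex Metrica, using raw sockets or WebRTC rather than this plain Python.
# The port number and payload fields are illustrative assumptions.
import json
import socket
import threading
import time

PORT = 12387  # hypothetical fixed local port the app listens on

def app_side_listener() -> None:
    """Stands in for the native app silently listening on localhost."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            payload = json.loads(conn.recv(4096).decode())
            # The app can now join this web identifier with the logged-in app
            # account, bridging supposedly separate web and app identities.
            print("app received browsing identifier:", payload)

def tracker_side_sender() -> None:
    """Stands in for the tracking script running inside a web page."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", PORT))
        cli.sendall(json.dumps({
            "web_cookie_id": "fb.1.1717000000.123456789",  # illustrative value
            "page": "https://example.com/checkout",
        }).encode())

if __name__ == "__main__":
    listener = threading.Thread(target=app_side_listener)
    listener.start()
    time.sleep(0.5)  # give the listener a moment to start accepting
    tracker_side_sender()
    listener.join()
```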
Evasion Tactics From Yandex?
While Meta’s version used WebRTC signalling to send identifiers to its native apps, Yandex appears to have implemented a more dynamic system. Its apps reportedly downloaded remote configurations and delayed activation for several days after installation, behaviour the researchers likened to malware-style evasion tactics.
Widespread Reach and Long-Term Use
The researchers report that the tracking appears to have been extensive. Meta Pixel is currently embedded on approximately 5.8 million websites, while Yandex Metrica is used on more than 3 million. Although the practice was only observed on Android devices, the scale of potential exposure is significant. Yandex is reported to have been using the technique since at least 2017, while Meta began similar behaviour in late 2024.
Apparent Lack of Disclosure
What makes the findings more concerning is the apparent lack of disclosure to app users, website operators, or browser vendors. For example, developer forums have shown widespread confusion among website owners who were unaware their use of tracking pixels enabled data extraction via app-localhost bridges. Some people reported unexplained localhost calls from Meta’s scripts, with little guidance on what the data was or how it was being used.
Google and Browser Makers Respond
Google, which maintains the Android operating system, has confirmed the tracking method was being used in “unintended ways that blatantly violate our security and privacy principles.” Chrome developers, along with DuckDuckGo and other browser vendors, have now issued patches to block some forms of localhost communication initiated by websites.
Also, Narseo Vallina-Rodríguez, associate professor at IMDEA, noted: “Until our disclosure, Android users were entirely defeated against this tracking method. Most platform operators likely didn’t even consider this in their threat models.”
Countermeasures Rolled Out
As a result of the academic team’s findings, several browser-based countermeasures, such as port-blocking and new sandboxing approaches, are now being rolled out, and Chrome’s patch is reportedly going live imminently.
Meta and Yandex Defend Their Position
In response to the findings, Meta has said it paused the feature and was working with Google to clarify the “application of their policies.”
Yandex, meanwhile, has reportedly denied that any sensitive data was collected, saying that “The feature in question does not collect any sensitive information and is solely intended to improve personalisation within our apps.” However, the researchers argue that the data gathered, including persistent identifiers, browsing activity, and time-stamped behaviour, carries substantial profiling risk.
Privacy Experts Raise the Alarm
Not surprisingly, the episode has drawn some strong criticism from privacy advocates, who argue the tactics used represent a significant overreach and a breach of user trust. For example, the European Digital Rights (EDRi) group issued a statement calling it a “blatant abuse of technical permissions,” while Mozilla Fellow Alice Munyua said the practice “shows exactly why we need more transparency, not less, in how apps interact with user data.”
IMDEA’s Aniketh Girish, one of the study’s co-authors, said the real issue lies in how easily these companies linked users’ web identities to their mobile profiles without any consent or notification.
Implications
For businesses relying on Meta and Yandex advertising tools, the revelations raise fresh questions about the ethical and legal responsibilities of digital marketing. Many companies use Meta Pixel or Yandex Metrica to improve targeting and ad performance, but may now find themselves indirectly involved in opaque data practices.
Businesses Using These Tools Could Be Held Responsible
It seems that businesses using third-party tools like Meta Pixel or Yandex Metrica (e.g. website operators and advertisers) aren’t absolved of responsibility if those tools are later found to breach privacy rules. This is because legal and regulatory frameworks such as the UK GDPR place obligations on data controllers to understand and account for how user data is collected and processed, even when using external vendors.
Also, business users and app developers who trust major platforms for analytics and performance tracking may now need to be more cautious.
What Does This Mean For Your Business?
The apparent scale and persistence of this tracking activity reveals more than just a privacy lapse. It shows how trusted platforms may have quietly prioritised data collection over user transparency, thereby exploiting overlooked technical loopholes. The fact that browser-level defences are only now being introduced suggests the issue went unnoticed even by major platform operators.
For UK businesses, the implications are serious. For example, many rely on tools like Meta Pixel or Yandex Metrica for advertising and analytics, but under GDPR, they remain responsible for understanding how data is collected, regardless of who built the tools. This means that if personal data was captured without consent via websites or apps operated in the UK, businesses could be held accountable.
The lack of disclosure to developers and site owners also raises questions about consent and control. If tracking was occurring via localhost connections without their knowledge, they had no way to inform users or adjust settings accordingly. As regulators increase their focus on accountability, ignorance of how embedded tools function is unlikely to offer much protection.
More broadly, this case highlights the need for reform across both mobile platforms and browsers. Researchers say that Android’s local port access requires stronger safeguards, and permission models need updating to prevent similar abuse. Whether that happens will depend on pressure from developers, watchdogs, and public institutions.
At its core, the episode shows how fragile digital trust can be when data is moved behind the scenes without consent. For users and UK businesses alike, the expectation now is not just performance, but clear accountability for how every click and interaction is tracked, stored, and shared.
Security Stop Press : HMRC Hit by £47m Phishing Scam Targeting Taxpayer Accounts
Criminals stole £47 million from HMRC last year by exploiting over 100,000 taxpayer accounts in a major phishing scam.
The fraudsters used stolen personal data to access or create Government Gateway accounts, then submitted fake tax rebate claims. HMRC says no individuals lost personal funds, as the money was claimed directly from its own systems.
“This was an attempt to claim money from HMRC, not from customers,” the authority said. Affected individuals are now being contacted, though many didn’t know they had an account in the first place.
The incident only came to light during a Treasury Select Committee hearing, prompting criticism from MPs. Arrests have been made following an international investigation.
HMRC insists its systems weren’t hacked but has pledged further investment in account security. It blocked £1.9 billion in similar fraud attempts last year.
To guard against similar attacks, businesses should focus on phishing awareness training, enable strong two-factor authentication, and regularly audit account activity for unauthorised access.