Company Check : Meta & Yandex Covert Tracking Concerns
Meta and Russian search firm Yandex used hidden background scripts to monitor Android users’ web activity without consent, bypassing incognito mode and browser protections, researchers say.
Hidden Tracking System Uncovered
A new joint investigation has revealed that Meta and Yandex have been covertly collecting the private web browsing data of Android users by exploiting local communication loopholes between mobile apps and browsers. The technique reportedly allowed both companies to bypass standard privacy protections, without the knowledge or consent of users.
The findings were published by an international research team led by Radboud University in the Netherlands and IMDEA Networks Institute in Spain. The group included privacy experts Gunes Acar, Narseo Vallina-Rodriguez, Tim Vlummens (KU Leuven), and others. Their research revealed that Android apps owned by Meta (including Facebook and Instagram) and Yandex (including Yandex Maps, Browser, Navi, and Search) were silently listening on fixed local ports to receive web tracking data via local network connections, thereby effectively joining app-based user identities with users’ browsing habits.
According to the researchers, this practice undermines the technical safeguards built into both Android and modern web browsers, including incognito browsing, cookie restrictions, and third-party tracking protections.
How the Tracking Worked in Practice
Under Android’s permission model, any app granted the “INTERNET” permission (which includes nearly all social media and mapping apps) can start a local server inside the app. Meta and Yandex are reported to have used this ability to set up background listeners on local ports (e.g. via TCP sockets or WebRTC channels).
When users visited websites embedded with Meta Pixel or Yandex Metrica tracking scripts, those scripts could secretly send data to these background ports on the same device. This meant the apps could intercept identifiers and browsing metadata from the websites, despite no direct interaction from the user, and tie them to a logged-in app profile. The researchers say this technique effectively broke down the wall between mobile app usage and private web browsing, two areas users generally expect to remain separate.
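To make the mechanism concrete, here is a minimal, purely illustrative sketch of the raw-socket variant of this pattern (it does not cover Meta's WebRTC channel, and the port number, cookie name and payload are all hypothetical, not the companies' actual code). One thread plays the "app", which simply listens on a localhost port; the main flow plays the "tracking script", which forwards a browser-side identifier to that port on the same device:

```python
import socket
import threading

HYPOTHETICAL_PORT = 12387  # illustrative only, not an actual port used by either company

# The "app" side: a plain TCP socket bound to localhost. On Android,
# opening such a socket requires only the ordinary INTERNET permission.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", HYPOTHETICAL_PORT))
server.listen(1)

received = []

def app_side_listener():
    # Waits in the background for the page's tracking script to call in.
    conn, _ = server.accept()
    received.append(conn.recv(1024).decode())
    conn.close()

t = threading.Thread(target=app_side_listener)
t.start()

# The "tracking script" side: a script running in the browser on the same
# device forwards a browser-side identifier (hypothetical name) to the
# local port, letting the app join web identity to its logged-in profile.
client = socket.create_connection(("127.0.0.1", HYPOTHETICAL_PORT))
client.sendall(b"_browser_id=example-identifier")
client.close()

t.join()
server.close()
print(received[0])  # the "app" now holds the browser-side identifier
```

The key point the sketch illustrates is that nothing here crosses the network at all; the data hops from browser to app entirely inside the device, which is why cookie controls and incognito mode never see it.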
Evasion Tactics From Yandex?
While Meta’s version used WebRTC signalling to send identifiers to its native apps, Yandex appears to have implemented a more dynamic system. For example, its apps reportedly downloaded remote configurations and delayed activation for several days after installation, behaviour the researchers likened to malware-style evasion tactics.
Widespread Reach and Long-Term Use
The researchers have reported that the tracking appears to have been extensive. Meta Pixel is currently embedded on approximately 5.8 million websites, while Yandex Metrica is used on more than 3 million. Although the practice was only observed on Android devices, the scale of potential exposure is significant. The researchers report that Yandex has been doing this since at least 2017, while Meta began similar behaviour in late 2024.
Apparent Lack of Disclosure
What makes the findings more concerning is the apparent lack of disclosure to app users, website operators, or browser vendors. For example, developer forums have shown widespread confusion among website owners who were unaware their use of tracking pixels enabled data extraction via app-localhost bridges. Some people reported unexplained localhost calls from Meta’s scripts, with little guidance on what the data was or how it was being used.
Google and Browser Makers Respond
Google, which maintains the Android operating system, has confirmed the tracking method was being used in “unintended ways that blatantly violate our security and privacy principles.” Chrome developers, along with DuckDuckGo and other browser vendors, have now issued patches to block some forms of localhost communication initiated by websites.
Also, Narseo Vallina-Rodriguez, associate professor at IMDEA, noted: “Until our disclosure, Android users were entirely defenceless against this tracking method. Most platform operators likely didn’t even consider this in their threat models.”
Countermeasures Rolled Out
As a result of the academic team’s findings, several browser-based countermeasures, such as port-blocking and new sandboxing approaches, are now being rolled out, and Chrome’s patch is reportedly going live imminently.
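In essence, the countermeasure is a policy check: before a web page is allowed to open a connection, the browser asks whether the target is the loopback interface. The following is a simplified sketch of that kind of check, not any browser's actual implementation (real patches also handle DNS resolution and WebRTC, which this omits; the function names are our own):

```python
import ipaddress
from urllib.parse import urlparse

def is_local_target(url):
    """Return True if a URL points at the loopback interface.

    A simplified stand-in for the localhost checks browsers are adding;
    real implementations also resolve hostnames and cover WebRTC.
    """
    host = urlparse(url).hostname or ""
    if host == "localhost":
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False  # an ordinary domain name, not an IP literal

def allow_request(url, initiated_by_page=True):
    # Block page-initiated requests aimed at the local machine,
    # while leaving normal web requests untouched.
    return not (initiated_by_page and is_local_target(url))

print(allow_request("http://127.0.0.1:12387/track"))   # False (blocked)
print(allow_request("https://example.com/analytics"))  # True
```

Port-blocking works similarly but keys the decision on known listener ports rather than the loopback address, which is why vendors are deploying both approaches together.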
Meta and Yandex Defend Their Position
In response to the findings, Meta has said it paused the feature and was working with Google to clarify the “application of their policies.”
Yandex, meanwhile, has reportedly denied that any sensitive data was collected, saying that “The feature in question does not collect any sensitive information and is solely intended to improve personalisation within our apps.” However, the researchers argue that the data gathered, including persistent identifiers, browsing activity, and time-stamped behaviour, carries substantial profiling risk.
Privacy Experts Raise the Alarm
Not surprisingly, the episode has drawn some strong criticism from privacy advocates, who argue the tactics used represent a significant overreach and a breach of user trust. For example, the European Digital Rights (EDRi) group issued a statement calling it a “blatant abuse of technical permissions,” while Mozilla Fellow Alice Munyua said the practice “shows exactly why we need more transparency, not less, in how apps interact with user data.”
IMDEA’s Aniketh Girish, one of the study’s co-authors, said the real issue lies in how easily these companies linked users’ web identities to their mobile profiles without any consent or notification.
Implications
For businesses relying on Meta and Yandex advertising tools, the revelations raise fresh questions about the ethical and legal responsibilities of digital marketing. Many companies use Meta Pixel or Yandex Metrica to improve targeting and ad performance, but may now find themselves indirectly involved in opaque data practices.
Businesses Using These Tools Could Be Held Responsible
It seems that businesses using third-party tools like Meta Pixel or Yandex Metrica (e.g. operators and advertisers) aren’t absolved of responsibility if those tools are later found to breach privacy rules. This is because legal and regulatory frameworks such as the UK GDPR place obligations on data controllers to understand and account for how user data is collected and processed, even when using external vendors.
Also, business users and app developers who trust major platforms for analytics and performance tracking may now need to be more cautious.
What Does This Mean For Your Business?
The apparent scale and persistence of this tracking activity reveals more than just a privacy lapse. It shows how trusted platforms may have quietly prioritised data collection over user transparency by exploiting overlooked technical loopholes. The fact that browser-level defences are only now being introduced suggests the issue went unnoticed even by major platform operators.
For UK businesses, the implications are serious. For example, many rely on tools like Meta Pixel or Yandex Metrica for advertising and analytics, but under GDPR, they remain responsible for understanding how data is collected, regardless of who built the tools. This means that if personal data was captured without consent via websites or apps operated in the UK, businesses could be held accountable.
The lack of disclosure to developers and site owners also raises questions about consent and control. If tracking was occurring via localhost connections without their knowledge, they had no way to inform users or adjust settings accordingly. As regulators increase their focus on accountability, ignorance of how embedded tools function is unlikely to offer much protection.
More broadly, this case highlights the need for reform across both mobile platforms and browsers. Researchers say that Android’s local port access requires stronger safeguards, and permission models need updating to prevent similar abuse. Whether that happens will depend on pressure from developers, watchdogs, and public institutions.
At its core, the episode shows how fragile digital trust can be when data is moved behind the scenes without consent. For users and UK businesses alike, the expectation now is not just performance, but clear accountability for how every click and interaction is tracked, stored, and shared.
Security Stop Press : HMRC Hit by £47m Phishing Scam Targeting Taxpayer Accounts
Criminals stole £47 million from HMRC last year by exploiting over 100,000 taxpayer accounts in a major phishing scam.
The fraudsters used stolen personal data to access or create Government Gateway accounts, then submitted fake tax rebate claims. HMRC says no individuals lost personal funds, as the money was claimed directly from its own systems.
“This was an attempt to claim money from HMRC, not from customers,” the authority said. Affected individuals are now being contacted, though many didn’t know they had an account in the first place.
The incident only came to light during a Treasury Select Committee hearing, prompting criticism from MPs. Arrests have been made following an international investigation.
HMRC insists its systems weren’t hacked but has pledged further investment in account security. It blocked £1.9 billion in similar fraud attempts last year.
To guard against similar attacks, businesses should focus on phishing awareness training, enable strong two-factor authentication, and regularly audit account activity for unauthorised access.
Sustainability-in-Tech : New ‘Meat’ From Fermented Fungi & Oats
Swedish startup Millow is using dry fermentation to create scalable, low-impact meat substitutes that could reshape the future of food production.
A New Approach to Protein Production
Millow, a foodtech company based in Gothenburg, has launched its first commercial-scale factory to produce a new kind of meat alternative. Its method combines oats and mycelium (the root-like fibres of fungi), using a patented dry fermentation technique to create a solid, sliceable protein block that can be turned into familiar foods such as burgers, meatballs and kebabs.
Addresses Two Core Challenges
Millow’s production model is designed to address two core challenges in the alternative protein sector, i.e. sustainability and scalability. For example, unlike many existing meat substitutes, which rely on liquid fermentation, imported ingredients or complex multi-stage processing, Millow’s product is created from just two inputs. The result is a minimally processed food that avoids the use of binders, flavourings or additives and can be produced at scale in just 24 hours.
New Factory
Millow’s 2,500 square metre facility was built in a repurposed LEGO factory and will eventually produce up to 500 kilograms of protein each day. The site also houses advanced fermentation labs to support future research and development.
What Makes Millow Different?
Although mycelium-based products are not entirely new, Millow’s approach appears to be significantly different from earlier efforts. Most notably, it avoids the need to extract protein strands or recombine them with synthetic binders, as is done in products like Quorn. Instead, the dry fermentation process grows a whole block of protein directly from the grain and fungus mixture.
The company also uses a proprietary texturing method, known as MUTE (Mycelium Utilised Texture Engineering), which gives the final product a structure similar to muscle tissue. This allows it to behave more like meat when cooked or handled, with a firmer texture and the ability to hold up in stews and other wet dishes.
Gluten Free and More
Millow says its product is fully plant-based, gluten free and contains no genetically modified ingredients. It also says the product is rich in nutrition, offering up to 27 grams of complete protein per 100 grams, along with fibre, vitamins and essential minerals.
A Response to Sector Shortcomings
Millow’s entry into the market comes at a time when the plant-based meat sector is facing growing criticism. Millow’s founders say their aim is to move the sector on from the shortcomings of first-generation plant-based meat, which often struggled with over-processing, long ingredient lists and inconsistent consumer appeal. By focusing on transparency, wholefood ingredients and production efficiency, the company appears to be trying to position its product as a more scalable and environmentally responsible alternative.
As the company puts it on its website: “Not all meat substitutes are actually better for the planet. Most alternatives are ultra-processed, which means they’ve gone through many different manufacturing stages right across the globe. And at every stage, a lot of energy and water are consumed. Millow is entirely different.”
Environmental Benefit
Millow is also aiming to produce food with a clearer environmental benefit. For example, a life cycle assessment of the product found it can cut greenhouse gas emissions by up to 97 percent compared to beef. Compared to soy-based substitutes, emissions are reduced by around 80 percent.
Water use is also significantly lower. For example, producing one kilogram of Millow requires only 3 to 4 litres of water. By contrast, producing a kilogram of beef can require over 15,000 litres, while soy protein typically uses more than 1,800 litres.
A Shift Toward Fermentation-Based Foods
While total investment in alternative proteins fell to 1.1 billion dollars globally in 2024, funding for fermentation startups rose by 43 percent, according to the Good Food Institute. This reflects a shift away from mimicry-focused meat alternatives towards more efficient, adaptable systems that can deliver clean-label products at scale.
Millow is, therefore, part of a growing European cluster exploring the potential of fungal fermentation. Hamburg-based Infinite Roots, for example, raised 58 million dollars in early 2025 to develop protein from brewery byproducts. Berlin’s Formo Foods has also attracted major investment to create cheese analogues using microbial fermentation.
These companies all appear to share a focus on improving environmental outcomes through more efficient resource use. By using locally sourced inputs, reducing energy consumption and avoiding long supply chains, they aim to provide credible alternatives to meat without the compromises seen in earlier products.
Business and Industry Implications
Millow’s model appears to offer a number of practical advantages for different stakeholders. For food brands and retailers, the simplicity of the ingredient list and clean processing could align with growing consumer demand for wholefood, high-protein alternatives that are not highly processed.
Foodservice providers may also benefit from the flexibility of the product, which can be barbecued, roasted, baked or fried without losing its structure. The mycelium-oat base also offers a neutral flavour profile that can be adapted to regional tastes.
For the wider food and farming sectors, the technology presents opportunities as well as challenges. The ability to swap grain types in the fermentation process opens the door for localised protein production using existing crops. This could reduce reliance on imported soy or pea protein, while creating new demand for Nordic oats or other regionally grown grains.
Not All Good News
However, it should be noted that there are, of course, some limitations. For example, Millow is currently only available in Sweden, and commercial rollout will depend on regulatory approval, consumer uptake and the ability to scale consistently. The company is working on distribution agreements and expects to launch products in retail and foodservice by the end of 2025.
Questions and Criticisms Remain
Although the sustainability credentials appear to be strong, some experts have urged caution. It seems that mycelium-based foods remain unfamiliar to many consumers, and overcoming cultural and psychological barriers may take time. Also, there are broader questions about production scale, cost competitiveness and long-term safety assessments, particularly in new markets.
There is also the issue of transparency. While Millow claims to be the most sustainable meat alternative currently available, much of the supporting data comes from internal research. Independent validation will be important if the company wants to win the confidence of regulators and buyers in new territories.
That said, Millow represents a departure from the first wave of alternative proteins. By using biotechnology and fungal fermentation to reduce complexity, cost and environmental impact, the company is helping to set a new direction for the sector. Whether others will follow remains to be seen.
What Does This Mean For Your Organisation?
What Millow’s approach demonstrates is that alternative protein production is entering a new phase, one shaped more by operational efficiency and wholefood principles than by novelty or marketing claims. Rather than mimicking meat through increasingly complex formulations, this new category focuses on simplicity, transparency and functional performance. For investors and researchers, this signals a change in the priorities of the sector. For food producers, it could offer a chance to streamline supply chains and reduce reliance on global commodity crops.
For UK businesses, particularly those in retail, foodservice and manufacturing, the emergence of scalable dry fermentation methods presents both opportunity and disruption. For example, if adapted successfully to local grain inputs and regional production models, similar technology could help strengthen domestic protein resilience while supporting decarbonisation goals. It could also create new partnerships between biotech firms and UK arable growers, with potential to reinvigorate oat and cereal markets in a lower-emissions food system. For buyers and procurement teams, the appeal is likely to lie in a product that promises nutrition, versatility and sustainability without the drawbacks associated with ultra-processing or long ingredient lists.
However, it seems that the model will need to prove itself beyond the Swedish market. Regulatory navigation, consumer education and price competitiveness will all play a role in determining its commercial viability. Various stakeholders, including environmental groups, health regulators and farming unions, will most likely be watching closely to see whether these claims of efficiency and low impact can be consistently validated at scale. As more companies experiment with mycelium and dry fermentation, a clearer picture will emerge of how these innovations fit into the wider protein economy. Millow is not the only player, but it offers a compelling case study in how targeted science and regional focus can create new routes to sustainable food production.
Video Update : Another Massive Upgrade To Copilot – Already!
Copilot’s brand-new “Researcher Agent” is a pretty major upgrade, so this week’s Video-of-the-Week puts it through its paces and looks at what it can do for your business.
[Note – to watch this video without glitches/interruptions, it may be best to download it first]
Tech Tip – Use Outlook’s “Report” Button to Flag Suspicious Emails
Spot something ‘phishy’ in your inbox? Outlook’s built-in “Report” tool lets you quickly flag dodgy messages and helps Microsoft improve its detection.
How to:
– In the Outlook desktop app or web version, click on the email in your inbox to preview it in the Reading Pane — no need to open it fully.
– Click the Report button in the toolbar (sometimes labelled Junk or Phishing).
– Choose Phishing or Junk, depending on the content.
– The email will be flagged and moved out of your inbox.
Pro-Tip: Reporting dodgy messages helps train Microsoft’s filters and protects others in your organisation too.
Featured Article : Grok Blocked! Quarter Of EU Firms Ban Access
New research shows that one in four European organisations have banned Elon Musk’s Grok AI chatbot due to concerns over misinformation, data privacy and reputational risk, making it far more widely rejected than rival tools like ChatGPT or Gemini.
A Trust Gap Is Emerging in the AI Race
The findings from cybersecurity firm Netskope point to a growing shift in how European businesses are evaluating generative AI tools. While platforms like ChatGPT and Gemini continue to gain traction, Grok’s higher rate of rejection suggests that organisations are becoming more selective and are prioritising transparency, reliability and alignment with company values over novelty or brand recognition.
What Is Grok?
Grok is a generative AI chatbot developed by Elon Musk’s company xAI and built into X, the social media platform formerly known as Twitter. Marketed as a bold, “truth-seeking” alternative to mainstream AI tools, Grok is designed to answer user prompts in real time with internet-connected responses. However, a series of controversial and misleading outputs (along with a lack of transparency about how it handles user data and trains its model) have made many organisations wary of its use.
Grok’s Risk Profile Raises Red Flags
While most generative AI tools are being rapidly adopted in European workplaces, Grok appears to be the exception. For example, Netskope’s latest threat report reveals that 25 per cent of European organisations have now blocked the app at network level. In contrast, only 9.8 per cent have blocked OpenAI’s ChatGPT, and just 9.2 per cent have done the same with Google Gemini.
Content Moderation Issue
Part of the issue appears to lie in Grok’s content moderation, or lack thereof. For example, the chatbot has made headlines for spreading inflammatory and false claims, including the promotion of a “white genocide” conspiracy theory in South Africa and casting doubt on key facts about the Holocaust. These incidents appear to have deeply shaken confidence in the platform’s ethical safeguards and prompted scrutiny around how the model handles prompts, training data and user inputs.
Companies More Selective About AI Tools
Gianpietro Cutolo, a cloud threat researcher at Netskope, said the bans on Grok highlight a growing awareness of the risks linked to generative AI. As he explained, organisations are starting to draw clearer lines between different platforms based on how they handle security and compliance. “They’re becoming more savvy that not all AI is equal when it comes to data security,” he said, noting that concerns around reputation, regulation and data protection are now shaping AI adoption decisions.
Privacy and Transparency
Neil Thacker, Netskope’s Global Privacy and Data Protection Officer, believes the trend is indicative of a broader shift in how European firms assess digital tools. “Businesses are becoming aware that not all apps are the same in the way they handle data privacy, ownership of data that is shared with the app, or in how much detail they reveal about the way they train the model with any data that is shared within prompts,” he said.
This appears to be particularly relevant in Europe, where GDPR sets strict requirements on how personal and sensitive data can be used. Grok’s relative lack of clarity over what it does with user input, especially in enterprise contexts, appears to have tipped the scales for many firms.
It also doesn’t help that Grok is closely tied to X, a platform currently under EU investigation for failing to tackle disinformation under the Digital Services Act. The crossover has raised uncomfortable questions about how data might be shared or leveraged across Musk’s various companies.
Not The Only One Blocked
Despite its controversial reputation, it seems that Grok is far from alone in being blocked. The most blacklisted generative AI app in Europe is Stable Diffusion, an image generator from UK-based Stability AI, which is blocked by 41 per cent of organisations due to privacy and licensing concerns.
However, Grok’s fall from grace stands out because of how stark the contrast is with its peers. ChatGPT, for instance, remains by far the most widely used generative AI chatbot in Europe. Netskope’s report found that 91 per cent of European firms now use some form of cloud-based GenAI tool in their operations, suggesting that the appetite for AI is strong, but users are choosing carefully.
The relative trust in OpenAI and Google reflects the degree to which those platforms have invested in transparency, compliance documentation, and enterprise safeguards. Features such as business-specific data privacy settings, clearer disclosures on training practices, and regulated API access have helped cement their position as ‘safe bets’ in regulated industries.
Musk’s Reputation
There’s also a reputational issue at play, i.e. Elon Musk has become a polarising figure in both tech and politics, particularly in Europe. For example, Tesla’s EU sales dropped by more than 50 per cent year-on-year last month, with some industry analysts attributing the decline to Musk’s increasingly vocal support of far-right politicians and his role in the Trump administration.
It seems that the backlash may now be spilling over into his other ventures. Grok’s public branding as an unfiltered “truth-seeking” AI has been praised by some users, but in a European context, it risks triggering compliance concerns around hate speech, misinformation, and AI safety.
‘DOGE’ Link
Also, a recent Reuters investigation found that Grok is being quietly promoted within the US federal government through Musk’s (somewhat unpopular) Department of Government Efficiency (DOGE), thereby raising concerns over potential conflicts of interest and handling of sensitive data.
What Are Businesses Doing Instead?
With Grok now off-limits in one in four European organisations, it appears that most companies are leaning into AI platforms with clearer data control options and dedicated enterprise tools. For example, ChatGPT Enterprise and Microsoft’s Copilot (powered by OpenAI’s models) are increasingly popular among large firms for their security features, audit trails, and compatibility with existing workplace platforms like Microsoft 365.
Meanwhile, companies with highly sensitive data are now exploring private GenAI solutions, such as running open-source models like Llama or Mistral on internal infrastructure, or through secured cloud environments provided by AWS, Azure or Google Cloud.
Others are looking at AI governance platforms to sit between employees and GenAI tools, offering monitoring, usage tracking and guardrails. Tools like DataRobot, Writer, or even Salesforce’s Einstein Copilot are positioning themselves not just as generative AI providers, but as risk-managed AI partners.
At the same time, Grok’s trajectory shows how quickly sentiment can shift. Musk’s original pitch for Grok as an edgy, tell-it-like-it-is alternative to Silicon Valley’s AI offerings found some traction among individual users. But in a business setting, particularly in Europe, compliance, reliability, and reputational alignment seem to matter more than iconoclasm.
Regulation Reshaping the Playing Field
The surge in bans against Grok also reflects a change in how generative AI is being governed and evaluated at the institutional level. Across Europe, regulators are moving to tighten rules on artificial intelligence, with the EU’s landmark AI Act expected to set a global precedent. This new framework categorises AI systems by risk level and could impose strict obligations on tools used in high-stakes environments like recruitment, finance, and public services.
That means tools like Grok, which are perceived to lack sufficient transparency or safety mechanisms, could face even greater scrutiny in the future. European firms are clearly starting to anticipate these regulatory pressures, and adjusting their AI strategies accordingly.
Grok’s Market Position May Be Out of Step
At the same time, the pattern of bans has implications for the competitive dynamics of the GenAI sector. For example, while OpenAI, Google and Microsoft have invested heavily in enterprise-ready versions of their chatbots, with controls for data retention, content filtering and auditability, Grok appears less geared towards business use. Its integration into a consumer social media platform and emphasis on uncensored responses make it an outlier in an increasingly risk-aware market.
Security and Deployment Strategies Are Evolving
There’s also a growing role for cloud providers and IT security teams in shaping how AI tools are deployed across organisations. Many companies are now turning to secure gateways, policy enforcement tools, or in some cases, completely air-gapped deployments of open-source models to ensure data stays within strict compliance boundaries. These developments suggest the AI market is maturing quickly, with an emphasis not only on innovation, but on operational control.
What Does This Mean For Your Business?
For UK businesses, the growing rejection of Grok highlights the importance of due diligence when selecting generative AI tools. With data privacy laws such as the UK GDPR still closely aligned with EU regulations, similar concerns around transparency, content reliability and compliance are just as relevant domestically. Organisations operating across borders, particularly those in regulated sectors like finance, healthcare or legal services, are likely to favour tools that not only perform well but also come with clear safeguards, documentation and support for enterprise-grade governance.
More broadly, the story of Grok is a reminder that in today’s AI landscape, branding and ambition are no longer enough. The success of generative AI tools increasingly depends on trust, i.e. trust in how data is handled, how outputs are generated, and how tools behave under pressure. For developers and vendors, that means security, transparency and adaptability must be built into the product from day one. For businesses, it means asking tougher questions before deploying any new tool into day-to-day operations.
While Elon Musk’s approach may continue to resonate with individual users who value unfiltered output or alignment with particular ideologies, enterprise buyers are clearly playing by a different rulebook. They’re looking for stability, accountability and risk management, not provocation. As regulation tightens, that divide is likely to widen.