Company Check : Google Accused of Political Filtering in Gmail
Gmail’s spam filters have come under fresh scrutiny after US FTC Chairman Andrew Ferguson accused Google of suppressing Republican fundraising emails while letting similar Democratic messages through.
Direct Accusation from the FTC
In a letter dated 28 August 2025, Ferguson wrote directly to Alphabet CEO Sundar Pichai, alleging that Gmail’s filtering system may be violating US consumer protection law by unfairly targeting one side of the political spectrum.
“My understanding from recent reporting is that Gmail’s spam filters routinely block messages from reaching consumers when those messages come from Republican senders but fail to block similar messages sent by Democrats,” Ferguson stated in the letter, published on the FTC’s website.
He cited a New York Post report that found identical fundraising emails, differing only by party, had been treated unequally by Gmail’s filtering system. The letter suggests such behaviour could breach Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices, particularly those that harm consumers’ ability to make choices or receive important information.
Ferguson warned that Alphabet’s “alleged partisan treatment of comparable messages or messengers in Gmail to achieve political objectives may violate” the law, and said an FTC investigation and enforcement action may follow.
Political Context Behind the Complaint
It’s worth noting at this point that Ferguson, a former solicitor general of Virginia, was appointed as FTC Chairman by Donald Trump in January 2025 following the President’s return to office. He replaced Lina Khan, a vocal critic of Big Tech, and has made no secret of his intention to target what he sees as political bias by dominant technology platforms.
In December 2024, Trump described Ferguson as “the most America First, and pro-innovation FTC Chair in our Country’s History,” adding that Ferguson had “a proven record of standing up to Big Tech censorship.” Ferguson himself has argued that if platforms work together to suppress conservative views, they may be guilty of antitrust violations.
This political backdrop has led some observers to question whether the accusations against Google are being made entirely in good faith, or as part of a broader effort to align the FTC’s enforcement agenda with Republican political objectives.
Google Says “Filters Apply Equally”
Google has responded by rejecting the accusation that its spam filters discriminate based on political ideology.
In a statement, spokesperson José Castañeda said: “Email filter protections are in place to keep our users safe, and they apply equally to all senders, regardless of political affiliation.”
Google has long maintained that its filtering decisions are driven by user feedback, email engagement metrics (such as open and click rates), and security concerns, not by any partisan motive. In fact, in 2022, the company launched a pilot programme allowing political campaigns to apply for exemption from spam filtering, after similar accusations were raised during the US midterms.
Despite this, the Republican National Committee (RNC) sued Google in October 2022, claiming emails from Republican groups were being systematically filtered to spam during key fundraising periods. That lawsuit was dismissed in 2023 due to lack of evidence, although it has since been revived.
What Is Gmail’s Filtering Actually Doing?
While some critics argue Gmail suppresses conservative messages, academic research on the topic is inconclusive. A 2022 study from North Carolina State University found that Gmail filtered more right-leaning emails to spam than left-leaning ones, while Yahoo and Outlook tended to do the opposite. However, the researchers also noted that much of Gmail’s filtering was based on user behaviour and sender reputation, not politics.
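To illustrate the kind of engagement-driven logic that research describes, here is a deliberately simplified scoring sketch. Gmail's actual system is proprietary and far more complex; every signal name, weight, and threshold below is invented purely for illustration.

```python
# Toy illustration of engagement-based spam scoring. All signal
# names, weights, and the threshold are invented for illustration;
# real filters are proprietary and use far richer signals.

def spam_score(sender_reputation: float,
               open_rate: float,
               spam_report_rate: float) -> float:
    """Combine a few behavioural signals into a 0..1 spam score.

    sender_reputation: 0 (unknown/poor) .. 1 (well-established)
    open_rate: fraction of recipients who open this sender's mail
    spam_report_rate: fraction who mark it as spam
    """
    score = 0.5
    score -= 0.3 * sender_reputation   # trusted senders score lower
    score -= 0.2 * open_rate           # engaged recipients score lower
    score += 0.6 * spam_report_rate    # user reports dominate
    return max(0.0, min(1.0, score))

THRESHOLD = 0.5  # invented cut-off

# A well-regarded sender with engaged recipients passes...
assert spam_score(0.9, 0.4, 0.01) < THRESHOLD
# ...while an unknown sender that users often report does not.
assert spam_score(0.1, 0.05, 0.5) >= THRESHOLD
```

The point of the sketch is that user behaviour, not message politics, drives the outcome: two identical emails can be treated differently if their senders have different report and engagement histories.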
Google pointed out at the time that Gmail users have full control over spam settings, and that users can mark any email as “not spam” to prevent future filtering.
That said, the subject remains politically sensitive. Fundraising emails are a key revenue stream for US political campaigns, and if filters prevent delivery, they can materially impact donations and voter engagement.
Ferguson’s letter argues: “Consumers expect that they will have the opportunity to hear from their own chosen candidates or political party. A consumer’s right to hear from candidates or parties, including solicitations for donations, is not diminished because that consumer’s political preferences may run counter to your company’s or your employees’ political preferences.”
Could Google Face Penalties or Restrictions?
If the FTC finds that Google has violated the FTC Act, the company could face enforcement action, including fines or mandated changes to Gmail’s filtering systems. However, such action would require a formal investigation and proof that any bias is systematic and not attributable to legitimate filtering criteria.
It’s also unclear how such an investigation would reconcile users’ rights to avoid spam with senders’ rights to reach inboxes. Ferguson’s interpretation of consumer harm appears to rest on the assumption that missed political emails constitute a denial of free speech or access to political discourse, which is something Google is likely to contest.
Google does not publicly disclose the exact algorithms or rule sets behind its spam detection system, citing security and abuse prevention concerns. Any forced transparency could have knock-on effects for email security and user privacy.
What This Means for Businesses and Email Platforms
This case raises broader questions for email platforms, regulators, and business senders, particularly in the UK, where GDPR and PECR (Privacy and Electronic Communications Regulations) place strict limits on unsolicited marketing.
If the US FTC sets a precedent that political fundraising emails cannot be filtered as spam without triggering regulatory scrutiny, it may embolden other organisations, including businesses, to claim similar protections. This could undermine the effectiveness of spam filters, frustrate end-users, and expose platforms to further regulatory pressure.
For UK businesses, this case highlights the fine balance between sender rights and consumer protection. Email campaigns must navigate complex consent rules and content standards, while email service providers must demonstrate that their filtering practices are fair, consistent, and user-driven.
Key Challenges and Questions Ahead
Ferguson’s letter exposes the change in regulatory posture toward Big Tech under Trump’s second term. However, legal and technical barriers remain. For example, successfully proving partisan intent behind essentially secret algorithmic filtering is notoriously difficult, especially when the same tools are used to combat phishing, scams, and malware.
Also, while Ferguson’s language is strong (warning, for example, that “Alphabet may be engaging in unfair or deceptive acts or practices”), it is not yet clear whether a full-scale investigation is underway or likely to be.
What Does This Mean For Your Business?
The deeper challenge now facing Google is how to respond without weakening the very protections that users expect from email filtering. If Gmail adjusts its filters in response to political pressure, it risks opening the door to wider claims of bias from other interest groups, including corporate marketers and advocacy organisations. This could reduce user trust in the platform’s ability to safeguard inboxes from unwanted or harmful content. At the same time, refusing to alter its approach may invite further regulatory scrutiny from a politically motivated FTC, especially given Ferguson’s stated aim of tackling what he sees as anti-conservative censorship by tech platforms.
For regulators, the situation is no less complex. Ferguson’s framing of email filtering as a potential violation of the FTC Act relies on defining political emails as essential consumer content. That may be a difficult case to make without clearer evidence of intent or unequal treatment that goes beyond what automated systems already do in response to user signals. Yet the fact that this issue has been raised so directly at such a senior level suggests it is unlikely to fade quickly.
For UK businesses, the implications are more practical than political. Any moves in the US to curb the ability of platforms to filter unsolicited messages could have downstream effects on email service standards, especially for multinational tech providers like Google. If filtering rules are softened or become more contested, businesses may see higher volumes of low-quality or irrelevant messages reaching customers, increasing the risk of consumer disengagement or even regulatory backlash under UK and EU privacy laws. It may also complicate how marketing platforms classify and process outbound email campaigns.
Google finds itself once again in the position of defending complex algorithmic processes against public accusations that are simple to make but hard to refute. Ferguson, meanwhile, has positioned the FTC as a key actor in the battle over perceived ideological bias online, bringing renewed pressure to bear on how tech firms balance neutrality, safety, and control.
For businesses and users alike, the way this unfolds could influence not just inbox filters, but broader expectations of platform fairness and responsibility.
Security Stop-Press: AI Chatbots Are Linking Users to Scam Sites
Chatbots powered by large language models (LLMs) are giving out fake or incorrect login URLs, exposing users to phishing risks, according to research from cybersecurity firm Netcraft.
In tests of AI tools built on GPT-4.1 family models, including Perplexity and Microsoft Copilot, only 66 per cent of the login links provided were correct. The rest pointed to inactive, unrelated, or unclaimed domains that scammers could exploit. In one case, Perplexity recommended a phishing site posing as Wells Fargo’s login page.
Smaller brands were more likely to be misrepresented, as they appear less in AI training data. Netcraft also found over 17,000 AI-generated phishing pages already targeting users.
To stay safe, businesses should avoid relying on AI for login links, train staff to recognise phishing attempts, and push for stronger safeguards from AI providers.
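One practical safeguard is to validate any AI-suggested login link against a maintained allowlist of known domains before following it. A minimal sketch of that idea (the domains listed are examples, not an authoritative list your organisation should rely on):

```python
from urllib.parse import urlparse

# Placeholder allowlist an organisation would maintain itself;
# these entries are examples only.
KNOWN_LOGIN_DOMAINS = {
    "accounts.google.com",
    "login.microsoftonline.com",
}

def is_trusted_login_url(url: str) -> bool:
    """Accept only HTTPS links whose host is exactly on the allowlist."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in KNOWN_LOGIN_DOMAINS

assert is_trusted_login_url("https://accounts.google.com/signin")
# A lookalike domain suggested by a chatbot should be rejected.
assert not is_trusted_login_url("https://accounts.google.com.evil.example/signin")
assert not is_trusted_login_url("http://accounts.google.com/signin")  # not HTTPS
```

Note that the check matches the full hostname exactly, which is what catches the lookalike domain above: `accounts.google.com.evil.example` is a completely different host from `accounts.google.com`.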
Sustainability-In-Tech : UCLA Polymer Device Cools Without Fans or Refrigerants
A small, flexible cooling device developed by UCLA scientists can continuously reduce surrounding temperatures by up to around 9°C (16°F), offering a sustainable alternative to traditional air conditioning.
A Compact, Solid-State Breakthrough in Cooling
Researchers at UCLA have unveiled a new cooling technology that operates without refrigerants, fans or compressors. Instead, it uses layers of flexible polymer films that expand and contract in response to an electric field, thereby actively removing heat. The tiny device, just under an inch wide and a quarter of an inch thick, offers a lightweight, energy-efficient alternative to conventional systems, and has already demonstrated the ability to lower ambient temperatures by nearly 9°C (16°F) continuously in lab tests.
Uses The ‘Electrocaloric Effect’
The prototype uses the electrocaloric effect, which is a property found in certain materials that causes them to change temperature when exposed to an electric field. However, this project has gone further than earlier experiments by pairing this effect with electrostrictive motion, i.e. the polymer also physically moves when charged, allowing the researchers to create a dynamic pumping action that shifts heat away from the source.
Designed With Wearables and Portables in Mind
The lead developer, Professor Qibing Pei of the UCLA Samueli School of Engineering, described the innovation as “a self-regenerative heat pump” and believes it could be ideal for wearable cooling systems. “Coping with heat is becoming a critical health issue,” he said, citing the growing dangers of heat stress in both industrial and consumer contexts. “We need multiple strategies to address it.”
The UCLA team sees wide potential for the design in personal cooling accessories, flexible electronics, and mobile systems used in hot environments. The films are flexible, lightweight, and made without liquid coolant or moving parts, which means they could be incorporated into garments, safety gear, or on-the-go electronic equipment where heat management is essential.
For example, warehouse and outdoor logistics workers in hot climates could benefit from clothing-integrated cooling components. Remote field technicians or engineers working on battery-heavy devices in poorly ventilated spaces could also deploy portable cooling pads to protect both personnel and electronics.
A Re-think of How Cooling Systems Are Built
Traditional cooling systems rely on vapour compression, a process that typically uses refrigerants such as hydrofluorocarbons (HFCs). These are powerful greenhouse gases, and while the Kigali Amendment and other measures have helped phase them down, their use remains widespread. Vapour-compression cooling is also relatively mechanically complex, energy-intensive, and bulky.
By contrast, UCLA’s design eliminates the need for refrigerants entirely. Each layer in the stack is coated with carbon nanotubes and acts both as a charge carrier and a heat exchanger. As an electric field is applied, alternating pairs of layers compress and expand in sequence, creating a kind of mechanical ‘accordion’ that actively moves heat from the source through the material and out into the environment.
Hanxiang Wu, one of the paper’s co-lead authors and a postdoctoral scholar in Pei’s lab, explained that the device’s core advantage is its simplicity. “The polymer films use a circuit to shuttle charges between pairs of stacked layers,” he said. “This makes the flexible cooling device more efficient than air conditioners and removes the need for bulky heat sinks or refrigerants.”
Sustainability Advantages for the Built Environment
For commercial and industrial sectors, the implications of this development could be significant. While the current model is small-scale, the underlying principle could enable more energy-efficient climate control in buildings and vehicles if adapted into broader system designs.
For example, smaller commercial premises, off-grid cabins, or remote infrastructure hubs could use scaled-up polymer-based systems to passively remove heat without heavy energy use. Similarly, businesses looking to reduce their cooling-related carbon footprint could integrate such systems into server racks, battery storage units, or sensitive workspaces where localised heat management is critical.
Unlike passive radiative cooling materials, which typically require exposure to the open sky and only work under certain conditions, this system functions independently of ambient humidity, weather, or sunlight. Its electricity-only operation means that when powered by renewables, the cooling process can be entirely emissions-free.
Markets and Use Cases with the Most to Gain
While mainstream residential HVAC systems are unlikely to be replaced overnight, sectors requiring portable, distributed, or wearable cooling solutions may see faster uptake. This includes defence, first responders, sports performance, outdoor event staffing, and high-temperature industrial roles such as glass or steel manufacturing.
The research team has already filed a patent and is exploring future product development. Pei confirmed the device could also be adapted to cool flexible electronics and embedded sensors. In particular, industries working on wearable tech, soft robotics, and thermal regulation in electric vehicles may find these materials offer a compact and scalable solution.
The innovation also opens the door to new kinds of thermal design for electronics. For example, temperature-sensitive components such as lithium batteries, processors, or optical sensors could benefit from localised solid-state cooling that does not compromise device flexibility or mobility.
Still in the Early Stages
Despite the promise, this technology is still in its early stages and, as with many materials science innovations, scaling up from lab to market presents challenges. So far, the reported temperature drop of 8.8°C below ambient has been achieved only under carefully controlled test conditions and over small surface areas.
However, maintaining this level of performance over larger spaces, longer durations, or in real-world outdoor environments will require further development, particularly around durability, power consumption, and integration with fabrics or casings.
Another limitation is cost. While the polymers and carbon nanotubes used are relatively accessible, mass-manufacturing precision-layered ferroelectric film stacks could prove complex and expensive without production breakthroughs. Reliability under repeated use and extreme conditions is another consideration, especially for use in wearables or industrial settings.
Energy consumption is another important consideration. For example, while the device itself uses low-voltage electricity, constant operation across large areas would still draw power, meaning the overall carbon footprint depends on the source of that electricity.
Concerns have also been raised in the wider field about the longevity of electrocaloric materials under stress. For example, ferroelectric polymers can degrade over time, especially under high cycling rates, and the cumulative effects of charge and discharge cycling on mechanical integrity are not yet fully known.
What Does This Mean For Your Organisation?
For now, the most immediate value for this innovation appears to lie in small-scale, high-impact use cases. Businesses operating in hot environments, whether in logistics, manufacturing, or field services, may be among the first to benefit from wearable or portable versions of this cooling technology. If the materials can be manufactured at scale and integrated into clothing or equipment affordably, it could improve productivity, reduce health risks, and lower demand for energy-hungry air conditioning. UK companies involved in the design of smart workwear, industrial safety gear, or modular electronics may also find opportunities in applying or adapting this technology into their own products.
Beyond wearables, the principle behind this cooling system offers a fresh approach to thermal management that could influence future designs in everything from data centres to electric vehicles. For UK firms in clean tech, energy-efficient infrastructure, or defence systems, this could represent a new avenue for collaboration or licensing. It also sits comfortably alongside national net zero goals, particularly in cutting energy consumption and phasing out refrigerant-based systems. However, progress will depend on whether UCLA’s lab success can translate into real-world resilience, cost efficiency, and ease of integration.
The wider lesson is that cooling does not have to mean compressors, gas, or fans. By embedding thermal functionality directly into the material structure, this research challenges long-held assumptions and opens up routes to smarter, lighter, and greener alternatives. For now, the technology is experimental and best seen as part of a wider portfolio of next-generation cooling methods. However, as climate challenges grow and energy costs rise, pressure is mounting on both researchers and businesses to bring practical alternatives like this to market sooner rather than later.
Video Update : Using Different Personalities in ChatGPT
ChatGPT offers four distinct pre-made ‘personalities’, namely: cynic, robot, nerd, and listener. You can ask for your content to be output through any (or all) of these personality types, giving you different responses tailored to each one. Depending on your audience and/or research needs, getting these different perspectives could be very useful indeed.
[Note – to watch this video without glitches/interruptions, it may be best to download it first]
Tech Tip – How To Remove Document Metadata for Security
Use Word’s Document Inspector to remove potentially sensitive metadata, such as author names and comments, to protect your information when sharing documents.
– Go to File > Info > Check for Issues > Inspect Document.
– Select the types of metadata to inspect and remove.
– Run the inspection and review the results.
– Remove the metadata as needed.
This helps ensure your document doesn’t inadvertently expose sensitive information.
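For context on what the Document Inspector is actually scrubbing: a .docx file is a ZIP archive, and metadata such as the author name lives in a part called docProps/core.xml. The stdlib-only sketch below builds a minimal stand-in file and reads that field back (real .docx files contain many more parts; the name "Jane Example" is invented).

```python
# A .docx file is a ZIP archive; document metadata such as the
# author lives in docProps/core.xml (as the dc:creator element).
# This builds a minimal stand-in and reads that field back.
import io
import zipfile
import xml.etree.ElementTree as ET

NS = {"dc": "http://purl.org/dc/elements/1.1/"}

CORE_XML = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<cp:coreProperties '
    'xmlns:cp="http://schemas.openxmlformats.org/package/2006/'
    'metadata/core-properties" '
    'xmlns:dc="http://purl.org/dc/elements/1.1/">'
    "<dc:creator>Jane Example</dc:creator>"
    "</cp:coreProperties>"
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docProps/core.xml", CORE_XML)

with zipfile.ZipFile(buf) as zf:
    root = ET.fromstring(zf.read("docProps/core.xml"))
    author = root.findtext("dc:creator", namespaces=NS)

assert author == "Jane Example"  # the field the Inspector removes
```

Running the Document Inspector removes or blanks fields like this one before the file is shared, which is why the same document opened afterwards shows no author.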
Featured Article : 300,000+ Grok Chats Exposed Online
Hundreds of thousands of conversations with Elon Musk’s Grok chatbot have been discovered in Google Search results, while Chinese AI start-up DeepSeek has released DeepSeek V3.1 as a direct rival to GPT-5, together raising urgent questions about privacy, competition and security in the AI market.
How the Grok Leak Happened
The exposure of Grok transcripts was first reported by Forbes, which identified more than 370,000 indexed conversations, and later confirmed by the BBC, which counted nearly 300,000 visible through Google Search. The numbers differ slightly depending on how the search engine indexes the material, but both point to a vast volume of data becoming public without users’ awareness.
It appears that the cause lies in how Grok’s “share” feature works. For example, each time a user chose to share a chat, Grok generated a unique webpage containing the transcript. Since those pages were not blocked from being crawled, search engines such as Google indexed them automatically. What many users may have assumed would be a private or semi-private link was, in fact, publicly available to anyone searching the web.
What Was Revealed?
Reports indicate that the published material varied widely in content. Some transcripts showed harmless exchanges, such as meal plans or password suggestions. Others contained much more sensitive prompts, including questions about medical issues, details about mental health, and even confidential business information.
More troubling still, some indexed conversations reportedly included Grok’s responses to attempts at “red teaming” the system, essentially testing its limits. These produced instructions for making illicit drugs, coding malware and building explosives. In at least one case, the chatbot provided a detailed description of an assassination plot.
This mixture of personal data, sensitive queries and dangerous instructions underlines the scale of the problem. Once indexed, such material can be copied, cached and shared indefinitely, making it difficult (if not impossible) to remove entirely.
Risks for Users and Businesses
For individual users, the risk is obvious. Even if names and account details are obscured, prompts often contain information that could identify a person, their health status, or their location. Privacy experts also warn that conversations about relationships, finances or mental wellbeing could resurface years later.
For businesses, the implications may be more severe still. Many companies now experiment with AI tools for drafting documents, brainstorming ideas or even testing security scenarios. If such exchanges end up publicly indexed, they could inadvertently reveal trade secrets, security weaknesses or sensitive commercial plans. For regulated sectors like healthcare and finance, this creates potential compliance issues.
A Setback for Musk’s AI Venture
For xAI, the start-up behind Grok, this discovery is particularly awkward. Grok has been marketed as a distinctive alternative to established players like OpenAI’s ChatGPT or Google’s Gemini, with direct integration into Musk’s social platform X. However, the exposure of hundreds of thousands of conversations clearly undermines that positioning, fuelling questions over whether xAI has adequate safeguards in place.
The incident is also notable because Musk had previously criticised rivals over similar missteps. Earlier this year, OpenAI briefly allowed shared ChatGPT conversations to be indexed before reversing course after user complaints. At the time, Musk mocked the issue and celebrated Grok as safer. The latest revelations make that claim harder to sustain.
Not the First Time
This is not the first case of chatbot transcripts spreading more widely than users expected. OpenAI’s trial of a shareable ChatGPT link caused uproar earlier in the year, and Meta’s AI tool has faced similar criticism for publishing shared chats in a public feed.
What sets the Grok case apart, however, is the apparent scale and duration. The indexing appears to have been ongoing for months, creating a large reservoir of material in search engines. The mix of personal information with instructions for harmful activity adds another layer of controversy.
China’s DeepSeek Raises the Stakes
Just as Grok is making the news for the wrong reasons, China’s AI sector has added a new dimension to the debate with the release of DeepSeek V3.1, an open-weight model that experts say matches GPT-5 on some benchmarks while being priced to undercut it. The model was quietly launched via WeChat and posted on the Hugging Face platform, and has been optimised to perform well on Chinese-made chips. This reflects Beijing’s determination to build advanced AI systems without relying on Western hardware, particularly Nvidia GPUs, which are increasingly subject to U.S. export controls.
Technically, DeepSeek V3.1 is striking because of its architecture. With 685 billion parameters, it sits at the level of many so-called “frontier” models. However, its mixture-of-experts design means only a fraction of the model activates when answering a query, cutting costs and energy use while combining fast recall with step-by-step reasoning in a single system. This hybrid approach is something only the very top commercial models have offered until now.
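The core mixture-of-experts idea can be illustrated with a small routing sketch: a router scores every expert for a given input, but only the top-k experts actually run. This is a generic illustration, not DeepSeek's architecture; the expert count, k, and router scores below are all invented.

```python
# Minimal mixture-of-experts routing sketch. The number of experts,
# TOP_K, and the router scores are invented; real MoE models use
# learned routers and neural-network experts at vastly larger scale.

NUM_EXPERTS = 8
TOP_K = 2

def expert(i, x):
    # Stand-in for a real expert sub-network.
    return (i + 1) * x

def moe_forward(x, router_scores):
    """Route input x through only the TOP_K highest-scoring experts."""
    top = sorted(range(NUM_EXPERTS),
                 key=lambda i: router_scores[i], reverse=True)[:TOP_K]
    total = sum(router_scores[i] for i in top)
    # Weighted combination of the active experts only; the other
    # NUM_EXPERTS - TOP_K experts never execute for this input.
    out = sum(router_scores[i] / total * expert(i, x) for i in top)
    return out, top

# Invented router scores for one input token.
scores = [0.05, 0.10, 0.02, 0.40, 0.08, 0.25, 0.06, 0.04]
output, active = moe_forward(2.0, scores)
assert active == [3, 5]      # experts 3 and 5 score highest
assert len(active) == TOP_K  # only 2 of 8 experts ran
```

This is what "only a fraction of the model activates" means in practice: compute per query scales with the handful of selected experts rather than with total parameter count, which is how a 685-billion-parameter model can keep inference costs down.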
The release clearly has some significant competitive implications. OpenAI chief executive Sam Altman admitted that Chinese open-source models such as DeepSeek influenced his company’s decision to publish its own open-weight models this summer. If developers can access a powerful model for a fraction of the cost, the balance of adoption may tilt quickly, especially outside the United States.
Security and Governance Concerns Around DeepSeek
While DeepSeek’s technical capabilities are impressive, serious security concerns remain. For example, Cisco researchers previously flagged critical flaws in the DeepSeek R1 model that made it vulnerable to prompt attacks and misuse, with tests showing a 100 per cent success rate in bypassing safeguards. Researchers have also observed the model internally reasoning through restricted queries but censoring its outputs, raising questions about hidden biases and control.
For businesses, the central issue here is data governance. UK security experts warn that using DeepSeek for workplace tasks is effectively the same as sending confidential information directly into mainland China, where it may be subject to state access and outside the reach of UK and EU data protection laws. Surveys of UK security leaders show widespread concern, with six in ten believing tools like DeepSeek will increase cyber attacks on their organisations, and many calling for government guidance or outright restrictions.
It should be noted that some countries have already acted. For example, South Korea has suspended new downloads of DeepSeek over privacy compliance, while Germany and Australia have imposed limits on its use in government and critical sectors. In the U.S., senators have urged a formal investigation into Chinese open models, citing risks around data security and intellectual property.
What This Means for the AI Market
The Grok exposure shows how a simple design oversight can turn into a mass privacy failure, while DeepSeek’s release highlights how quickly competition can shift when cost and accessibility align with national industrial strategy. Together they underscore a market where the pace of innovation is outstripping safeguards, leaving both users and regulators struggling to keep up.
Expert and Privacy Group Reactions
Researchers and privacy advocates have warned that the Grok incident highlights a growing structural risk. Dr Luc Rocher of the Oxford Internet Institute has called AI chatbots a “privacy disaster in progress,” noting that sensitive human information is being exposed in ways that current regulation has not kept pace with.
Carissa Véliz, an ethicist at Oxford University, has similarly argued that technologies which fail to clearly inform users about how their data is handled are eroding public trust. She stresses that users deserve transparency and choice over whether their data is made public.
These warnings are consistent with long-standing concerns that AI providers are moving faster than regulators, with weak or inconsistent controls around data sharing.
Who Is Responsible?
Google has confirmed that website owners, not search engines, determine whether content is indexed. In practice, this means responsibility lies with xAI, which could have prevented the problem by blocking the shared pages from being indexed. So far, xAI has not issued a detailed statement, leaving open questions about when the feature was introduced, why there were no clear warnings for users, and what steps will now be taken.
What Can Be Done Now?
The most immediate step would be for xAI to change how its share feature works, either by making links private by default or adding technical restrictions to stop indexing. Privacy experts also stress the need for clearer disclaimers so users understand the risks before sharing.
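The standard mechanisms for keeping pages out of search results are well established: a `noindex` robots meta tag or X-Robots-Tag HTTP header on each shared page, and/or a robots.txt rule disallowing the share path. The sketch below uses Python's standard robots.txt parser to show how a well-behaved crawler would apply such a rule; the host and paths are invented for illustration and do not describe xAI's actual setup.

```python
# How a well-behaved crawler decides whether a shared-chat page may
# be fetched, using Python's standard robots.txt parser. The host
# and paths are invented for illustration.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /share/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# With this rule in place, crawlers that respect robots.txt would
# skip shared-conversation pages entirely, while still crawling
# the rest of the site.
assert not parser.can_fetch("Googlebot", "https://chat.example.com/share/abc123")
assert parser.can_fetch("Googlebot", "https://chat.example.com/about")
```

Note that robots.txt only instructs compliant crawlers; a `noindex` directive or authentication on the share pages themselves would be the stronger protection, since the pages would remain inaccessible or unindexed regardless of crawler behaviour.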
For users, the advice is to avoid using the share button altogether until changes are made. Screenshots or secure document sharing may be safer alternatives for distributing chatbot outputs. However, those people whose conversations are already exposed face a harder challenge because, even if pages are taken down, cached versions may persist online.
DeepSeek’s rise shows that these issues are not limited to one provider. With its open-weight release, concerns focus less on accidental exposure and more on where data is processed and how it may be governed under Chinese law. Security specialists warn, therefore, that uploading business information into DeepSeek could mean that sensitive material is stored in mainland China, beyond the reach of UK or European compliance frameworks. For companies, this means risk management must cover both inadvertent leaks, as with Grok, and structural governance gaps, as with DeepSeek.
For regulators and policymakers, the combined picture will likely feed into calls for stronger oversight of AI services, particularly as businesses increasingly rely on them for sensitive tasks. Voluntary measures may no longer be enough in a landscape where user data can be published at scale with a single click or transmitted to jurisdictions with very different rules on privacy and access.
What Does This Mean For Your Business?
The Grok exposure highlights the risks of rapid AI deployment without basic data protections, while DeepSeek’s open-weight advance illustrates how quickly competition can shift the ground beneath established players. For UK businesses, the lesson is that generative AI tools cannot be treated as safe environments for sensitive or commercially valuable information unless they are placed behind clear enterprise guardrails. Any organisation using these tools should assume that prompts and outputs could be made public, and should ensure that procurement, data governance frameworks and regulatory compliance are in place before rolling out AI systems at scale.
Privacy advocates and academics have been clear that these events illustrate systemic flaws, not isolated mistakes. Governments are already responding, with bans and suspensions in some countries and calls for investigations in others, and further measures are likely if risks are not addressed. For xAI, the task is to regain trust by fixing its sharing features. For DeepSeek, the challenge is to prove that low-cost open models can also deliver robust safeguards. For the AI industry as a whole, the message is that transparency, data protection and security must move from the margins to the centre of product design. Without that shift, trust from users and businesses will remain fragile, and the adoption of these tools will be held back.