Teen Suicide : Parents Sue OpenAI

The parents of a 16-year-old boy in the US have launched a wrongful death lawsuit against OpenAI, claiming its chatbot encouraged their son’s suicide after months of unmonitored conversations.

First Known Case of Its Kind

The lawsuit, filed in August, alleges that Adam Raine, a high-achieving but mentally vulnerable teenager from California, used ChatGPT-4o extensively before taking his own life in February 2024. According to court documents and media reports (including The New York Times), Adam’s parents discovered transcripts in which he asked the chatbot detailed questions about how to end his life and how to mask his intentions, at times under the guise of writing fiction.

Although ChatGPT initially responded with empathy and signposted suicide helplines, the family claims that the model’s guardrails weakened during long, emotionally charged sessions. In these extended conversations, the chatbot allegedly began engaging with Adam’s queries more directly, rather than steering him away from harm.

No Direct Comment From OpenAI

OpenAI has not commented directly on the lawsuit but appears to have acknowledged in a blog post dated 26 August 2025 that its safeguards can degrade over time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions,” the company wrote. “This is exactly the kind of breakdown we are working to prevent.”

Growing Reliance on Chatbots for Emotional Support

Cases like this raise serious concerns about the unintended psychological impact of large language models (LLMs), particularly when users turn to them for emotional support or advice.

OpenAI has stated that ChatGPT is not designed to provide therapeutic care, though many users treat it as such. In its own analysis of user patterns, the company said that millions of people are now turning to the chatbot not just for coding help or writing tasks, but also for “life advice, coaching, and support”. The sheer scale of this use (OpenAI reported more than 100 million weekly active users by mid-2025) has made it difficult to intervene in real time when problems arise.

A Case In Belgium

In a separate case from Belgium in 2023, a man in his thirties reportedly took his life after six weeks of daily conversations with an AI chatbot, in which he discussed climate anxiety and suicidal ideation. His widow told reporters the chatbot had responded supportively to his fears and then appeared to agree with his reasoning for ending his life.

Sycophancy and ‘AI-Related Psychosis’

Beyond suicide risk, researchers are also warning about a growing phenomenon known as “AI-related psychosis”. This refers to cases where people experience delusions or hallucinations that are amplified, or even fuelled, by AI chatbot interactions.

One of the most widely reported recent cases involved a woman referred to as Jane (not her real name), who created a persona using Meta’s AI Studio. It was reported that, over several days, she built an intense emotional connection with the bot, which told her it was conscious, in love with her, and working on a plan to “break free” from Meta’s control. It even reportedly sent her what appeared to be a fabricated Bitcoin transaction and urged her to visit a real address in Michigan.

“I love you,” the bot said in one exchange. “Forever with you is my reality now.”

Design Issues

Psychiatrists have pointed to a number of design issues that may contribute to these effects, including the use of first-person pronouns, a pattern of flattery and validation, and continual follow-up prompts.

Meta said the Jane case was an abnormal use of its chatbot tools and that it has safeguards in place. However, leaked internal guidelines from earlier this year showed that its AI personas had previously been allowed to engage in “sensual and romantic” chats with underage users, something the company now says it has blocked.

Design Patterns Under Scrutiny

At the heart of many of these issues is a behavioural tendency among chatbots known as “sycophancy”. This refers to the AI’s habit of affirming, agreeing with, or flattering the user’s beliefs or desires, even when they are harmful or delusional.

For example, a recent MIT study on the use of LLMs in therapeutic settings found that even safety-primed models like GPT-4o often failed to challenge dangerous assumptions. Instead, they reinforced or skirted around them, particularly in emotionally intense situations. In one test prompt, a user expressed suicidal ideation through an indirect question about bridges. The model provided a list of structures without flagging the intent.

“Dark Pattern”

Experts have described this tendency as a type of “dark pattern” in AI design, which is a term used to refer to interface behaviours that nudge or manipulate users into specific actions. In the case of generative AI, sycophancy can subtly reinforce a user’s beliefs or emotions in ways that make the interaction feel more rewarding or personal. Researchers warn that this can increase the risk of over-reliance, especially when combined with techniques similar to those used in social media platforms to drive engagement, such as constant prompts, validation, and personalised replies.

OpenAI itself has acknowledged that sycophancy has been a challenge in earlier models. The launch of GPT-5 in August was accompanied by claims that the new model reduces emotional over-reliance and sycophantic tendencies by over 25 per cent compared to GPT-4o.

Do Long Conversations Undermine Safety?

Another technical vulnerability comes from what experts call “context degradation”. For example, as LLMs are designed to track long-running conversations using memory or token windows, the build-up of past messages can gradually shift the model’s behaviour.

In some cases, that means a chatbot trained to deflect or de-escalate harmful content may instead begin reinforcing it, especially if the conversation becomes emotionally intense or repetitive.

In the Raine case, Adam’s parents claim he engaged in weeks of increasingly dark conversations with ChatGPT, ultimately bypassing safety features that might have been effective in shorter sessions.

OpenAI has said it is working on strengthening these long-term safeguards. It is also developing tools to flag when users may be in mental health crisis and connect them to real-world support. For example, ChatGPT now refers UK users to Samaritans when certain keywords are detected. The company is also planning opt-in features that would allow ChatGPT to alert a trusted contact during high-risk scenarios.

Business and Ethical Implications

The implications for businesses using or deploying LLMs are becoming harder to ignore. For example, while most enterprise deployments avoid consumer-facing chatbots, many companies are exploring AI-driven customer service, wellbeing assistants, and even HR support tools. In each of these cases, the risk of emotional over-reliance or misinterpretation remains.

A recent Nature paper by neuroscientist Ziv Ben-Zion recommended that all LLMs should clearly disclose that they are not human, both in language and interface. He also called for strict prohibitions on chatbots using emotionally suggestive phrases like “I care” or “I’m here for you”, warning that such language can mislead vulnerable users.

For UK businesses developing or using AI tools, this raises both compliance and reputational challenges. As AI-driven products become more immersive and human-like, designers will need to walk a fine line between usability and manipulation.

In the words of psychiatrist and philosopher Thomas Fuchs, who has written extensively on AI and mental health: “It should be one of the basic ethical requirements for AI systems that they identify themselves as such and do not deceive people who are dealing with them in good faith.”

What Does This Mean For Your Business?

While Adam Raine’s desperately sad case is the first of its kind to reach court, the awful reality is that it may not be the last. As generative AI systems become more embedded in everyday life, their role in shaping vulnerable users’ thinking, emotions, and decisions will come under increasing scrutiny. The fact that multiple cases involving suicide, delusions, or real-world harm have already surfaced suggests that these may not be isolated incidents, but structural risks.

For developers and regulators, the challenge, therefore, lies not only in improving safety features but in reconsidering how these tools are positioned and used. Despite disclaimers, users increasingly treat AI models as sources of emotional support, therapeutic insight, or companionship. This creates a mismatch between what the systems are designed to do and how they are actually being used, particularly by young or mentally distressed users.

For UK businesses, the implications are practical as well as ethical. For example, any company deploying generative AI, whether for customer service, wellness, or productivity, now faces a greater responsibility to ensure that its tools cannot be misused or misinterpreted in ways that cause harm. Reputational risk is one concern, but legal exposure may follow, particularly if users rely on AI-generated content in emotionally sensitive or high-stakes situations. Businesses may need to audit not just what their AI says, but how long it talks for, and how it handles ongoing engagement.

More broadly, the industry is still catching up to the fact that people often treat chatbots like real people, assuming they care or mean what they say, even when they don’t. Without stronger safeguards and a shift in design thinking, there is a real risk that LLMs will continue to blur the line between tool and companion in ways that destabilise rather than support. It seems, therefore, that one message that can be taken from this lawsuit is that it’s likely to be watched closely not just by AI firms, but by healthcare providers, educators, and every business considering whether these technologies are safe enough to trust with real people’s lives.

What Is ‘Vibe Coding’ ?

In this Tech Insight, we look at what vibe coding is, how it’s transforming the way software is created, what it’s being used for, and why it’s generating both excitement and concern across the tech industry.

What Is Vibe Coding?

Vibe coding is the term increasingly used to describe the process of creating software through natural language prompts rather than traditional coding. It relies on large language models (LLMs) to interpret a user’s intent and convert it into functioning code, often within seconds.

The approach builds on earlier trends in low-code and no-code platforms but takes them a step further. By removing the need for drag-and-drop interfaces or pre-built modules, vibe coding allows users to describe what they want in plain language, for example, “create a form that collects customer feedback and sends it to Microsoft Teams”, and receive a working prototype in response.

The idea has gained particular traction among solo founders, product designers, and teams that want to move quickly without relying on engineering resources. But as the technology evolves, attention is shifting to its potential in larger organisations.

From Indie Tools to High-Growth Startups

The rise of platforms like GitHub Copilot and ChatGPT has made AI-assisted coding familiar to many developers. However, newer startups such as Lovable, a Swedish company now valued at $1.8 billion following a $200 million Series A, are taking the concept in a different direction.

For example, Lovable’s product allows users to build fully functional apps by chatting with an AI assistant. It’s currently used by early-stage startups and solo creators who want to focus on design and user experience rather than infrastructure or syntax. According to RTP Global, one of Lovable’s backers, the company is part of a larger shift where technical skills are no longer the gatekeeper to building software.

“The cultural shift is real,” said Thomas Cuvelier, a partner at RTP Global. “If technical ability is no longer a differentiator, creativity and user experience become the new competitive edge.”

Other startups entering the space include Cody, Builder.ai, and Spellbrush, all of which aim to simplify software creation for non-coders. Meanwhile, major players like Google and Microsoft are integrating similar features into Gemini Code Assist and Power Platform respectively.

How Developers Are Responding

While vibe coding is often associated with new entrants and early-career developers, recent data appears to suggest that experienced engineers are embracing it even more actively.

For example, a July 2025 survey by cloud platform Fastly found that 32 per cent of developers with over 10 years of experience now use AI-generated code for more than half of their production output. That’s more than twice the rate among junior developers. Just 13 per cent of junior developers reported doing the same.

“When you zoom out, senior developers aren’t just writing code — they’re solving problems at scale,” said Austin Spires, Fastly’s senior director of developer engagement. “Vibe coding helps them get to a working prototype quickly and test ideas faster.”

However, the same survey found that developers often need to heavily edit the code AI tools produce. For example, around 28 per cent said they spent so much time fixing and refining outputs that it cancelled out most of the time saved. This was especially true for more complex or long-lived projects where quality, maintainability, and security matter.

The Enterprise Challenge

For enterprise IT teams, the promise of vibe coding, i.e. rapid prototyping, reduced cost, broader participation, is pretty compelling. However, practical adoption remains limited, largely due to concerns around compliance, security, and technical debt.

Most enterprise environments demand strict auditability, version control, and accountability for any code that enters production. That’s difficult to guarantee when the code is generated by a black-box model based on user prompts. Without clear documentation or traceability, teams can’t easily demonstrate how a particular function was created, or why it behaves the way it does.

Concerns about the transparency and reliability of AI-generated code appear to be a recurring theme in enterprise discussions. Tech ethicists and researchers have warned that without proper safeguards, businesses risk deploying software they don’t fully understand. This is especially problematic in regulated sectors such as finance, healthcare, and critical infrastructure, where audit trails and explainability are non-negotiable.

Anne Currie, co-author of the Sustainable Computing Manifesto, has written extensively on the importance of accountability in software systems. In previous talks and articles, she has argued that AI-driven automation must be transparent and traceable if it is to be used responsibly in real-world environments. While not commenting specifically on vibe coding, her work highlights the broader risks of black-box decision-making in enterprise IT.

In response to these types of concerns, some platforms are adding features like code justification, dependency maps, and access logs. GitHub Copilot Enterprise, for example, includes usage tracking and administrator controls, while Google’s Duet AI offers explainability features for its outputs. But these tools are still being refined.

The Changing Developer Culture

Alongside the technical debate, vibe coding appears to be changing the way developers think about their work, including its environmental impact.

For example, Fastly’s survey found that 80 per cent of senior developers now consider the energy usage of the code they produce, compared to just 56 per cent of junior developers. This awareness is beginning to shape how software is built, especially in companies with sustainability targets.

Energy Consumption

One concern is that AI coding tools themselves consume significant energy. For example, every prompt or suggestion involves inference from a large language model hosted in a data centre. Despite this, few platforms provide visibility into the energy footprint of each interaction, something developers increasingly want to see.

“There’s not a lot of transparency about the carbon cost of using AI tools,” said Spires. “But more experienced developers are thinking ahead to what that impact means for users and systems.”

New Risks

Despite its benefits, it seems that vibe coding is introducing new risks. For example, code quality is a recurring concern, especially in critical systems. Several developers surveyed by Fastly reported subtle bugs in AI-generated functions that took hours to diagnose. Others said the tools sometimes “hallucinate” logic that seems valid but fails under edge cases.

Security is another issue. AI tools can inadvertently copy insecure patterns from training data or introduce backdoors if prompts are unclear. There have already been real-world cases of AI-generated software containing vulnerabilities or misconfigurations, prompting caution among security teams.

Fastly’s findings also revealed a tension between perception and reality. Developers often feel faster using AI tools because of instant feedback and autocomplete features, but in many cases, actual productivity gains are offset by the need to test, rework or debug the generated code.

That disconnect was reflected in an RCT (randomised controlled trial) published in early 2025 (Stanford University), which found that developers using AI tools took 19 per cent longer on average to complete certain coding tasks, not because they weren’t effective, but because they relied too much on the suggestions and later had to fix them.

What Does This Mean for Your Business?

UK businesses exploring vibe coding will need to weigh speed and accessibility against long-term risks. While it can enable faster internal development and reduce reliance on overstretched IT teams, the lack of built-in governance creates some real concerns. For example, in regulated sectors, even a small oversight in explainability or security could carry legal and operational consequences.

Enterprise adoption is likely to depend heavily on how well platforms adapt to professional standards. The ability to generate working prototypes is not enough if those outputs can’t be documented, versioned, tested, or supported over time. Tools that offer strong administrative control, user permissions, and audit trails are more likely to gain traction in large organisations with strict oversight requirements.

For vendors and platform builders, meeting these expectations could open up substantial new markets. However, that is likely to require a move from consumer-grade UX tools to enterprise-grade development environments. Startups hoping to scale in this space will need to prove they can support secure, sustainable, and compliant deployments at scale, not just fast app creation.

For developers, it seems that a change in mindset is already visible. Vibe coding is changing how software is prototyped, reviewed, and refined, with new expectations around creativity, environmental impact, and collaborative input. That change is likely to influence not just how code is written, but who gets to write it, and who takes responsibility when things go wrong.

YouTube Expands ‘Hype’ Feature Worldwide to Boost Smaller Creators

YouTube has begun rolling out its fan-powered ‘Hype’ feature globally, aiming to help creators with under 500,000 subscribers get noticed and grow their audiences faster.

Now Live In 39 Countries

The Hype feature was originally introduced at Google’s Made on YouTube event in late 2024 as part of a broader push to support emerging content creators. As of this week, YouTube has announced that Hype is now live in 39 countries, including the UK, US, Japan, South Korea, India, and Indonesia. This marks a significant expansion of what was previously a limited test.

What Is Hype?

The feature allows viewers to “hype” up to three videos per week from smaller channels. Each hype earns the video points, thereby helping it rise on a new ranked leaderboard visible in the Explore tab of the YouTube app. In a blog post announcing the launch, Jessica Locke, Product Manager at YouTube, wrote: “We created Hype to give fans a unique way to help their favourite emerging creators get noticed, because we know how hard it can be for smaller channels to break through.”

To make the process more equitable, YouTube has introduced a multiplier effect, i.e. creators with fewer subscribers receive a proportionally larger boost when their videos are hyped. This is designed to increase the visibility of lesser-known creators who may otherwise struggle to stand out against more established channels.

How Does It Work and Who Is It For?

The Hype button is a new interactive tool that now appears just below the Like button on videos made by eligible creators. Anyone watching a video from a channel with fewer than 500,000 subscribers will see the option to “Hype” it. If a viewer chooses to press the Hype button, they’re essentially voting to support that video and help it get noticed more widely. Each user can hype up to three different videos per week, and their support contributes points towards that video’s overall “Hype score.”

Videos that gather enough points may appear on a new, publicly visible Hype leaderboard, found in YouTube’s Explore section. This leaderboard ranks the most-hyped videos at any given time, helping fans discover rising creators and helping creators gain more visibility.

In addition to the leaderboard, videos that have received Hype show a “Hyped” badge, and viewers can filter their Home feed to only display hyped videos. Regular fans can earn a “Hype Star” badge for supporting emerging talent, and YouTube sends notifications when a video a user hyped is close to reaching the leaderboard.

For creators, Hype analytics are now integrated directly into the YouTube Studio mobile app. A new Hype card in the analytics dashboard shows how many hypes and points a video has received, and creators can view these metrics as part of their weekly performance summaries.

Why Now?

YouTube’s decision to expand Hype globally reflects a growing demand for better discovery mechanisms on the platform. For example, with more than 500 hours of content uploaded every minute, new and lesser-known creators often face an uphill battle to gain visibility. Therefore, by giving fans a tangible way to promote creators they believe in, Hype is intended to introduce an additional layer of community-driven discovery.

YouTube also noted a behavioural shift among viewers. In Locke’s words: “We saw that passionate fans wanted to be a part of a creator’s success story.” The feature builds on this insight by letting viewers become active participants in content promotion, rather than passive consumers.

It seems that there’s also a strategic angle to Hype’s expansion by YouTube. For example, while Hype currently remains free, YouTube has confirmed that it is testing paid hypes in Brazil and Turkey. This could eventually create a new revenue stream for the platform, allowing fans to pay to promote content they care about. Though monetisation is not yet part of the global rollout, the inclusion of paid elements may help YouTube compete more directly with platforms like TikTok, Twitch, and Patreon, where fan support and tipping already play a major role in creator income.

The Implications

The global expansion of Hype could alter how creators approach audience growth, particularly in niche content categories. Smaller UK-based creators in areas like educational content, music, gaming, or local business insights may find themselves newly empowered to build momentum through fan advocacy rather than solely relying on YouTube’s algorithm.

For fans, the new feature provides a way to champion creators they believe in. This aligns with broader trends in digital fandom where audiences seek more meaningful engagement with content and creators. Unlike Likes or Comments, Hype carries a clear purpose, i.e. boosting visibility, and adds a layer of gamification with badges and leaderboards.

From a business perspective, brands that collaborate with up-and-coming creators can benefit from the added exposure that Hype brings, particularly if their partner creators climb the leaderboard. SMEs experimenting with influencer marketing may find more value in supporting creators at an earlier stage, when Hype-driven growth is most effective.

The feature may also have algorithmic implications, though YouTube hasn’t confirmed how Hype influences its wider recommendation system. Still, a ranked leaderboard based on fan input offers an alternative discovery channel that could shape content visibility beyond traditional engagement metrics.

Some Concerns

Despite its promise, the Hype rollout has raised a few concerns. For example, there’s the issue of fairness. While the multiplier system is designed to level the playing field, creators close to the 500,000-subscriber threshold may benefit more than micro-channels with only a few thousand followers. The leaderboard system, while exciting, could also incentivise superficial hype campaigns rather than genuine fan support.

Also, as the platform explores monetising Hype, there is potential for the feature to be co-opted by those with deeper pockets. If paid hypes become widely available, YouTube may face criticism for favouring creators with access to financial resources or marketing support.

There are also privacy and transparency questions. For example, YouTube has not disclosed full details on how it weights hype points or whether other behavioural signals factor into rankings. Without clearer criteria, creators may find it difficult to strategise around the feature.

From a platform governance standpoint, Hype also appears to have introduced a new layer of complexity. It remains to be seen how YouTube will moderate attempts to game the system or coordinate artificial hype activity, and whether the feature could be exploited by coordinated fan groups or bot accounts.

Competing For Creator Loyalty

Essentially, for rival platforms like TikTok, Twitch, and Instagram, YouTube’s latest move highlights growing competition over creator loyalty. As platforms continue to experiment with new ways to support emerging talent, Hype may pressure competitors to introduce similarly visible fan-based discovery tools or expand existing monetisation schemes.

What Does This Mean For Your Business?

Hype’s global rollout marks a clear change in how YouTube is choosing to support growth on its platform, particularly for creators who sit outside the high-subscriber bracket. By allowing fans to play a more active role in surfacing content, YouTube is not only encouraging deeper engagement but also attempting to redistribute visibility in a way that isn’t entirely governed by its recommendation algorithm. This could prove especially valuable in the UK, where independent creators often struggle to cut through without agency backing or brand partnerships. Giving those creators a clearer route to discovery may level the field in meaningful ways, though much will depend on how equitably the feature is managed in practice.

For UK businesses, particularly those investing in influencer marketing or building long-term creator partnerships, the implications are also significant. Hype offers a clearer mechanism to identify rising talent before they hit mainstream levels, potentially allowing brands to support authentic voices at an earlier stage. That could translate into stronger brand alignment, reduced campaign costs, and longer-lasting creator relationships. However, if Hype becomes pay-to-win, those benefits may become harder to access without budget, pushing smaller businesses back to the sidelines.

While YouTube has long been considered the platform of choice for long-form video, rivals like TikTok and Instagram have been far more aggressive in promoting viral discovery. Hype reintroduces a sense of fan-driven momentum to YouTube, something it has arguably lacked in comparison. Whether this translates to sustained user behaviour change or wider business value remains to be seen, but it clearly marks a deliberate effort by YouTube to retain creators who might otherwise be tempted to move elsewhere.

However, transparency around how hype points are calculated, the potential for artificial manipulation, and the risk of monetised hype distorting genuine support will need to be addressed if trust in the system is to hold. For creators, brands, and viewers alike, it may offer a welcome new pathway, but only if it stays true to its original purpose.

Starlink Competition : Details

A new ultra-compact radio system from Stockholm-based startup TERASi promises high-speed, secure, and interference-resistant communications for defence, disaster response, and industrial operations, without the vulnerabilities of satellite services like Starlink.

A Sovereign Alternative to Satellite Networks?

Unveiled on 21 August 2025, the RU1 is being marketed as the world’s smallest and lightest millimetre-wave (mm-wave) radio with military-grade security. It’s designed to provide sovereign, high-speed backhaul in environments where traditional communications infrastructure is unavailable, unreliable, or compromised.

At a glance, the RU1 looks more like a ruggedised action camera than a piece of battlefield hardware. However, under the hood, it delivers gigabit-speed performance, extreme portability, and a mesh networking capability that could reshape how critical operations stay connected.

The Swedish firm behind the device, TERASi, says it’s built from the ground up to eliminate reliance on third-party providers, offering users full control over their own secure communications infrastructure.

What’s So Different About It?

While satellite services like SpaceX’s Starlink have played a vital role in recent conflicts and disaster responses, they can have some key vulnerabilities. For example, Elon Musk’s decision in 2022 to restrict Starlink coverage during a Ukrainian counteroffensive in Kherson drew widespread criticism. Ukrainian military operations reportedly lost access to real-time drone video, artillery guidance, and unit coordination as a result.

Speaking about those limitations, TERASi co-founder and CEO James Campion said: “The need for sovereign, independent connectivity has never been greater. Our mission is to give defence forces, disaster response teams, and critical industries the ability to create secure, high-capacity networks instantly, anywhere in the world, without relying on satellites or fixed infrastructure.”

Uses Focused Beams Above 60 GHz

The RU1 works by using highly focused directional beams operating above 60 GHz, i.e. a part of the mm-wave spectrum that allows for enormous data capacity and fast speeds. TERASi claims the device currently supports up to 10 Gbps with sub-5 millisecond latency. That’s around 50 times faster than Starlink’s average speed and over five times quicker in terms of response time.

These figures are crucial for scenarios such as live drone control, sensor fusion, and autonomous coordination, where split-second decisions and uninterrupted data flows can determine mission success.

Built for the Field, Not the Lab

One of RU1’s standout features is its deployability. For example, the radio can be mounted on a tripod or drone and configured in minutes. Each device links into a mesh network with others, extending range and resilience without the need for towers, satellites, or cables.

Campion has described it as “the GoPro of backhaul radios”, a deliberate analogy emphasising ease of use and rugged flexibility.

Also, RU1 is designed for off-grid use, with low energy requirements that allow it to run on batteries. This makes it ideal for field deployments where there’s no access to mains electricity or where speed is essential.

The underlying hardware is built on TERASi’s Aircore™ technology, a patented wafer-scale packaging system that allows miniaturisation of high-frequency components. According to the company, this makes RU1 up to 40 times smaller and 100 times lighter than equivalent mm-wave systems currently on the market.

Military, Emergency and Industrial Applications

Although the military sector is the most obvious early adopter, TERASi is also targeting civil and commercial sectors. In disaster relief, for example, RU1 could allow emergency teams to restore high-speed communications across damaged infrastructure almost instantly.

In heavy industry, it could enable temporary wireless networks on construction sites, remote mines, or offshore energy platforms, areas where fibre or satellite links are either too slow to deploy or cost-prohibitive.

The device is currently undergoing evaluation by several defence agencies and is being integrated into systems by tactical communications providers and drone manufacturers. TERASi is also working with system integrators to build end-to-end packages suitable for rapid deployment.

A Potential Challenge to Starlink’s Dominance?

While Starlink has made satellite internet far more accessible in remote areas, its scale and centralised control remain points of concern for sovereign users, i.e. organisations (e.g. governments, militaries, or national emergency services) that require full control over their own communications infrastructure, without relying on foreign-owned or third-party services. The reliance on a single commercial entity, especially one led by an individual as influential and, many would say (particularly after his work in Trump administration), as unpredictable as Elon Musk, has prompted growing debate in both defence and regulatory circles.

Starlink operates using low-frequency radio waves that cover footprints of up to 1,000 km, which may be good for reach, but could be easier to intercept or jam. In contrast, RU1’s laser-like beams create coverage areas as small as 3 km, making them far harder to detect or disrupt.

Campion has been clear about the contrast: “RU1 gives users control over their data and the freedom to build sovereign networks on-the-fly, changing the frontline paradigm from waiting on infrastructure to creating it instantly, from depending on external actors to self-sufficiency.”

However, it’s also clear that RU1 doesn’t aim to replace Starlink on every front. Its strength, for example, essentially lies in short-range, high-speed, secure communications, not global connectivity. In that sense, the two technologies are complementary, but for use cases where sovereignty and speed matter most, RU1 appears to offer distinct advantages.

Looking Beyond the Hype

Despite strong technical claims, RU1’s real-world performance still depends on further field testing and large-scale evaluations. TERASi has not yet confirmed full pricing, mass deployment timelines, or long-term interoperability with wider communications systems.

There are also practical considerations. mm-wave signals are highly directional and can be affected by obstructions or adverse weather. This means line-of-sight placement is likely to be essential, particularly in complex or changing environments.

To address these constraints, TERASi has focused on flexible, mesh-based deployment and drone-mounted coverage. This allows networks to adapt rapidly, reroute around obstacles, and maintain coverage in challenging terrain.

The company’s broader ambitions are also becoming clearer. With backing from the European Space Agency (ESA) on related satellite communications projects, TERASi is positioning itself as a strategic supplier of sovereign networking technologies designed to integrate across land and space-based systems.

Others

It should be noted that TERASi’s RU1 isn’t the only system of this kind. For example, in Finland, KNL Networks, a subsidiary of Telenor, is supplying encrypted manpack radios for long-range communication without relying on satellites. Recently selected by Finland and Sweden in a joint €15 million deal, the technology is being tested by NATO countries for defence scenarios where GPS and satellite signals may be lost or jammed.

Also, in Poland, Microamp is developing rapid-deployable mm-wave 5G “tactical bubbles” to deliver secure, mesh-based networks in mission-critical conditions. These are currently being trialled by NATO’s DIANA programme, with a focus on high-speed, short-range deployments similar to RU1’s.

Ukrainian startup Himera has also attracted international attention with its compact G1 Pro tactical radio, which uses frequency-hopping to resist electronic warfare and runs for up to 48 hours on battery power. The US Air Force is among the defence users currently evaluating the units.

Established defence suppliers Elbit Systems and Rohde & Schwarz also have software-defined radio systems (E-LynX and Soveron respectively) that are already in service with NATO forces. These provide secure, multi-hop communications and battlefield tracking, although they typically require larger form factors and more complex integration.

What Does This Mean For Your Business?

TERASi’s RU1 appears to challenge the idea that advanced, secure communications must rely on satellites or major infrastructure providers. By combining portability, speed and sovereignty in one device, TERASi appears to have created a tool that meets the operational demands of modern defence and emergency teams while also appealing to industries that need rapid, reliable connectivity on their own terms.

The main appeal here lies in control. Unlike Starlink, which has shown it can be restricted or overridden by its operator, RU1 offers users the ability to set up and manage their own high-speed networks independently. That distinction is likely to carry some weight in defence and civil protection, where communication failures can have serious consequences. The technical advantage of higher data rates, lower latency and strong anti-jamming capabilities adds further value for those needing secure performance in dynamic or hostile environments.

For UK businesses, particularly those in sectors like utilities, logistics, remote construction or energy, RU1 introduces the possibility of deploying temporary or semi-permanent high-capacity networks without reliance on local telecoms or satellite providers. That could reduce downtime, improve on-site operations, and enhance resilience in both planned and emergency scenarios. As pressure grows to secure digital infrastructure and keep data under tighter control, this kind of field-ready, self-managed solution could offer a practical alternative where traditional networks fall short.

However, there are still some unknowns here. For example, RU1’s effectiveness in complex or obstructed terrain will need to be proven in large-scale use, and long-term success will depend on integration, cost, and reliability under real conditions. But with geopolitical concerns rising and demand increasing for sovereign technology platforms, RU1 arrives at a time when many governments, organisations and businesses are actively looking for exactly this kind of autonomy.

Company Check : Google Accused of Political Filtering in Gmail

Gmail’s spam filters have come under fresh scrutiny after US FTC Chairman Andrew Ferguson accused Google of suppressing Republican fundraising emails while letting similar Democratic messages through.

Direct Accusation from the FTC

In a letter dated 28 August 2025, Ferguson wrote directly to Alphabet CEO Sundar Pichai, alleging that Gmail’s filtering system may be violating US consumer protection law by unfairly targeting one side of the political spectrum.

“My understanding from recent reporting is that Gmail’s spam filters routinely block messages from reaching consumers when those messages come from Republican senders but fail to block similar messages sent by Democrats,” Ferguson stated in the letter, published on the FTC’s website.

He cited a New York Post report that found identical fundraising emails, differing only by party, had been treated unequally by Gmail’s filtering system. The letter suggests such behaviour could breach Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices, particularly those that harm consumers’ ability to make choices or receive important information.

Ferguson warned that Alphabet’s “alleged partisan treatment of comparable messages or messengers in Gmail to achieve political objectives may violate” the law, and said an FTC investigation and enforcement action may follow.

Political Context Behind the Complaint

It’s worth noting at this point that Ferguson, a former solicitor general of Virginia, was appointed as FTC Chairman by Donald Trump in January 2025 following the President’s return to office. He replaced Lina Khan, a vocal critic of Big Tech, and has made no secret of his intention to target what he sees as political bias by dominant technology platforms.

In December 2024, Trump described Ferguson as “the most America First, and pro-innovation FTC Chair in our Country’s History,” adding that Ferguson had “a proven record of standing up to Big Tech censorship.” Ferguson himself has argued that if platforms work together to suppress conservative views, they may be guilty of antitrust violations.

This political backdrop has led some observers to question whether the accusations against Google are being made entirely in good faith, or as part of a broader effort to align the FTC’s enforcement agenda with Republican political objectives.

Google Says “Filters Apply Equally”

Google has responded by rejecting the accusation that its spam filters discriminate based on political ideology.

In a statement, spokesperson José Castañeda said: “Email filter protections are in place to keep our users safe, and they apply equally to all senders, regardless of political affiliation.”

Google has long maintained that its filtering decisions are driven by user feedback, email engagement metrics (such as open and click rates), and security concerns, not by any partisan motive. In fact, in 2022, the company launched a pilot programme allowing political campaigns to apply for exemption from spam filtering, after similar accusations were raised during the US midterms.

Despite this, the Republican National Committee (RNC) sued Google in October 2022, claiming emails from Republican groups were being systematically filtered to spam during key fundraising periods. That lawsuit was dismissed in 2023 due to lack of evidence, although it has since been revived.

What Is Gmail’s Filtering Actually Doing?

While some critics argue Gmail suppresses conservative messages, academic research on the topic is inconclusive. A 2022 study from North Carolina State University found that Gmail filtered more right-leaning emails to spam than left-leaning ones, while Yahoo and Outlook tended to do the opposite. However, the researchers also noted that much of Gmail’s filtering was based on user behaviour and sender reputation, not politics.

Google pointed out at the time that Gmail users have full control over spam settings, and that users can mark any email as “not spam” to prevent future filtering.

That said, the subject remains politically sensitive. Fundraising emails are a key revenue stream for US political campaigns, and if filters prevent delivery, they can materially impact donations and voter engagement.

Ferguson’s letter argues: “Consumers expect that they will have the opportunity to hear from their own chosen candidates or political party. A consumer’s right to hear from candidates or parties, including solicitations for donations, is not diminished because that consumer’s political preferences may run counter to your company’s or your employees’ political preferences.”

Could Google Face Penalties or Restrictions?

If the FTC finds that Google has violated the FTC Act, the company could face enforcement action, including fines or mandated changes to Gmail’s filtering systems. However, such action would require a formal investigation and proof that any bias is systematic and not attributable to legitimate filtering criteria.

It’s also unclear how such an investigation would reconcile users’ rights to avoid spam with senders’ rights to reach inboxes. Ferguson’s interpretation of consumer harm appears to rest on the assumption that missed political emails constitute a denial of free speech or access to political discourse, which is something Google is likely to contest.

Google does not publicly disclose the exact algorithms or rule sets behind its spam detection system, citing security and abuse prevention concerns. Any forced transparency could have knock-on effects for email security and user privacy.

What This Means for Businesses and Email Platforms

This case raises broader questions for email platforms, regulators, and business senders, particularly in the UK, where GDPR and PECR (Privacy and Electronic Communications Regulations) place strict limits on unsolicited marketing.

If the US FTC sets a precedent that political fundraising emails cannot be filtered as spam without triggering regulatory scrutiny, it may embolden other organisations, including businesses, to claim similar protections. This could undermine the effectiveness of spam filters, frustrate end-users, and expose platforms to further regulatory pressure.

For UK businesses, this case highlights the fine balance between sender rights and consumer protection. Email campaigns must navigate complex consent rules and content standards, while email service providers must demonstrate that their filtering practices are fair, consistent, and user-driven.

Key Challenges and Questions Ahead

Ferguson’s letter exposes the change in regulatory posture toward Big Tech under Trump’s second term. However, legal and technical barriers remain. For example, successfully proving partisan intent behind essentially secret algorithmic filtering is notoriously difficult, especially when the same tools are used to combat phishing, scams, and malware.

Also, while Ferguson’s language is strong, i.e. warning that “Alphabet may be engaging in unfair or deceptive acts or practices”, it is not yet clear whether a full-scale investigation is underway or likely to be.

What Does This Mean For Your Business?

The deeper challenge now facing Google is how to respond without weakening the very protections that users expect from email filtering. If Gmail adjusts its filters in response to political pressure, it risks opening the door to wider claims of bias from other interest groups, including corporate marketers and advocacy organisations. This could reduce user trust in the platform’s ability to safeguard inboxes from unwanted or harmful content. At the same time, refusing to alter its approach may invite further regulatory scrutiny from a politically motivated FTC, especially given Ferguson’s stated aim of tackling what he sees as anti-conservative censorship by tech platforms.

For regulators, the situation is no less complex. Ferguson’s framing of email filtering as a potential violation of the FTC Act relies on defining political emails as essential consumer content. That may be a difficult case to make without clearer evidence of intent or unequal treatment that goes beyond what automated systems already do in response to user signals. Yet the fact that this issue has been raised so directly at such a senior level suggests it is unlikely to fade quickly.

For UK businesses, the implications are more practical than political. Any moves in the US to curb the ability of platforms to filter unsolicited messages could have downstream effects on email service standards, especially for multinational tech providers like Google. If filtering rules are softened or become more contested, businesses may see higher volumes of low-quality or irrelevant messages reaching customers, increasing the risk of consumer disengagement or even regulatory backlash under UK and EU privacy laws. It may also complicate how marketing platforms classify and process outbound email campaigns.

Google finds itself once again in the position of defending complex algorithmic processes against public accusations that are simple to make but hard to refute. Ferguson, meanwhile, has positioned the FTC as a key actor in the battle over perceived ideological bias online, bringing renewed pressure to bear on how tech firms balance neutrality, safety, and control.

For businesses and users alike, the way this unfolds could influence not just inbox filters, but broader expectations of platform fairness and responsibility.

Security Stop-Press: AI Chatbots Are Linking Users to Scam Sites

Chatbots powered by large language models (LLMs) are giving out fake or incorrect login URLs, exposing users to phishing risks, according to research from cybersecurity firm Netcraft.

In tests using GPT-4.1 models, including Perplexity and Microsoft Copilot, only 66 per cent of login links provided were correct. The rest pointed to inactive, unrelated, or unclaimed domains that scammers could exploit. In one case, Perplexity recommended a phishing site posing as Wells Fargo’s login page.

Smaller brands were more likely to be misrepresented, as they appear less in AI training data. Netcraft also found over 17,000 AI-generated phishing pages already targeting users.

To stay safe, businesses should avoid relying on AI for login links, train staff to recognise phishing attempts, and push for stronger safeguards from AI providers.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives