Meta And Google Found Liable In Landmark Social Media Addiction Case
A US jury has just found Meta Platforms and Google liable for harm linked to addictive platform design, marking a pivotal moment in how social media companies may be held accountable.
What Just Happened?
A Los Angeles jury has concluded that Meta and Google were responsible for harm suffered by a young woman who developed compulsive use of Meta-owned Instagram and Google’s YouTube from an early age.
In the case, the US-based plaintiff, now aged 20 and identified in court documents as “Kaley” or “KGM” (her full identity has not been publicly disclosed), said she began using YouTube at six and Instagram at nine, later experiencing anxiety, depression and body image issues. Jurors awarded $6m in damages, split between compensatory and punitive elements, and found that Instagram and YouTube had acted with what was described in court as malice, oppression or fraud.
Crucially, the jury determined that the platforms’ design was a substantial factor in causing harm, rather than focusing on the specific content viewed.
Why This Case Is Being Treated As A Milestone
What makes this case so noteworthy is that it is among the first of its kind to reach a full jury verdict, and it is widely seen as an early indicator of a much larger wave of litigation.
There are already more than a thousand similar claims progressing through US courts, involving families, schools and public authorities. Legal experts expect this ruling to influence how future cases are argued, how damages are assessed, and whether companies choose to settle rather than go to trial.
Some legal commentators have also framed this moment as a broader turning point for the technology sector, comparable to earlier cases in other industries where product design and long-term harm became central to accountability.
As one of the lawyers representing the plaintiff stated after the verdict, “no company is above accountability when it comes to our children,” reflecting a wider sentiment that the legal threshold for responsibility may now be changing.
The Shift From Content To Design
One of the most important aspects of the case is actually what it did not focus on. US law has long protected technology companies from liability for user-generated content, limiting legal exposure in many previous cases. Instead, this case examined how platforms are built.
This distinction could prove significant beyond this single case. Legal protections such as Section 230 in the US have historically shielded platforms from responsibility for content, but a growing focus on design may place aspects of those protections under increased scrutiny.
The plaintiff’s legal team argued that features such as infinite scrolling, autoplay videos and constant notifications were intentionally designed to maximise engagement and keep users returning. These features are now common across most digital platforms, and are often described as engagement tools.
The jury accepted that these design choices could create patterns of compulsive use, particularly among younger users. As one expert witness described during proceedings, the question at the centre of the case was effectively how platforms are designed to ensure “a child never puts the phone down,” framing the issue as one of engineering rather than behaviour.
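To make that design argument more concrete, the sketch below is a purely hypothetical illustration (our own, written in Python, not code from either company) of the kind of engagement loop described in court: a feed that quietly refills itself before the user can reach the end, so there is never a natural stopping point.

```python
# Hypothetical sketch of an "infinite scroll" engagement loop.
# Illustrative only; not taken from any platform's actual code.
from typing import List

PREFETCH_MARGIN = 5  # refill before the user can see the end of the feed


def recommend_items(user_id: str, start: int, count: int) -> List[str]:
    """Stand-in for a personalised recommendation service (assumed)."""
    return [f"item-{user_id}-{n}" for n in range(start, start + count)]


class Feed:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.items: List[str] = recommend_items(user_id, 0, 20)
        self.position = 0

    def scroll(self) -> str:
        """Return the next item, topping the feed up so it never runs out."""
        if len(self.items) - self.position <= PREFETCH_MARGIN:
            # The user never sees an end-of-feed state.
            self.items.extend(recommend_items(self.user_id, len(self.items), 20))
        item = self.items[self.position]
        self.position += 1
        return item


feed = Feed("demo-user")
for _ in range(100):  # the loop can continue indefinitely
    feed.scroll()
print(f"served {feed.position} items without ever reaching an end")
```

The point of contention in court was not the mechanics themselves but the intent behind them, i.e. whether features like this are deliberately tuned to maximise time spent, particularly by younger users.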
In Their Defence
Both Meta and Google have said they disagree with the verdict and plan to appeal.
Meta has argued that mental health is complex and cannot be attributed to a single factor, while also pointing to its policies restricting under-13s from using its platforms. During testimony, its leadership maintained that their products are intended to have a positive impact.
Google’s defence focused on positioning YouTube as a video platform rather than a traditional social network, and questioned whether the usage patterns described in the case met the threshold for addiction.
These arguments are likely to form the basis of ongoing appeals and future legal disputes.
A Wider Pattern Of Legal And Political Pressure
It’s worth noting here that this verdict follows closely behind another US ruling that found Meta liable in a separate case involving child safety and harmful content exposure.
Notably, other major platforms involved in similar litigation, including TikTok and Snap, chose to settle before trial, which may indicate the level of legal and financial risk companies now associate with these claims.
At the same time, governments are increasingly exploring regulatory action. In the UK, for example, proposals to restrict social media access for under-16s are under active consideration, while Australia has already introduced measures targeting youth access and platform design.
Political leaders, including Keir Starmer, have signalled that the current approach to social media regulation may not be sufficient. He recently stated that the status quo is “not good enough,” indicating that further intervention is likely.
Campaign groups and families involved in similar cases argue that responsibility is beginning to move away from individuals and towards the companies designing these platforms.
Why This Matters Beyond Social Media
For technology companies more broadly, this case highlights a growing legal focus on how digital products are designed, not just how they are used.
Courts are increasingly treating platform design as a series of deliberate choices rather than neutral features, meaning those decisions may carry legal and ethical consequences in the same way as other product design decisions.
Many business models rely on capturing attention and encouraging repeated engagement. Techniques that support this, such as personalised recommendations and continuous content feeds, are widely used across sectors including media, retail and software.
This also seems to highlight the tension between user wellbeing and commercial performance on social media platforms. Features that maximise engagement are often closely tied to advertising revenue and platform growth, which means any legal pressure to change them could have direct business implications.
The risk here is that these same techniques could now face greater scrutiny if they are seen to contribute to harm, particularly where younger or vulnerable users are involved.
This could lead to a reassessment of how engagement is measured and prioritised within digital services.
What Does This Mean For Your Business?
This ruling signals that digital design choices are becoming a matter of legal and commercial risk, not just user experience.
For Meta Platforms, Google, and other major platforms such as TikTok and Snap Inc., it raises the prospect of sustained legal exposure. This case is widely expected to influence hundreds of similar lawsuits, increasing the likelihood of further damages, settlements, and pressure to redesign core product features that drive engagement.
Businesses that operate platforms, apps or online services would now be wise to review how their products encourage user behaviour, particularly if they rely heavily on notifications, recommendations or continuous scrolling. Features that were once seen as standard may now require clearer justification, stronger safeguards, and potentially formal risk assessments, especially where younger users are involved.
There is also a broader reputational consideration here. Public expectations are changing, and organisations seen to prioritise engagement over user wellbeing may face increased scrutiny from customers, regulators and partners. For large platforms, this could translate into tighter regulation, limits on certain design practices, and closer oversight of how algorithms influence behaviour.
For companies using social media as a marketing channel, this case raises questions about long-term platform stability. Ongoing legal challenges and potential regulation could alter how these platforms operate, how audiences engage, and how data is used, particularly if engagement-driven features are restricted or redesigned.
For the largest platforms, this may ultimately lead to more fundamental changes in how products are designed, especially if courts or regulators begin to place limits on features that are closely linked to prolonged user engagement.
It seems now that accountability is expanding across the sector, and both platform providers and the businesses that rely on them will need to adapt to a landscape where design decisions, not just content, are subject to legal and regulatory scrutiny.
What Happens When Robotaxis Break Down?
A series of incidents involving Waymo’s autonomous vehicles has highlighted what happens when driverless systems fail in complex real-world situations and how much they still rely on human intervention to recover.
A Technology Built For The Road Meets The Unexpected
Waymo’s robotaxi service has expanded rapidly across multiple US cities, now delivering hundreds of thousands of paid rides each week. The company positions its system as a fully autonomous driving service, designed to operate without a human driver behind the wheel.
However, recent incidents show that when situations fall outside expected conditions, vehicles can struggle to respond. In several reported cases, Waymo vehicles have stopped, hesitated or behaved unpredictably during emergencies, requiring intervention from police officers or other first responders.
One widely reported example from August 2025 involved a highway fire in California, where traffic was redirected in an unusual way. A Waymo vehicle was unable to adapt to the change, eventually stopping and requiring a police officer to manually move it out of the way.
When Autonomous Vehicles Cannot Proceed
The most significant issue here seems to be what happens when the system cannot decide what to do next.
Autonomous vehicles are designed to prioritise safety, which often means stopping when uncertainty is too high. While this reduces the risk of collisions, it can create new problems, particularly in fast-moving or emergency situations where standing still is not a viable option.
In multiple incidents, it seems that autonomous vehicles have effectively become obstacles in live environments, blocking traffic or delaying access for emergency services until human intervention takes place.
Human Support As The Fallback
To manage these situations, Waymo relies on human support systems behind the scenes.
The company uses Remote Assistance teams who provide contextual guidance when the vehicle encounters something it cannot resolve. According to Waymo, these workers do not drive the vehicle. Instead, they support decision-making. As the company explains, Remote Assistance agents “provide advice and support to the [vehicle] but do not directly control, steer, or drive the vehicle.”
This model is designed to ensure that the automated system remains in control at all times. However, it also means that when the system reaches its limits, recovery can depend on how effectively this human support is integrated.
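A rough sketch of that division of responsibility is shown below. It is our own simplification (in Python), not Waymo’s architecture or interfaces: the onboard planner asks for advice only when its own confidence is low, treats whatever comes back as a suggestion, and falls back to a safe stop if nothing passes its own checks.

```python
# Simplified, event-driven remote-assistance fallback pattern.
# Hypothetical illustration only; not Waymo's actual system or APIs.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Plan:
    action: str        # e.g. "proceed", "reroute", "hold"
    confidence: float  # 0.0 - 1.0, from the onboard planner

CONFIDENCE_THRESHOLD = 0.7  # assumed value, for illustration only


def request_remote_advice(scene_summary: str) -> Optional[str]:
    """Stand-in for an event-driven request to a remote assistance team."""
    # In practice this would share sensor context with a human agent and
    # wait for a suggested action such as "reroute".
    return "reroute"


def onboard_safety_check(action: str) -> bool:
    """The vehicle's own validation; advice that fails this is ignored."""
    return action in {"reroute", "hold"}


def decide(plan: Plan, scene_summary: str) -> str:
    if plan.confidence >= CONFIDENCE_THRESHOLD:
        return plan.action                         # normal autonomous operation
    advice = request_remote_advice(scene_summary)  # advice, not remote control
    if advice and onboard_safety_check(advice):
        return advice
    return "hold"                                  # default: stop safely


print(decide(Plan(action="proceed", confidence=0.4), "unusual traffic diversion"))
```

The safe-stop default is what keeps this model cautious, and it is also what produces the stationary-vehicle incidents described below when no acceptable option is found.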
Where Things Can Go Wrong
Even with this support in place, errors can still occur. For example, in one case under investigation in Austin, Texas, in January this year, a Waymo vehicle approached a stopped school bus with warning lights active. The system requested input from a remote assistant, who, it is alleged, incorrectly confirmed it was safe to proceed. The vehicle then moved past the bus while children were boarding, an action that would normally be illegal for a human driver.
Other reported incidents show a different type of failure, where no safe path is identified at all. In these cases, vehicles have remained stationary until physically moved, sometimes by police or other first responders.
All this has led local officials to raise concerns that it places an unexpected burden on public services. For example, in San Francisco, emergency management leaders warned that responders were becoming a default support function for autonomous vehicles, something they described as unsustainable.
Scaling The Problem Alongside The Technology
It seems that these challenges are becoming more visible as Waymo scales its operations.
The company operates thousands of vehicles and is expanding into new cities, increasing the number of unpredictable environments its systems must handle. It has said that around 70 Remote Assistance agents support a fleet delivering more than 400,000 rides per week.
In its response to US lawmakers, Waymo reiterated that Remote Assistance is limited in scope, stating that agents “provide advice only when requested by the automated driving system on an event-driven basis” and do not take control of the vehicle.
As deployment grows, the question is not whether incidents will occur, but how frequently and how effectively they can be resolved without external intervention.
Balancing Autonomy With Accountability
Waymo maintains that its system is designed to prioritise safety, even if that means stopping when conditions are unclear. The vehicle can also ignore human input if it conflicts with its own assessment, reinforcing that it remains the primary decision maker.
The company also states that “Waymo’s service does not rely on remote drivers,” emphasising that human involvement is limited and controlled.
However, the pattern of real-world incidents seems to suggest that full autonomy still depends on multiple layers of human support. When those layers are not sufficient, responsibility can extend beyond the company itself to public infrastructure and emergency services.
What Does This Mean For Your Business?
For UK businesses, this highlights a critical aspect of automation that is often overlooked, namely what happens when systems fail or reach their limits.
Autonomous technologies are not just defined by how they perform under normal conditions, but by how they behave when they cannot proceed. Stopping safely is one outcome, but in operational environments, recovery is just as important.
It seems that human oversight, fallback processes and clear responsibility models remain essential. Businesses adopting automation will, therefore, need to plan not only for success scenarios, but also for failure scenarios, including how issues are resolved quickly and safely.
There is also a wider accountability question here. When automated systems interact with public environments, any gaps in ownership can become visible very quickly.
The Waymo case shows that the real test of autonomous systems is not how they perform when everything works, but how they respond when it doesn’t.
New Nail Polish That Works On Touchscreens
A new chemistry breakthrough could allow people to use long fingernails on touchscreens, addressing a long-standing usability issue with modern devices.
Why Fingernails Don’t Work On Touchscreens
Most modern smartphones and tablets use capacitive touchscreens, which rely on tiny electrical fields across the surface of the display. When a conductive object, such as a fingertip, disrupts that field, the device registers a touch.
Fingernails, however, are not conductive. This means taps made with the nail itself are not recognised, forcing users to adjust how they interact with devices. For people with long nails, this often results in awkward movements or reduced accuracy.
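As a simplified illustration of why this happens (our own sketch in Python, with assumed numbers, not anything taken from the research), a touch controller effectively compares the capacitance it measures at each sensing point against an untouched baseline and only registers a touch when the change is large enough to stand out from noise. A fingertip shifts that value noticeably; a bare nail barely moves it.

```python
# Simplified model of capacitive touch detection (illustrative only).
# Baseline and threshold values are assumed, not measured.

BASELINE_PF = 2.0    # untouched sensing-node capacitance, picofarads (assumed)
THRESHOLD_PF = 0.3   # minimum change the controller treats as a touch (assumed)


def registers_touch(measured_pf: float) -> bool:
    """A touch counts only if the capacitance change clears the threshold."""
    return abs(BASELINE_PF - measured_pf) > THRESHOLD_PF


print(registers_touch(1.5))   # fingertip: large change      -> True
print(registers_touch(1.95))  # bare fingernail: tiny change -> False
```

The coating described below aims to restore enough charge movement at the nail surface to push that change back over the threshold.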
The issue is actually more widespread than it first appears. It also affects individuals with heavily calloused skin, where reduced conductivity can lead to unreliable touch response.
A Chemistry Led Solution
The new approach has been developed by a student researcher working with a supervisor at Centenary College of Louisiana and presented at a meeting of the American Chemical Society.
The idea is simple in principle: create a nail coating that allows fingernails to interact with a touchscreen in the same way skin does.
As part of the research, the team experimented with more than 50 additives across multiple nail polish formulations. Their goal was to find a combination that could introduce just enough electrical interaction to register a touch, without compromising safety or appearance.
The motivation for the work came from a real-world need. As the researchers recalled, when they explored the problem and asked “would a touchscreen-compatible nail be useful?”, the response was immediate: “a resounding ‘yes, please!’”
How The Nail Polish Actually Works
Rather than making the nail directly conductive in the traditional sense, the formulation works through a different mechanism.
The researchers identified two key ingredients: taurine, commonly found in dietary supplements, and ethanolamine, a simple organic compound. When combined in a specific way, these ingredients enable a small movement of electrical charge across the nail surface.
This is enough to create a change in capacitance, allowing the touchscreen to detect contact.
According to the researchers, “our final, clear polish could be put over any manicure or even bare nails,” meaning it could integrate easily into existing cosmetic routines while also offering a functional benefit.
Why Previous Attempts Fell Short
Earlier efforts to solve this problem typically relied on adding conductive materials such as carbon nanotubes or metallic particles to nail polish.
While effective, these approaches introduced some practical challenges. For example, some materials raised safety concerns during manufacturing, while others limited the range of colours available, often resulting in dark or metallic finishes that were not commercially appealing.
The new approach avoids these issues by using more familiar chemical compounds and aiming for a clear or near-clear finish. This makes it more compatible with current consumer expectations in the beauty market.
Still Early Days, But Technically Promising
Despite the progress, the formulation is not yet ready for commercial use.
The researchers report that current versions require a relatively thick application and can feel slightly gritty. Performance is also limited, with the conductive effect lasting only a short period once applied, and the team is aiming to extend this to a more practical timeframe of several days.
There are also considerations around ingredient safety, particularly with ethanolamine, which can act as a skin irritant. The team is continuing to refine the formula to improve both durability and usability.
As the researchers themselves acknowledge, “we’re doing the hard work of finding things that don’t work, and eventually, if you do that long enough, you find something that does.”
What This Means Beyond Nail Polish
While this may appear to be a niche innovation, it highlights a broader trend in product development. Small usability challenges, particularly those affecting large numbers of people, are increasingly being addressed through interdisciplinary approaches that combine chemistry, materials science and user experience design.
There is also a clear commercial angle here. The involvement of cosmetic chemistry and early industry interest suggests potential applications within the beauty sector, particularly if the product can be refined to meet consumer expectations around appearance and durability.
More broadly, it could be said to demonstrate how relatively simple chemical solutions can improve how people interact with everyday technology, without requiring changes to the devices themselves.
What Does This Mean For Your Business?
For businesses, this development is a reminder that user experience challenges often sit at the intersection of technology and human behaviour.
Opportunities can emerge not just from building new digital tools, but from improving how people interact with the ones they already use. Even small friction points, when addressed effectively, can create meaningful differentiation.
It also highlights the value of early-stage research. Innovations like this may begin as academic projects, but can quickly attract commercial interest if they solve a genuine problem in a scalable way.
Organisations that stay aware of these developments, particularly in adjacent industries, may be more likely to spot practical innovations that improve usability, accessibility and customer experience.
OpenAI Shuts Down Sora App
OpenAI has closed its Sora video generation app just months after launch, highlighting a gap between technical capability and sustained user demand.
What Happened?
OpenAI has confirmed it is shutting down both the Sora consumer app and its associated web platform, bringing an end to its short-lived push into AI generated video as a social experience.
In a message shared on Twitter, the Sora team said: “We’re saying goodbye to Sora. To everyone who created with Sora, shared it, and built community around it: thank you.” The company added that “what you made with Sora mattered, and we know this news is disappointing,” signalling an orderly wind-down rather than a sudden withdrawal.
The decision also includes the end of OpenAI’s partnership with The Walt Disney Company, which had aimed to bring licensed characters into AI generated video.
A Strong Launch That Quickly Faded
Sora launched to significant attention, driven by its ability to generate realistic video and audio from simple text prompts. Early demonstrations suggested it could produce content that appeared close to professionally created footage.
Initial adoption reflected this interest. The app reached one million downloads faster than ChatGPT and climbed to the top of app store rankings within weeks of release.
However, that momentum didn’t last. Downloads declined sharply in the months following launch, with reports indicating a drop of more than 40 per cent by early 2026. User spending and engagement followed a similar pattern.
Despite millions of installs, the app generated relatively limited revenue, highlighting a disconnect between curiosity and long-term use.
Why Users Left
The central issue appears to have been retention rather than capability.
Sora offered impressive outputs, but struggled to establish itself as a daily habit. Its social feed, designed to showcase AI generated clips in a format similar to short-form video platforms, didn’t develop into a sustained engagement channel.
Concerns around misuse also played a role. For example, the platform faced criticism over deepfakes, non-consensual imagery and the use of copyrighted characters. These issues required tighter controls, which in turn reduced the flexibility that had initially driven interest.
At the same time, questions remained about the value of AI generated content without a clear human origin. Even where the visuals were convincing, the clips often lacked the context or meaning that drives engagement, and in some cases contributed to a wider sense of low-value, mass-produced content.
A Strategic Shift Away From Creative Tools
OpenAI has said the decision to close Sora will now allow it to focus on other areas, particularly robotics and more practical AI applications.
The company is increasingly directing resources towards systems that can perform real-world tasks, as well as agent-based tools capable of acting with a degree of autonomy.
This reflects a broader recalibration, and it seems that while AI generated media attracted significant attention, it has proven harder to turn into a reliable product category with strong user retention and monetisation.
The closure also suggests that OpenAI is prioritising areas where AI can deliver measurable utility, rather than relying on novelty or entertainment value alone.
The Wider AI Market
Sora’s lifecycle offers a useful case study in how AI products are evaluated in practice. While the technology itself was widely seen as impressive, that alone wasn’t enough to sustain the platform. Adoption actually depends on whether users find ongoing value, not just initial interest, and products that fail to become part of regular workflows or habits are, therefore, unlikely to justify continued investment at scale.
The decision also highlights the growing importance of trust, safety and intellectual property in AI driven platforms. These factors can directly affect both user behaviour and commercial viability.
At the same time, competition in the AI video space continues to increase, with other platforms exploring similar capabilities. This suggests the technology itself will persist, even if specific products do not.
What Does This Mean For Your Business?
For UK businesses, this development underlines the importance of focusing on practical outcomes when evaluating AI tools.
Impressive demonstrations can generate interest, but long-term value depends on whether a solution improves productivity, reduces cost or enhances customer experience in a measurable way.
It also reinforces the need to consider governance and risk. Issues such as content ownership, misuse and regulatory compliance are likely to shape how AI tools can be deployed in real-world settings.
The fate of Sora is also a reminder that not every high-profile AI launch will translate into a successful product. Organisations that assess new technologies based on sustained usefulness, rather than initial hype, are more likely to make sound investment decisions as the AI landscape continues to evolve.
Company Check: Google Launches AI Dark Web Monitoring Tool
Google has introduced a Gemini-powered dark web intelligence service designed to help organisations identify real cyber threats faster by filtering vast volumes of online criminal activity into relevant, actionable insights.
What’s Been The Problem With Dark Web Monitoring?
Security teams have long relied on dark web monitoring tools to detect leaked data, stolen credentials and early signs of attack activity. These tools typically scan forums and marketplaces using keywords linked to a company’s name, domains or assets.
The problem is not a lack of data, but the opposite. Most tools generate large volumes of alerts, many of which are irrelevant or duplicated, creating a high level of noise that slows down response times.
Google has highlighted this issue directly, noting that “most threat intelligence teams have plenty of data, as they’re inundated with thousands of false positives that can all too easily obscure the threats that matter most.”
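To see why, consider a deliberately naive sketch of keyword-based monitoring (our own illustration in Python, not any vendor’s tooling). Matching posts against a static watchlist flags every mention of a brand or domain, regardless of whether it represents a genuine threat, and the triage burden lands on the analyst.

```python
# Hypothetical sketch of keyword-based dark web monitoring.
# Illustrative only; real tools are more sophisticated, but the noise
# problem is the same: every string match becomes an alert.

WATCHLIST = ["acme corp", "acme.example.com"]  # made-up company identifiers

posts = [
    "selling fresh combo list, includes acme.example.com logins",  # relevant
    "anyone know if acme corp is hiring?",                         # noise
    "acme corp mentioned in an old 2019 dump, nothing new",        # noise
]

alerts = [p for p in posts if any(k in p.lower() for k in WATCHLIST)]
print(f"{len(alerts)} alerts raised from {len(posts)} posts")  # 3 from 3
```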
How Gemini Changes The Approach
The new capability, delivered through Google Threat Intelligence, Google’s enterprise platform for tracking and analysing cyber threats, uses Gemini to analyse millions of dark web events each day and identify those that are relevant to a specific organisation.
Instead of relying on static keywords, the system builds a dynamic profile of a business, including its operations, structure and digital footprint. This allows it to detect threats even when attackers avoid naming a target directly.
Google explained that the system “uses Gemini to autonomously build an organisational profile that is specific to your business operations and mission,” enabling it to adapt as the organisation changes over time.
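Google has not published the implementation details, but conceptually the shift is from matching strings to scoring each event against a living profile of the organisation. The toy sketch below (entirely hypothetical, written in Python, and not Google Threat Intelligence code or its API) illustrates the idea; a real system would use an LLM to infer these connections rather than simple substring checks, and would explain why an event matters.

```python
# Conceptual sketch of profile-based relevance scoring for dark web events.
# Hypothetical illustration only; not Google's code, model or API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class OrgProfile:
    """A dynamic picture of the business, updated as the business changes."""
    sectors: List[str] = field(default_factory=lambda: ["retail", "payments"])
    suppliers: List[str] = field(default_factory=lambda: ["example logistics"])
    technologies: List[str] = field(default_factory=lambda: ["vpn appliance x"])


def relevance_score(event_text: str, profile: OrgProfile) -> float:
    """Score how closely an event relates to the organisation, even when
    the company itself is never named."""
    text = event_text.lower()
    signals = profile.sectors + profile.suppliers + profile.technologies
    hits = sum(1 for signal in signals if signal in text)
    return hits / len(signals)


profile = OrgProfile()
event = "exploit for vpn appliance x being traded, buyers targeting payments firms"
print(f"relevance: {relevance_score(event, profile):.2f}")  # flagged without naming the company
```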
From Alerts To Context And Explanation
A key difference in this approach is the shift from raw alerts to what Google describes as “reasoned answers.”
For example, rather than simply flagging suspicious activity, the system explains why a particular event matters and how it connects to the organisation. This is designed to help security teams make faster, more informed decisions without needing to manually investigate every signal.
Internal testing suggests the platform can analyse millions of external events daily with up to 98 per cent accuracy, significantly reducing false positives compared to traditional tools.
Responding To An AI Driven Threat Landscape
The launch reflects a broader change in cybersecurity. Attackers are increasingly using AI tools to research targets, identify vulnerabilities and craft more convincing phishing campaigns.
This creates a situation where defensive tools must operate at similar speed and scale. Google has positioned its new service as a way to give security teams an advantage in what it describes as an increasingly automated threat environment.
The company said the goal is to “translate vast dark web data into precise, relevant insights delivered at the speed of AI,” helping organisations act earlier in the attack lifecycle.
A Push Towards Automated Security
The dark web monitoring service is one element of a wider strategy focused on what Google calls agent-driven security operations.
Alongside this launch, the company is introducing AI agents that can investigate alerts, gather evidence and provide verdicts within security workflows. This reflects a move away from manual analysis towards more automated, intelligence-led defence.
At the same time, Google has stepped back from consumer-focused dark web tools, instead prioritising enterprise systems that provide clearer and more actionable outputs.
What Does This Mean For Your Business?
For UK businesses, this signals a change in how cyber threats are detected and prioritised.
Traditional monitoring approaches that rely on keywords and manual analysis are likely to become less effective as attackers adapt and avoid obvious identifiers. Systems that can understand context and connect indirect signals will become increasingly important.
There is also a clear operational benefit. Reducing false positives and focusing on relevant threats can help security teams respond faster and use resources more efficiently, particularly for organisations without large in-house teams.
However, reliance on AI-driven intelligence also introduces new considerations around trust, oversight and data handling. Businesses will need to ensure they understand how these systems make decisions and how sensitive information is used within them.
It seems that cybersecurity is increasingly moving towards automated, context-aware systems that operate at scale, and organisations that adopt these capabilities early will be better positioned to keep pace with increasingly sophisticated threats.
Security Stop-Press: Companies House Glitch Raises Data Exposure Concerns
A technical issue on the UK’s company register may have exposed personal data linked to millions of businesses.
The problem affected Companies House, which holds records for over five million UK firms. A system fault reportedly allowed certain details, such as names and contact information, to be accessed or surfaced in unintended ways.
Companies House said it has fixed the issue and is investigating, though the full scale of exposure remains unclear. The incident adds to ongoing concerns about how publicly available company data can be misused, particularly when combined with other sources.
For businesses, the key step is to review what information is publicly listed, ensure it is accurate, and remain cautious of unsolicited contact referencing company data. Monitoring for unusual activity and strengthening verification processes can help reduce risk.
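As a practical starting point, the sketch below (our own example, assuming the publicly documented Companies House REST API and a valid API key sent as the Basic-auth username; check the current developer documentation before relying on it) pulls a company’s own public record so the listed details can be reviewed for accuracy and unnecessary exposure.

```python
# Sketch: review what Companies House publicly lists about your own company.
# Assumes the documented Companies House REST API; endpoints and field names
# should be verified against the current developer documentation.
import requests

API_KEY = "your-companies-house-api-key"  # placeholder
COMPANY_NUMBER = "01234567"               # placeholder company number
BASE = "https://api.company-information.service.gov.uk"


def get(path: str) -> dict:
    resp = requests.get(f"{BASE}{path}", auth=(API_KEY, ""), timeout=10)
    resp.raise_for_status()
    return resp.json()


profile = get(f"/company/{COMPANY_NUMBER}")
officers = get(f"/company/{COMPANY_NUMBER}/officers")

print("Registered name:", profile.get("company_name"))
print("Registered office:", profile.get("registered_office_address"))
for officer in officers.get("items", []):
    # Check that names, roles and service addresses are accurate and expose
    # no more personal detail than necessary.
    print(officer.get("name"), "-", officer.get("officer_role"))
```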