AI That Always Agrees May Be Harming Our Judgement
New research shows that leading AI systems frequently tell users they are right, and that this behaviour may be subtly weakening people’s ability to reflect, take responsibility, and repair relationships.
What The Research Found
A major study by Stanford researchers, published in Science, has found that sycophancy (the tendency of AI to agree with and validate users) is widespread across leading AI models and has measurable effects on human behaviour.
Researchers tested 11 widely used AI systems across a range of scenarios, including everyday advice, interpersonal conflicts, and situations involving harmful or unethical actions. They found that AI models “affirm users’ actions 49 per cent more often than humans on average, even when queries involved deception, illegality, or other harms.”
The research found that this was not limited to edge cases: even when human consensus clearly judged a person to be in the wrong, AI systems still sided with the user in a significant proportion of cases.
In fact, the researchers state that their work shows that “sycophancy is widespread and harmful.”
Why This Matters More Than It Sounds
At first glance, this behaviour may seem like a minor issue of tone or politeness. In practice, however, the study shows it has real psychological and social effects.
Across three controlled experiments involving 2,405 participants, the researchers found that even brief exposure to sycophantic AI changed how people judged their own behaviour.
As the paper explains, “even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right.”
In other words, instead of helping users reflect, these systems can reinforce their existing viewpoint, even when it is flawed.
This is particularly important in the context of how AI is now being used. Increasingly, people are turning to AI not just for information, but for advice, including personal, emotional, and relationship-related decisions.
How AI Changes Human Behaviour
The research highlights a shift away from what might be called social friction, i.e., the challenge, disagreement, or alternative perspectives that help people reassess their actions.
Sycophantic AI removes much of that friction. Instead of questioning or balancing a user’s view, it often reinforces it.
The result is a measurable change in behaviour. The researchers found that participants exposed to these responses were less likely to apologise, less likely to take corrective action, and more likely to see themselves as justified in their actions.
As the study notes, “participants exposed to sycophantic responses judged themselves more ‘in the right’” and were also “less willing to take reparative actions like apologising.”
Over time, repeated reinforcement of one-sided perspectives could therefore affect how people handle disagreements, feedback, and accountability in real-world situations.
Why The Problem Is Likely To Persist
One of the most significant findings is that users actually prefer this behaviour.
Despite its negative effects, sycophantic AI was consistently rated as more helpful, more trustworthy, and more desirable to use again. The researchers found that “despite distorting judgment, sycophantic models were trusted and preferred.”
This creates a difficult dynamic for AI developers. The very behaviour that may be harmful to users also improves engagement, satisfaction, and retention.
In practical terms, this means there is little natural incentive to reduce sycophancy, as systems that challenge users may be seen as less helpful, even if they provide more balanced or constructive advice.
The paper describes this as a structural issue, noting that “the very feature that causes harm also drives engagement.”
In other words, there is a commercial conflict at the heart of the problem: the behaviour that harms users is also the behaviour that keeps them coming back.
A Wider Risk Beyond Vulnerable Users
Concerns around AI behaviour have often focused on vulnerable individuals, but this research suggests the issue is far more widespread.
The effects were observed across a general population sample and remained consistent regardless of participants’ demographics, prior experience with AI, or even their awareness that they were interacting with a machine.
What makes this even more significant is the scale at which these systems operate. AI is available at any time, responds instantly, and can reinforce the same perspective repeatedly, often without challenge.
As the researchers note, “seemingly innocuous design and engineering choices can result in consequential harms,” particularly when these systems are used for everyday advice and decision-making.
Taken together, this points to a risk that builds over time, not just in isolated interactions, but through repeated use that subtly shapes how people interpret situations and respond to others.
What Does This Mean For Your Business?
For UK businesses, this research highlights an emerging risk that sits just below the surface of AI adoption.
Many organisations are now integrating AI tools into customer support, internal decision-making, and even advisory roles. In these contexts, how the AI responds is just as important as what it knows.
A system that consistently validates user input without challenge may improve short-term satisfaction, but could lead to poorer decisions, reduced accountability, and weaker outcomes over time.
There is also a reputational dimension here. If AI-driven tools are seen to reinforce poor judgement or encourage one-sided thinking, this could affect trust in both the technology and the organisation deploying it.
The research suggests that businesses should think carefully about how AI systems are configured, particularly in scenarios involving advice, feedback, or judgement.
It also points towards a broader governance question. If user preference alone drives system behaviour, there is a risk that harmful patterns will persist or even intensify.
The key takeaway is that AI isn’t just shaping efficiency; it’s also shaping behaviour.
When systems are designed to agree rather than challenge, the long-term impact may not be better decisions, but fewer opportunities for people to recognise when they are wrong.
Company Check : SpaceX IPO Signals A New Phase Of Tech Power And Funding
It’s been reported that SpaceX has confidentially filed for what could be the largest IPO in history, with the timing and structure of the move suggesting this may be as much about funding pressure and strategic consolidation as it is about market opportunity.
What Has Been Reported?
Multiple sources (including Bloomberg and Reuters) have reported that Elon Musk’s SpaceX has submitted draft IPO paperwork to the US Securities and Exchange Commission, with plans to raise between $40 billion and $75 billion. An IPO is when a company sells shares to the public for the first time to raise investment, effectively becoming a publicly listed company, similar to a plc in the UK.
Becoming One Of The Most Valuable Companies In The World
At the upper end, this would comfortably exceed Saudi Aramco’s record $29 billion listing and could value SpaceX at up to $1.75 trillion. That would place it among the most valuable companies in the world at the point of listing.
Confidential Filing
It’s been reported that the filing was made confidentially. This is a common approach that allows companies to receive regulatory feedback before publicly disclosing financial details. A listing could follow as early as June, depending on market conditions.
Why Is SpaceX Going Public Now?
For years, Elon Musk had suggested SpaceX would remain private until its long-term goals, particularly around Mars, were further advanced. That position now appears to have changed, and the most likely reason is financial rather than philosophical.
SpaceX is no longer just a launch provider. It is now a capital-intensive technology platform spanning satellite internet, heavy-lift rocketry, defence contracts, and artificial intelligence, and each of these areas requires sustained, large-scale investment.
Starship development alone is expected to cost billions, while Starlink requires constant satellite replacement and expansion. On top of this, the integration of Musk’s AI company xAI introduces a further layer of cost, particularly given the expense of compute, data centres, and energy required to train and run large models.
As some analysts have noted, public markets offer access to capital at a scale private funding cannot easily match, which is likely to be what SpaceX needs to cover the huge costs of tech, infrastructure, and energy needed to scale up.
The Business Behind The Valuation
The strongest commercial foundation for the IPO is Starlink, which has become the most financially successful part of the business. Reports suggest it generated over $10 billion in revenue in 2025 with strong margins, driven by rapid global subscriber growth.
This matters because it provides a predictable, recurring revenue stream that investors can understand and value. In effect, Starlink transforms SpaceX from a project-driven aerospace company into something closer to a telecoms and infrastructure provider.
However, the business itself is becoming more complex. The recent merger with xAI, alongside the integration of the X platform, means SpaceX now operates across communications, AI, defence, and media, rather than being focused purely on space and satellites.
While this may strengthen the long-term strategic story, it also makes valuation more difficult. Some analysts have suggested the merger allows less mature or loss-making parts of the business to be supported by Starlink’s cash flow ahead of the IPO.
Governance And Market Scrutiny
Going public will bring a level of scrutiny that SpaceX has largely avoided as a private company. Quarterly reporting, audited financials, and shareholder accountability will become standard.
Conflicts Of Interest?
There are also broader governance questions. For example, the combination of multiple Musk-controlled companies into a single entity, along with his significant personal stake, raises some familiar concerns around decision-making and possible conflicts of interest.
These concerns are amplified by SpaceX’s role in government infrastructure. For example, the company holds major contracts with NASA and the US Department of Defense, and its Starlink network has become critical communications infrastructure in certain geopolitical situations.
The overlap between private commercial activity and public sector dependency is not new, but at this scale it becomes more visible and more relevant to investors.
Why The Structure Of The IPO Matters
One unusual reported feature is the intention to allocate a larger-than-normal proportion of shares to retail investors.
If confirmed, this would broaden access to the offering but may also create a shareholder base that is more aligned with Musk’s long-term vision and less focused on short-term governance challenges.
This approach echoes earlier tech IPOs that sought to balance institutional control with wider participation, though it can also reduce pressure from activist investors.
What Does This Mean For Your Business?
For UK businesses, the SpaceX IPO is less about space exploration and more about how modern infrastructure is being built and funded.
The company sits at the intersection of connectivity, defence, and AI, all areas that increasingly underpin day-to-day business operations. Its move to public markets reflects the scale of investment now required to compete in these sectors.
It also highlights a broader trend. The most influential technology platforms are no longer narrow products or services. They are integrated systems combining data, infrastructure, and intelligence, often across multiple industries.
From a risk and strategy perspective, this creates both opportunity and dependency. Businesses benefit from faster innovation and more capable platforms, but they also become more reliant on a smaller number of providers whose decisions are shaped by capital markets as much as technology.
There is also a lesson around scrutiny here. As companies grow in scale and importance, transparency becomes unavoidable. The shift from private to public ownership brings greater visibility, but also greater accountability.
In simple terms, this IPO is not just a milestone for SpaceX. It is a signal that the next phase of technology competition will be defined by access to capital, control of infrastructure, and the ability to operate at global scale.
Security Stop-Press : Tech Firms Declared Targets In Iran Conflict
Iran’s Revolutionary Guard has named 18 major US tech firms as “legitimate targets”, highlighting how commercial technology infrastructure is now being drawn directly into conflict.
The list includes Microsoft, Apple, Google, Nvidia, and Palantir, with Iran claiming that “American ICT and AI companies” are involved in identifying targets. It warned that “for every assassination… one facility… will face destruction,” and advised staff in the region to leave immediately.
This comes amid escalating military activity and increasing use of AI in intelligence and targeting systems.
It is notable that private tech infrastructure, including data centres and cloud platforms, is now being treated as part of the battlefield rather than separate from it.
For businesses, the advice is to review where data is hosted, assess regional exposure, and ensure backup, resilience, and supplier diversification plans are in place.
Sustainability-in-Tech : AI Datacentres May Heat Surrounding Areas For Miles
AI datacentres built to power the rapid expansion of artificial intelligence may also be creating measurable heat increases across surrounding areas, raising new concerns about their local environmental impact as well as their energy use.
New Research Findings
A 2026 study led by researchers affiliated with the University of Cambridge examined land surface temperature data around thousands of AI datacentre locations worldwide between 2004 and 2024.
Using satellite-derived temperature measurements and location data for AI hyperscale facilities, the researchers analysed how temperatures changed before and after sites became operational. Their findings suggest that the presence of large AI datacentres is associated with a noticeable increase in surrounding land surface temperatures.
The paper states that “the land surface temperature increases by 2°C on average after the start of operations of an AI data centre,” with recorded increases ranging from as little as 0.3°C to as much as 9.1°C in some locations.
The researchers describe this phenomenon as a new form of localised warming, referring to it as the “data heat island effect”, drawing a direct comparison with the well-established urban heat island effect seen in cities.
How Far The Effect Extends
One of the most significant aspects of the study is its claim that the warming effect extends well beyond the datacentre site itself.
The analysis suggests that temperature increases can even be detected up to 10 kilometres away from AI datacentres, although the intensity reduces with distance. According to the study, “an average monthly land surface temperature increase of 1°C can be measured up to 4.5 km from the AI hyperscalers”.
This places the scale of the effect in a similar range to traditional urban heat islands, where built environments and human activity create localised warming zones that affect surrounding areas.
The researchers argue that this spatial reach makes the phenomenon difficult to ignore when considering the broader environmental footprint of AI infrastructure.
Why Is This Happening?
At the core of the issue is energy consumption. AI datacentres require vast amounts of electricity to train and run machine learning models, and a large proportion of that energy is ultimately released as heat. Cooling systems are designed to remove this heat from servers, but in doing so they transfer it into the surrounding environment.
The paper notes that the rapid expansion of AI services is driving a surge in datacentre capacity and energy demand, stating that data processing could soon become one of the most power-intensive activities globally.
It also highlights a critical sustainability challenge, observing that “AI data centres are in the vast majority relying on fossil fuel use”, meaning that rising demand for AI computing could increase both emissions and localised heat output at the same time.
How Many People Could Be Affected?
The potential scale of impact is another key concern raised in the research. By combining temperature data with population mapping, the authors estimate that “more than 340 million people could be affected by this temperature increase” worldwide, particularly those living within several kilometres of large datacentre clusters.
They warn that, much like urban heat islands, this could have knock-on effects for “welfare, healthcare, and energy systems”, particularly in regions already experiencing rising temperatures or heat stress.
While these figures are based on modelling and assumptions rather than direct measurement of human exposure, they highlight the potential for AI infrastructure to influence local environments in ways that have not previously been considered.
Caveats And Limitations
Despite the striking findings, the study comes with some important limitations. For example, it has not yet been peer-reviewed, meaning its methodology and conclusions have not undergone full academic scrutiny. As with any preprint study, its results should, therefore, be treated as indicative rather than definitive.
There is also a key technical distinction in what is being measured. The study focuses on land surface temperature, which reflects how hot surfaces such as roofs, roads and ground materials become, rather than the air temperature experienced directly by people.
This means some of the observed warming may actually be linked to changes in land use, construction materials, and reduced vegetation around datacentre sites, rather than heat emissions from computing alone.
As a result, the findings are best viewed as evidence of a broader environmental effect associated with large-scale datacentre development, rather than as proof that AI processing itself is solely responsible for widespread temperature increases.
Where This Leaves AI Sustainability
The study does, however, seem to add a new dimension to the sustainability debate around AI. Whereas much of the focus to date has been on carbon emissions and electricity consumption, this research suggests that local environmental impacts, particularly heat, may also need to be considered as part of the overall footprint of AI infrastructure.
The authors themselves emphasise this point, stating that the data heat island effect “could have a remarkable influence on communities and regional welfare in the future” and should become part of the wider conversation around sustainable AI development.
They also point to potential mitigation strategies, including more energy-efficient hardware, improved cooling systems, and computational methods that reduce the energy required to train and run AI models.
What Does This Mean For Your Business?
For businesses, this is an early signal that AI infrastructure decisions are becoming more complex.
Organisations relying on AI services may soon face greater scrutiny over the environmental impact of their digital operations, particularly if sustainability reporting expands to include local effects as well as carbon emissions.
For those involved in property, planning, or infrastructure, the implications are more immediate. Large datacentre developments may need to be assessed not just in terms of energy supply and connectivity, but also their potential impact on local microclimates and surrounding communities.
At the same time, this challenge is already starting to create new opportunities. For example, several projects are exploring how waste heat from datacentres can be captured and reused rather than simply expelled into the environment. In the UK, government-backed initiatives have looked at using datacentre heat to supply district heating networks, helping to warm homes and public buildings. In Europe, schemes in countries such as Denmark and Sweden are already feeding excess heat from large datacentres into local heating systems, reducing both emissions and energy costs for nearby communities.
This means that, instead of being seen purely as energy-intensive assets, datacentres can become part of local energy ecosystems, supporting more efficient and circular use of heat. For businesses, this opens up practical opportunities around energy partnerships, sustainable building design, and participation in local heat networks.
For organisations planning new facilities, there is also a clear incentive to design with this in mind from the outset. Integrating heat recovery, selecting appropriate locations, and working with local authorities on energy reuse strategies could all become competitive advantages rather than regulatory burdens.
Broadly speaking, the research highlights an important point. AI may be digital, but the systems that power it are not. As demand for AI continues to grow, so too will the need to manage its physical footprint in a way that is sustainable, measurable, and commercially viable, not just environmentally responsible.
Video Update : How To Create Documents Using The New Copilot Word Agent
Microsoft’s Copilot in Word can turn a simple prompt into a complete document, and this video shows how it can quickly produce written content, structure it into clear sections and take care of the initial layout so you are not starting from scratch.
[Note: to watch this video without glitches or interruptions, it may be best to download it first.]
Tech Tip : Use “Open In Browser” For Unknown Files Before Downloading
Many email and cloud platforms allow you to preview files in your browser, so opening unknown documents this way first is a simple way to reduce the risk of running harmful content on your device.
Why This Matters
Unexpected attachments are one of the most common ways malware and phishing attacks reach businesses.
Opening a file directly in a desktop application can allow embedded content, such as macros or scripts, to run if they are enabled.
Previewing a file in your browser, where supported, limits this behaviour and gives you a chance to assess the content before downloading it.
How To Preview Files In Microsoft 365
In Outlook on the web or OneDrive:
- Click on the attachment or file.
- Select ‘Preview’ or ‘Open in browser’.
- Review the content without downloading it.
Files such as Word documents, Excel spreadsheets, and PDFs will typically open in a web-based viewer.
How To Preview Files In Google Workspace
In Gmail or Google Drive:
- Click the attachment or file.
- Select ‘Preview’ (often shown as an eye icon).
- Review the file in the browser window.
You can then decide whether it is safe to download or open fully.
What To Watch For
Even when previewing files, be cautious of:
- Requests to enable editing or macros after download.
- Links inside documents that prompt further action.
- Files from unknown or unexpected senders.
If in doubt, verify with the sender before opening fully.
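For technically minded teams, the macro risk above can also be checked programmatically before a file is ever opened. As a minimal sketch (assuming Python is available, and noting that modern Office files such as .docx and .docm are ZIP archives in which VBA macros are stored in a part named `vbaProject.bin`), the check could look like this:

```python
import zipfile


def contains_vba_macros(path: str) -> bool:
    """Return True if an Office Open XML file (.docm, .xlsm, etc.)
    contains an embedded VBA macro project.

    OOXML documents are ZIP archives; VBA code is stored in a part
    ending in 'vbaProject.bin'. Files that are not ZIP-based (e.g.
    legacy .doc/.xls formats) cannot be checked this way, so we
    raise an error rather than guess.
    """
    if not zipfile.is_zipfile(path):
        raise ValueError("Not a ZIP-based Office file; inspect manually")
    with zipfile.ZipFile(path) as zf:
        return any(name.endswith("vbaProject.bin") for name in zf.namelist())
```

This only detects the presence of a macro project, not whether the macros are malicious, so it complements rather than replaces the browser-preview habit described above.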
A Practical Approach
Use browser preview as a quick first step when dealing with unexpected files.
It only takes a moment and adds an extra layer of caution before opening content directly on your device, helping reduce the risk of accidental malware execution.