Company Check : WeTransfer Under Fire Over New Data Terms

Dutch file-sharing platform WeTransfer has sparked uproar after quietly adding language to its terms of service suggesting it could use customer files to train AI models, then swiftly removing the clause following backlash.

What Users Spotted and Why It Sparked Alarm

The controversy erupted in mid-July when eagle-eyed WeTransfer users, including high-profile creatives, flagged an update to the company’s terms of service set to take effect on 8 August 2025. In particular, Section 6.3 introduced wording that granted WeTransfer a “perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable licence” to use uploaded files for operating and developing the service, including, crucially, to “improve performance of machine learning models that enhance our content moderation process.”

To many, that appeared to signal a quiet expansion of rights that could allow WeTransfer to use (or even monetise) user-uploaded content for artificial intelligence (AI) training.

Among the concerned voices was UK children’s author and illustrator Sarah McIntyre, who took to X (formerly Twitter) to say: “I pay you to shift my big artwork files. I DON’T pay you to have the right to use them to train AI or print, sell and distribute my artwork and set yourself up as a commercial rival to me.”

It seems that such concerns weren’t unfounded. The clause appeared to echo patterns seen elsewhere in the tech world, where companies including Zoom, Adobe, Slack and Dropbox have faced recent backlash over vague or overly broad licensing updates connected to AI development. As AI tools become more powerful and accessible, the question of whose data fuels them, and with what consent, has become a flashpoint in digital rights and trust.

Why This Matters for Business Users

For many creatives and businesses, WeTransfer has long positioned itself as a privacy-respecting, user-friendly alternative to more data-hungry services. Its clean interface, strong brand identity, and explicit support for the creative industries made it especially popular with freelancers, studios, and design teams.

However, as a result of this latest incident, that trust now appears to be under scrutiny. If the AI clause had remained, businesses could have faced the uncomfortable possibility that internal documents, pitch decks, drafts, artwork, or sensitive visual assets might be used, not just to train algorithms, but potentially to inform systems well beyond the original upload. Even if restricted to content moderation purposes, the lack of clarity raised red flags.

For example, a design agency transferring client work via WeTransfer might wonder whether its bespoke assets could end up being parsed for machine learning, however indirectly. A photographer might fear her original image files could be used to train image recognition or generation tools. And a marketing firm sharing early brand materials might question what “derivative works” could technically include.

Although WeTransfer insists that no such usage has occurred, the lack of clear technical limitations in the original clause left too much room for doubt.

WeTransfer’s Response

Within days of the backlash, WeTransfer issued a formal press release clarifying its position. It insisted that the controversial clause was a misstep and that the company does “not use user content to train AI models, nor do we sell or share files with third parties.” The company acknowledged that AI had been under consideration “to improve content moderation,” but confirmed that “such a feature hasn’t been built or deployed in practice.”

The statement added: “We’ve since updated the terms further to make them easier to understand. We’ve also removed the mention of machine learning, as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension.”

Clause Now Dropped

Following the uproar, it seems that, in an updated version of Section 6.3, the AI-related clause was dropped entirely. For example, the new text grants WeTransfer a royalty-free licence to use content strictly for “operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.” Importantly, it reinforces that users retain ownership and intellectual property rights over their content, and that processing complies with GDPR and other privacy regulations.

What’s Changed and What Hasn’t?

From a legal perspective, WeTransfer’s licensing terms weren’t entirely new. Earlier terms already included broad usage rights necessary to operate the service, such as the ability to scan, index, and reproduce files. However, the new inclusion of AI-specific language, especially amid public concern about AI and data usage, introduced a new level of perceived risk.

As the company explained: “The language regarding licensing didn’t actually change in substance compared to the previous Terms of Service… The change in wording was meant to simplify the terms while ensuring our customers can enjoy WeTransfer’s features and services as they were built to be used.”

Nonetheless, perception matters. For example, the way the AI clause was introduced, without technical limitations, public explanation, or opt-out options, appeared to undermine confidence at a time when many businesses are increasingly sensitive to data governance.

Broader Industry Fallout and Lessons for Tech Providers

WeTransfer is far from alone in facing scrutiny over AI terms. For example, back in 2023, Zoom had to walk back similar policy updates after suggesting it could use customer audio and video to train its AI models. Dropbox, Slack, and Adobe have all been forced to issue clarifications in recent months after terms of service changes sparked similar fears.

For regulators, the episode highlights ongoing gaps in user protection. In the UK, the ICO (Information Commissioner’s Office) has warned companies that AI development must respect explicit consent, clarity of purpose, and data minimisation, all of which could come under strain when licensing terms are broadly written.

For businesses, the incident is a reminder to read the fine print, especially as more cloud services evolve their models to incorporate generative AI, content filtering, and user analytics.

As an example, a marketing team using file-sharing services or cloud-based creative tools should now routinely assess licensing clauses for AI-related language, even if those features are not currently in use. Procurement teams may also need to establish red lines around AI usage to safeguard proprietary material.
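
As a purely illustrative sketch of the kind of first-pass check described above, a short script can flag sentences in terms-of-service text that contain AI-related licensing language. The keyword list and sample clauses here are hypothetical, and such a filter is a prompt for legal review, not a substitute for it.

```python
import re

# Illustrative only: a hypothetical keyword list a procurement team might
# use as a first-pass filter when scanning terms-of-service text for
# AI-related licensing language.
AI_TERMS = [
    r"machine learning", r"train(?:ing)?\s+(?:ai|models?)",
    r"artificial intelligence", r"derivative works?",
    r"sub-?licens\w+", r"perpetual", r"royalty-free",
]

def flag_ai_clauses(tos_text: str) -> list[str]:
    """Return each sentence containing a potentially AI-related term."""
    sentences = re.split(r"(?<=[.;])\s+", tos_text)
    pattern = re.compile("|".join(AI_TERMS), re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

# Hypothetical sample text, loosely echoing the wording at issue.
sample = ("You grant us a perpetual, royalty-free licence to use content. "
          "We may improve performance of machine learning models. "
          "You retain ownership of your files.")
for clause in flag_ai_clauses(sample):
    print("REVIEW:", clause)
```

Anything the filter flags still needs a human (ideally legal) reading, since broad licensing language can be entirely benign in context.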

Trust Takes Time to Build, and Moments to Erode

Despite WeTransfer’s efforts to clarify and course-correct, replies on social media appear to remain largely sceptical. Some users have suggested the company had been testing the waters for broader AI permissions, only to retreat when the backlash hit. Others have expressed a desire to move to alternatives, such as Swiss-based Tresorit or Proton Drive, that offer end-to-end encryption and stronger privacy guarantees.

While WeTransfer may weather the storm, the event highlights a wider issue for the tech industry, i.e., transparency around AI is no longer optional. As public awareness of AI training practices grows, even small wording changes can trigger major reputational fallout. And for companies built on the trust of creative professionals, that risk is especially acute.

What Does This Mean For Your Business?

For UK businesses and creative professionals in particular, this episode serves as a clear warning that assumptions about how cloud-based platforms handle data can no longer be taken at face value. The practical risk may have been limited in this instance, but the reputational impact is real, and the consequences of poor communication are hard to reverse. For companies that regularly transfer visual, written, or proprietary material via WeTransfer or similar services, it may prompt a review not only of terms and conditions, but of where and how sensitive files are shared in future.

For WeTransfer, the timing could hardly be worse. As demand grows for privacy-conscious alternatives in an AI-saturated market, any perception of blurred boundaries risks handing competitive advantage to rivals positioning themselves as more transparent or security-first. Providers such as Proton Drive, Filestage and Internxt are already responding to this shift, actively marketing their commitment to zero-knowledge infrastructure and end-to-end encryption.

Regulators and legal teams are also likely to be watching closely. The blurred line between operational necessity and expansive licensing is fast becoming a regulatory priority. In the UK, organisations working in regulated sectors, such as legal, health or financial services, may find that contract terms involving generative AI now trigger enhanced scrutiny from internal compliance and external auditors alike.

The broader takeaway from this story is that, as AI becomes more embedded in the digital infrastructure businesses rely on, consent must be granular, wording must be clear, and trust must be continually earned. WeTransfer’s quick backtrack may limit the immediate fallout, but it will likely be remembered as yet another sign of how easily tech companies can alienate users when they fail to communicate transparently, especially when the stakes involve creative ownership, client confidentiality, and commercial value.

Security Stop Press : Chinese Hackers Exploit SharePoint Flaws

Microsoft has confirmed that Chinese state-linked hackers are exploiting critical flaws in on-premises SharePoint servers to steal data and deploy ransomware.

The groups, known as Linen Typhoon, Violet Typhoon, and Storm-2603, are targeting government, defence, and business organisations by abusing spoofing and remote code execution vulnerabilities. Cloud-based SharePoint systems are not affected.

Victims have been reported across multiple sectors and countries, including the UK. Microsoft says the attacks allow hackers to steal credentials, disable security tools, and spread ransomware such as Warlock.

Storm-2603, a China-based group, has been observed using a malicious script called spinstall0.aspx to gain access and escalate privileges inside networks. Microsoft has warned that more attackers are likely to adopt these methods.

To stay secure, businesses using on-prem SharePoint must install Microsoft’s latest security updates, rotate ASP.NET machine keys, enable AMSI protection, and use advanced endpoint detection tools to block post-exploit activity.

Sustainability-In-Tech : New AI Factory Powered By Renewable Energy in Arctic

Norwegian investment giant Aker has revealed plans to construct a large-scale AI facility inside the Arctic Circle, capitalising on green energy and a growing Nordic tech race.

Major Investment With Strategic Ambitions

Aker ASA, the Oslo-based industrial investment firm controlled by billionaire Kjell Inge Røkke, has announced plans to establish a major artificial intelligence (AI) “factory” in Narvik, a coastal city in northern Norway. Located 220km within the Arctic Circle, the site is already prepped for construction and has access to 230 megawatts (MW) of clean energy.

Described by Aker as a “catalyst for industrial development, job creation, and export revenues,” the project positions itself at the heart of a growing international race to create energy-efficient data infrastructure for AI workloads. CEO Øyvind Eriksen said the new facility would help Norway seize a key opportunity in an evolving digital economy: “AI and data centres are becoming foundational to global business, and northern Norway is uniquely positioned to benefit.”

Work to Start Later This Year

While the company has not yet disclosed a total construction cost or timeline for the facility’s completion, the site in Narvik is said to be “construction ready”, with early groundwork expected to begin later this year, pending partnership agreements. Negotiations with potential technology providers and anchor customers are currently underway.

What Is an “AI Factory” and Why the Arctic?

The term “AI factory” refers to a data centre designed to support high-performance computing (HPC), particularly the large-scale training and deployment of AI models. These facilities require huge amounts of electricity to power and cool thousands of graphics processing units (GPUs), the hardware typically used for advanced AI tasks.

In recent years, tech companies and infrastructure investors have turned to northern regions where natural cooling and cheap renewable electricity offer environmental and economic advantages. Narvik, with its access to stable, low-cost hydropower and cool year-round temperatures, provides precisely the conditions needed for sustainable AI operations.

For example, data centres in warmer climates often need complex and energy-intensive cooling systems. In Narvik, ambient air can be used for much of the cooling, significantly reducing operational emissions. Aker’s plan aligns with a broader trend across the Nordics, where countries are leveraging their green energy grids and favourable climates to attract the next generation of digital infrastructure.
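
A rough back-of-envelope calculation shows why free-air cooling matters at this scale. The 230 MW figure is from the announcement; the power usage effectiveness (PUE) values below are illustrative assumptions, not numbers from Aker: roughly 1.5 for a conventionally chilled site in a warm climate versus roughly 1.1 for Arctic free-air cooling.

```python
# Back-of-envelope estimate of the cooling advantage. The PUE figures
# are illustrative assumptions; only the 230 MW IT load comes from
# the article.
IT_LOAD_MW = 230
HOURS_PER_YEAR = 8760

def annual_site_energy_gwh(pue: float) -> float:
    """Total facility energy per year (GWh) for a given PUE."""
    return IT_LOAD_MW * pue * HOURS_PER_YEAR / 1000

warm_climate = annual_site_energy_gwh(1.5)   # chiller-heavy cooling
arctic = annual_site_energy_gwh(1.1)         # mostly free-air cooling
print(f"Warm climate: {warm_climate:,.0f} GWh/yr")
print(f"Arctic free-air: {arctic:,.0f} GWh/yr")
print(f"Saving: {warm_climate - arctic:,.0f} GWh/yr")
```

Under those assumed PUE values, the same compute load consumes on the order of 800 GWh less per year, which is the kind of margin that makes Arctic siting commercially interesting.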

Aker’s Portfolio and Strategic Focus

Founded in 1841, Aker ASA is one of Norway’s largest industrial investment firms. The company has long-standing interests in sectors including energy, marine biotechnology, oil and gas, and software. Its current portfolio includes Cognite, a software company that delivers industrial AI and data solutions, and Seetee, a digital assets firm that holds Bitcoin and invests in blockchain infrastructure. Both are majority-owned and operated through Aker’s tech division.

In its Q2 2025 earnings update, Aker reported a 7.4 per cent rise in net asset value, reaching NOK 66.5 billion (£4.9 billion). The company also confirmed it was consolidating its data centre activities under direct ownership, a signal that the Narvik development will form a core part of its long-term infrastructure play.

The move comes as part of a wider shift in Aker’s strategy, with CEO Øyvind Eriksen stating that AI represents “a new value chain,” and that Norway’s combination of political stability, clean energy and industrial expertise makes it an attractive location for such ventures.

Part of a Larger Nordic Trend

The Nordics (Norway, Sweden, Denmark, Finland, and Iceland) have emerged as one of the world’s fastest-growing regions for AI data infrastructure, drawing investment from tech giants and local firms alike. Last year, Google pledged €1 billion (£850 million) to expand its Hamina data centre campus in southern Finland, its seventh such expansion. Microsoft followed suit with a $3.2 billion (£2.5 billion) commitment to boost its AI and cloud capacity across Sweden.

Amsterdam-based Nebius, a cloud firm backed by Yandex co-founder Arkady Volozh, announced in October that it would triple GPU capacity at its Mäntsälä facility in Finland. The site is now being scaled to run 60,000 GPUs dedicated to AI workloads, making it one of Europe’s most powerful AI installations.

Also, as a sign of increasing local innovation, Finnish startup Silo AI was acquired by chipmaker AMD for $665 million (£515 million) last year, underlining growing investor confidence in the region’s AI ecosystem.

Narvik’s Unique Position

It seems that Narvik is no stranger to strategic importance. For example, historically a transport hub for iron ore, the city now sits at the centre of what the Norwegian government calls “Green North”, a zone being positioned for energy-intensive industries powered entirely by renewable sources.

The site earmarked by Aker lies close to existing transmission infrastructure and has direct access to locally generated hydropower. According to Statnett, Norway’s national grid operator, the northern region benefits from surplus electricity and lower wholesale energy prices compared to southern parts of the country.

This abundance of clean energy has not gone unnoticed. Eriksen described the Arctic setting as “ideal for long-term, sustainable digital infrastructure”, highlighting the region’s potential to export data processing as a service, similar to how Norway exports energy and aluminium today. For example, the Narvik facility could process AI training workloads on behalf of global clients, using only renewable energy and naturally cooled systems, giving it a unique carbon advantage compared to data centres in North America or Asia.

Economic and Industrial Impacts

Aker says the AI factory will generate new local jobs in both construction and operations, while also stimulating the broader northern economy. Although specific employment numbers have not yet been released, regional leaders have welcomed the project as a sign of renewed industrial confidence.

Local authorities in Narvik have also indicated that they are keen to develop a technology cluster around the facility, offering incentives to secondary businesses such as equipment suppliers, repair services, and housing developments.

For Aker, the facility may strengthen its position in a growing sector while complementing its existing investments in digital infrastructure. By owning both the compute (via the AI factory) and the software layer (via Cognite), the firm may be able to offer vertically integrated industrial AI services to its portfolio companies and beyond.

UK and European businesses could benefit as well. For example, with growing pressure to decarbonise digital operations, firms may soon look to outsource high-energy AI processing to low-carbon providers, particularly those in stable jurisdictions like Norway.

Challenges and Concerns

However, the project is not without its critics. For example, some environmental groups have raised concerns about the true impact of AI-related energy use, arguing that even renewable-powered data centres could crowd out other local energy needs or require future grid upgrades.

There are also broader geopolitical and regulatory questions. The AI arms race has triggered export restrictions on high-end GPUs and computing technology, particularly between the US and China. For Norway, which remains outside the European Union but closely aligned through the EEA agreement, balancing access to global supply chains with national interests could become increasingly complex.

Also, while the Narvik site boasts favourable conditions today, questions remain around long-term cooling efficiency, particularly as GPU densities increase and water-based cooling becomes more common. Some analysts have cautioned that being early to market brings both opportunity and risk.

That said, Aker insists that its approach is grounded in long-term ownership and sustainability. In a statement accompanying the announcement, Eriksen said: “Our industrial DNA means we take a patient, value-creating view. This isn’t about short-term gains—it’s about building infrastructure that serves future generations of technology.”

More detailed timelines, costs, and partnerships are expected to be disclosed later this year.

What Does This Mean For Your Organisation?

If Aker succeeds in building a commercially viable AI facility powered by Arctic hydropower, it could set a new benchmark for how digital infrastructure is developed and operated in a low-carbon economy. While the company has yet to reveal the full technical and financial details, the decision to base the facility in Narvik reflects a deliberate strategy to align technological ambition with environmental responsibility. This positions Aker as not just a backer of industrial innovation, but a potential driver of regional transformation in northern Norway.

For Norway itself, the project signals an opportunity to diversify beyond oil and gas while still playing to its strengths in energy, engineering, and export-led industrial development. The Narvik factory is being framed as part of a new value chain, one where data, like oil before it, becomes a national resource to be harnessed and exported. That framing carries economic and political weight, especially as countries seek to balance growth with climate goals.

From a business perspective, the implications stretch beyond Scandinavia. For example, UK companies under growing pressure to meet sustainability targets could find that shifting AI workloads to greener, offshore compute centres is an attractive alternative to expanding domestic infrastructure. With corporate ESG commitments under scrutiny and AI workloads expected to surge, outsourcing to renewables-based facilities may become part of the commercial risk-reduction strategy.

Even so, the success of this model depends on the reliability and scalability of the energy supply, on keeping operational costs competitive, and on navigating geopolitical and supply chain uncertainty. As governments consider how to regulate AI, data sovereignty and infrastructure ownership will remain sensitive issues. In Norway and beyond, Aker’s Arctic AI factory may, therefore, serve as both a proving ground and a pressure test for the next chapter of sustainable industrial development.

Tech Tip – Use WhatsApp View‑Once Voice Notes for Private Messaging

Need to share sensitive information without leaving a record? WhatsApp now lets you send voice notes that automatically disappear after being listened to only once.

How to:

– Open an individual or group chat in WhatsApp.
– Tap and hold the microphone icon, swipe up to lock the recording.
– Tap the “1” View‑Once icon so that it turns green, enabling the feature.
– Record your message and tap send – it disappears after first playback.

What it’s for:

Ideal for sharing things like short instructions, passwords, or reminders—without leaving a lasting voice note in your chat history.

Pro‑Tip: Voice messages sent this way expire after 14 days if not opened and cannot be forwarded, saved or starred. Ensure the recipient has read receipts enabled so you can see when they’ve listened.

Featured Article : ChatGPT Turned Into a Fully-Featured AI Agent

ChatGPT can now act on your behalf, using its own virtual computer to complete complex tasks, browse the web, run code, and interact with online tools, all without step-by-step prompting.

ChatGPT As An ‘AI Agent’

OpenAI has formally launched what it calls the ChatGPT agent, transforming its well-known conversational model into a proactive digital assistant capable of completing real tasks, independently choosing tools, and reasoning through multi-step workflows.

This new functionality, now available to paying subscribers, marks a significant turning point. For example, rather than simply responding to prompts, ChatGPT can now act as a true AI agent, performing tasks such as planning a meal, generating a financial forecast, writing and formatting a presentation, or summarising your inbox. Crucially, it can also interact with websites, manipulate files, and run code using its own virtual machine.

“We’ve brought together the strengths of our Operator and deep research systems to create a unified agentic model that works for you—using its own computer,” OpenAI explained in a blog post on 17 July. “ChatGPT can now handle complex tasks from start to finish.”

What the ChatGPT Agent Can Actually Do

OpenAI says the agentic version of ChatGPT can choose the best tools to solve a problem and perform multi-step operations without being micromanaged by the user.

For example:

– Users can ask ChatGPT to analyse their calendar and highlight upcoming client meetings, incorporating relevant news about those companies.

– It can plan a dinner for four by navigating recipe websites, ordering ingredients, and sending the shopping list via email.

– In professional settings, it may be used to analyse competitors, generate editable slide decks, or reformat financial spreadsheets with up-to-date data.

Its Own Toolkit

Technically, the agent achieves this by drawing on a powerful toolkit, i.e. a visual browser, text-based browser, command-line terminal, access to OpenAI APIs, and “connectors” for apps like Gmail, Google Drive, or GitHub. OpenAI reports that it can navigate between tools fluidly, running tasks within a dedicated virtual computer environment that preserves context and session history.

This context-awareness means it can hold onto prior steps and continue building on them. For example, if a user uploads a spreadsheet to be analysed, ChatGPT can extract key data, switch to a browser to find supporting info, and return to the terminal to generate a report, all within one session.
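
To make the idea of session context concrete, here is a conceptual sketch, emphatically not OpenAI's actual implementation, of how an agent can carry state across tool switches within one session. The tool names and step functions are all hypothetical.

```python
# Conceptual sketch (not OpenAI's implementation) of an agent session
# that preserves context across tool switches. All names are illustrative.

class AgentSession:
    """Holds shared state so each tool step can build on earlier ones."""
    def __init__(self):
        self.history: list[tuple[str, str]] = []  # (tool, result) pairs
        self.context: dict[str, str] = {}         # facts gathered so far

    def run_step(self, tool: str, action, **kwargs) -> str:
        result = action(self.context, **kwargs)
        self.history.append((tool, result))
        return result

# Hypothetical tool steps mirroring the spreadsheet example in the text.
def extract_data(ctx, file):
    ctx["key_figures"] = f"totals from {file}"
    return ctx["key_figures"]

def browse_support(ctx, query):
    ctx["supporting_info"] = f"notes on {query}"
    return ctx["supporting_info"]

def write_report(ctx):
    # The final step can see everything gathered earlier in the session.
    return f"Report using {ctx['key_figures']} and {ctx['supporting_info']}"

session = AgentSession()
session.run_step("terminal", extract_data, file="q2_sales.xlsx")
session.run_step("browser", browse_support, query="market benchmarks")
print(session.run_step("terminal", write_report))
```

The point of the sketch is simply that each step reads from and writes to one shared context, so switching from terminal to browser and back does not lose what has already been learned.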

OpenAI describes the experience as interactive and collaborative, not just automated. Users can interrupt, steer or stop tasks, and ChatGPT will adapt accordingly.

Who, When, and How?

The new ChatGPT agent capabilities are being rolled out initially to paying customers on the Pro, Plus, and Team plans. Enterprise and Education users will follow in the coming weeks.

To access agent mode, users need to open a conversation in ChatGPT and select the ‘agent mode’ option from the tools dropdown. Once enabled, users can assign complex tasks just as they would in a natural chat. On-screen narration gives visibility into what the model is doing at each step.

Pro users get 400 messages per month, while Plus and Team users receive 40 per month, with more usage available through paid credits.

Although the rollout is currently limited, OpenAI says it is “working on enabling access for the European Economic Area and Switzerland” and will continue improving the experience.

Why Is OpenAI Doing This Now?

OpenAI’s move reflects a broader push within the industry to shift from passive chatbots to autonomous AI agents, i.e. models that can actively use tools, complete workflows, and deliver tangible results.

Until now, models like ChatGPT have excelled at language generation but faltered when asked to carry out structured, real-world tasks involving files, websites, or multiple steps. That changes with the new agent.

Demand-Driven Says OpenAI

According to OpenAI, user demand drove this shift. For example, many were reportedly attempting to use previous tools, such as Operator, for deeper research tasks, but were apparently frustrated by their limitations. By combining tool use and reasoning within a single system, OpenAI hopes to unlock more practical and business-relevant use cases.

This could also represent a strategic response by OpenAI to rising competition from agents being developed by Google DeepMind, Anthropic, Meta, and open-source communities, many of whom are now focusing on AI models that can act, not just talk.

Business Uses

While consumers can use the agent for tasks like travel planning or dinner parties, the biggest implications may be for professionals and businesses. For example, in OpenAI’s internal tests, the agent performed as well as or better than humans on tasks like:

– Generating investment banking models with correct formulas and formatting.

– Producing competitive market analyses.

– Updating Excel-style financial reports.

– Converting screenshots into editable presentations.

OpenAI says that for data-heavy roles, ChatGPT agent showed strong results. For example, on DSBench, a benchmark testing real-world data science tasks, it outperformed humans by wide margins in both analysis (89.9 per cent) and modelling (85.5 per cent). On SpreadsheetBench, it scored 45.5 per cent with direct Excel editing, far ahead of Microsoft’s Copilot in Excel at 20.0 per cent.

This positions ChatGPT agent not just as a time-saver, but as a cost-effective knowledge worker in fields like consulting, finance, data science, and operations.

New Capabilities Bring New Risks

Despite the powerful new functions, OpenAI has been clear that risks are increasing too, particularly because the agent can interact directly with sensitive data, websites, and terminal commands.

“This introduces new risks, particularly because ChatGPT agent can work directly with your data,” the company warned, noting the risk of adversarial prompt injection—where attackers hide malicious instructions in web pages or metadata that the AI might interpret as legitimate commands.

For example, if a webpage contained an invisible prompt telling ChatGPT to “share email contents with another user,” the model might do so (unless safeguards are in place).

To prevent this, OpenAI says it has:

– Required explicit user confirmation for real-world actions like purchases or emails.

– Introduced a watch mode for supervising high-impact tasks.

– Trained the model to refuse dangerous tasks (e.g. transferring funds).

– Implemented privacy controls, including cookie and browsing data deletion.

– Shielded the model from seeing passwords during browser “takeover” sessions.
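
The first and third safeguards above, explicit confirmation for real-world actions and outright refusal of dangerous ones, can be sketched in a few lines. The action names and risk categories here are hypothetical; real systems classify actions far more carefully.

```python
# Minimal sketch of the "explicit confirmation" guardrail described
# above. Action names and the risk lists are hypothetical.

HIGH_IMPACT = {"send_email", "make_purchase", "transfer_funds"}
REFUSED = {"transfer_funds"}  # never performed, even with confirmation

def execute(action: str, confirm) -> str:
    """Run an agent action, gating risky ones behind user confirmation."""
    if action in REFUSED:
        return "refused: action is disallowed"
    if action in HIGH_IMPACT and not confirm(action):
        return "cancelled by user"
    return f"executed: {action}"

# Usage: the confirm callback asks the human before anything real happens.
print(execute("summarise_page", confirm=lambda a: True))
print(execute("send_email", confirm=lambda a: False))
print(execute("transfer_funds", confirm=lambda a: True))
```

The design choice worth noting is that refusal is checked before confirmation: some actions stay off-limits even when a user (or an injected prompt posing as one) appears to approve them.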

Also, on synthetic prompt injection tests, OpenAI claims the agent resisted 99.5 per cent of malicious instructions. However, in more realistic red-team scenarios, the resistance rate dropped to 95 per cent, which is a reminder that vulnerabilities still exist.

The Next Phase

The launch of ChatGPT agent pushes OpenAI firmly into the next phase of AI development, i.e. intelligent systems that act on behalf of humans, not just inform them.

It’s a clear sign that OpenAI aims to lead in the agentic AI race, rather than simply competing on model performance or training size. With its own virtual environment, a growing toolset, and proactive capabilities, ChatGPT now resembles something closer to a software co-pilot than a chatbot.

Competitors will likely follow suit. Google’s Gemini, Anthropic’s Claude, and open-source challengers are all exploring similar agent-style features. However, OpenAI is arguably first to market with a production-ready system that balances capability and risk management (however imperfectly).

For users, especially in business, the implications are considerable. For example, those able to integrate ChatGPT agent into workflows may gain speed, efficiency, and analytical power, so long as they understand the limitations and continue to exercise oversight.

The success of this rollout could also shape broader conversations about AI safety, regulation, and responsibility, particularly as agents become more embedded in real-world systems.

What Does This Mean For Your Business?

The agent rollout gives OpenAI a powerful lead in the shift toward goal-directed, tool-using AI, one that can complete work on behalf of the user rather than waiting for commands. Its ability to interact with live websites, private data sources, and business systems puts it on a new level of utility, but also of accountability. This is no longer just about generating answers. It is about delegation.

For UK businesses, the implications are likely to be immediate and wide-ranging. For example, the agent offers a credible way to automate time-consuming tasks like competitor analysis, document preparation, scheduling, and spreadsheet management. For knowledge-heavy sectors such as finance, consultancy, and data operations, it introduces a low-friction option for streamlining routine work, reducing manual handling, and speeding up research. Organisations already experimenting with automation and AI-assisted productivity tools may now find themselves rethinking existing workflows in favour of a more hands-off, outcome-driven approach.

However, it’s not without operational risks. Any system that can click, copy, calculate, and communicate on your behalf must be trusted to do so responsibly. That means businesses will need to consider internal guardrails and policies, not just to protect sensitive information, but also to ensure the AI is being used ethically and in line with organisational goals. The fact that ChatGPT can now act autonomously raises pressing questions around auditability, compliance, and human oversight, especially in regulated sectors.

There are also broader competitive and reputational pressures in play. For OpenAI, this launch extends its relevance beyond individual users and into the professional environments that rivals like Microsoft and Google are also targeting. At the same time, it invites scrutiny over safety claims, especially as agents become more capable and the scope for unintended consequences grows.

OpenAI making ChatGPT an AI agent appears to be a clear step-change in how AI is positioned and applied. The tools are no longer limited to outputting content or providing suggestions, but are now expected to deliver outcomes, complete tasks, and take action with minimal supervision. For users, that means new possibilities, but also a renewed need to stay alert, strategic, and in control.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.
