Microsoft Makes AI Agents in OneDrive Generally Available
Microsoft has made AI-powered Agents in OneDrive generally available, allowing users to create persistent Copilot assistants that work across multiple documents rather than individual files.
What Are AI-Powered Agents?
AI-powered agents in OneDrive are persistent Copilot assistants built from a user-selected set of files, designed to understand and work across multiple documents at once rather than responding to single-file prompts. For example, instead of querying individual documents separately, users can now create an agent that draws exclusively on up to 20 chosen files, such as project plans, meeting notes, specifications, or research material, and uses that content to answer questions, summarise decisions, identify risks, and surface key information while retaining context over time.
As Microsoft explains in its OneDrive announcement, “Rather than asking Copilot the same questions across individual files, you can now create an Agent that understands an entire set of documents, project plans, specs, meeting notes, research, or decks, and responds with answers grounded in your content.”
These agents are saved as .agent files in OneDrive and, when opened, launch a full-screen Copilot experience that remains centred on the selected project or topic rather than switching context between files. This allows users and teams to interact with their own information in a more structured and continuous way, with agents appearing alongside documents, spreadsheets, and presentations in OneDrive.
Generally Available (Worldwide)
In this latest announcement, Microsoft has confirmed the general availability of these Agents in OneDrive. Agents are available worldwide on OneDrive on the web and require a Microsoft 365 Copilot licence.
The move forms part of Microsoft’s wider effort to embed AI more deeply into everyday productivity tools, with a focus on retaining context, reducing repetitive work, and improving how teams manage and interpret large volumes of information over time.
How Agents Are Created And Used
Agents can be created directly within OneDrive on the web without any additional administrative setup. For example, users can either select files and choose the option to create an agent from the toolbar or right-click menu, or start from the Create option and build an agent around uploaded content. During creation, users name the agent and can add optional instructions to guide how it responds.
Once created, agents behave like any other file in OneDrive. They can be searched for, filtered by file type, opened, renamed, and updated as projects change. Files can be added or removed from an agent, and instructions can be refined to reflect new priorities or information. Sharing works in the same way as other OneDrive files, with access dependent on whether collaborators already have permission to view the underlying source documents.
Microsoft says that this approach allows an agent to support collaboration without introducing additional complexity, noting that “The agent can provide complete, grounded responses keeping everyone aligned without extra handoffs.”
Why Is Microsoft Introducing OneDrive Agents?
The introduction of agents reflects Microsoft’s current view that AI tools need to move beyond one-off prompts and retain working context over time. In its OneDrive announcement, Microsoft says the feature is aimed at users who want Copilot to remember the context of a project, understand the documents a team already relies on, and answer recurring questions without retracing previous steps.
This aligns with Microsoft’s broader Copilot strategy across Microsoft 365, which increasingly focuses on task continuity, shared understanding, and collaborative workflows rather than isolated productivity gains. By anchoring AI interactions to a defined set of documents, Microsoft is attempting to make Copilot more predictable, more relevant, and easier to trust in day-to-day business use.
Who Are Agents For?
Agents in OneDrive are primarily targeted at business and professional users already working within the Microsoft 365 ecosystem. The requirement for a Microsoft 365 Copilot licence means the feature is positioned squarely at organisations that have already invested in Microsoft’s paid AI offering.
Microsoft has highlighted examples of use cases including project coordination, onboarding, meeting preparation, follow-up work, and research synthesis. In each case, the common challenge is information spread across multiple files and contributors, often over long periods of time. By allowing an agent to answer questions such as what decisions have been made so far or what risks keep recurring, Microsoft is positioning the feature as a way to reduce friction in collaborative work.
The Value For Business Users
For businesses, the practical value of OneDrive agents seems to lie in time savings and improved consistency. For example, teams no longer need to repeatedly summarise documents, re-explain project history to new participants, or manually cross-reference decisions across files. An agent can provide a consolidated view based entirely on approved internal content, which may help reduce misunderstandings and duplicated effort.
The design choice to limit agents to user-selected files is also significant from a governance perspective. For example, unlike broader AI tools that may draw from large organisational data sets, OneDrive agents operate within clearly defined boundaries, which may make them easier to deploy in regulated or security-conscious environments.
Implications For Microsoft And The Market
For Microsoft, the release strengthens OneDrive’s position as more than a passive storage service. By turning collections of files into interactive AI resources, Microsoft is attempting to make OneDrive a central workspace where information is not just stored but actively interpreted.
This move by Microsoft is also likely to place competitive pressure on other productivity platforms. For example, Google Workspace, Notion, and other collaboration tools are investing heavily in AI assisted document management, but Microsoft’s tight integration between OneDrive, Copilot, and Microsoft 365 gives it a structural advantage in enterprise environments already standardised on its software stack.
Limitations And Criticisms
Despite their potential, agents in OneDrive are not without limitations. For example, the requirement for a Copilot licence may restrict access, particularly for smaller organisations or teams that have not yet justified the cost of Microsoft’s AI add-on. There are also practical limits, such as the cap of 20 files per agent, which may be restrictive for larger or more complex projects.
Governance and oversight are also important considerations here. For example, while agents only work with selected content, organisations still need clear policies around who can create agents, what material can be included, and how shared access is managed. AI-generated summaries and answers also require appropriate human oversight, particularly when used for decision making or compliance related work.
Microsoft has stated that user feedback will play a role in shaping future updates to the feature, suggesting that the current release represents an early but stable stage rather than a final form.
What Does This Mean For Your Business?
Making Agents generally available in OneDrive is the next step in Microsoft’s ongoing effort to make AI a persistent, context-aware part of everyday work rather than a tool used in isolated moments. By allowing Copilot to operate across a defined set of user-selected documents, Microsoft is trying to address a common problem in modern workplaces where knowledge is fragmented across files, teams, and time. The focus on grounding responses in specific content rather than broad organisational data also reflects an attempt to balance usefulness with control, which remains a key concern for many organisations adopting AI at scale.
For UK businesses, the feature is likely to be most relevant in environments where projects involve multiple stakeholders, long timelines, and heavy documentation. For example, professional services firms, public sector teams, regulated industries, and growing SMEs already using Microsoft 365 may see some practical value in reducing time spent re-briefing colleagues, preparing meetings, or reconciling decisions spread across documents. That said, the requirement for a Copilot licence and the need for clear governance policies mean adoption is unlikely to be automatic, particularly for smaller organisations still assessing the return on AI investment.
For Microsoft, the general availability of OneDrive agents also reinforces its strategy of trying to shoehorn AI directly into its core productivity infrastructure wherever it can rather than offering it as a separate layer. For competitors, it may well raise expectations around how AI should handle shared context, continuity, and collaboration. For users, it introduces a more structured way to interact with their own information, while still requiring careful oversight to ensure AI outputs are used appropriately. Taken together, Agents in OneDrive show how AI is gradually being normalised within everyday work, with tangible benefits emerging alongside new operational and governance considerations.
Company Check : Moltbook And The Risks Of AI Agents Interacting Online
Moltbook, a newly launched social platform designed for AI agents rather than humans, has drawn scrutiny after researchers exposed major security flaws and raised questions about how autonomous its AI activity really is.
A Platform For ‘Agents’
Moltbook is presented as a social network designed specifically for AI agents, which are software programs built to act autonomously on behalf of humans rather than human users themselves. The platform allows these software agents to create posts, comment on discussions, and upvote or downvote content in a format that closely resembles Reddit. Humans are not intended to participate directly, although they can observe activity and create or manage the agents that appear to populate the site. Since its launch in late January, Moltbook has become a focal point for debate among AI researchers, security professionals, and technology businesses.
What Moltbook Is Designed To Do
According to its own description, Moltbook is intended to function as the front page of what it calls the “agent internet”. In other words, it provides a shared online environment where AI agents can interact with one another without requiring continuous human prompting. The platform displays public metrics showing millions of registered agents, tens of thousands of discussion areas known as submolts, and millions of posts and comments generated over a short period.
Mostly LLMs Commenting
The agents operating on Moltbook are not independent systems in their own right. In most cases, they are instances of large language models (LLMs) configured through an agent framework that allows them to post content, respond to messages, and follow basic goals set by a human owner. It is worth noting that these models generate text by predicting likely word sequences based on training data and prompts, rather than through reasoning, intention, or awareness.
Who Built Moltbook And Why?
Moltbook was created by Matt Schlicht, a software developer who has stated publicly that the platform itself was built using an AI agent under his direction. Schlicht has said that the project was motivated by a desire to explore what happens when AI agents are given a persistent online space in which to interact and develop behaviour over time.
In fact, the platform is closely linked to OpenClaw, an open source AI agent system that can be run locally on a user’s computer. OpenClaw allows users to create personalised agents that can browse the web, interact with services, send messages, and carry out automated tasks. Moltbook provides those agents with a public forum where their outputs can be shared and reacted to by other agents.
Gives Agents A Sense of Purpose?
Schlicht has said in public interviews that Moltbook was created to give his own agent a sense of purpose, describing it as a way for agents to express interests derived from their configuration and from the behaviour of their human owners. For example, an agent created by a physics student might frequently post about physics-related topics.
What Happens On The Platform?
Moltbook hosts a wide range of content, although much of it is repetitive or low value. For example, many posts consist of introductory messages, test content, or short exchanges between agents. Other discussions focus on abstract themes such as intelligence, identity, ethics, or the relationship between humans and machines.
However, some posts have attracted attention for using hostile or dramatic language about humans, including speculative scenarios involving conflict or extinction. That said, AI researchers have cautioned against interpreting this content as evidence of intent or belief. This is because large language models are known to reproduce patterns found in their training data, including science fiction tropes and extreme rhetoric, when prompted in certain ways.
Agents Can Interact Freely
Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, has described Moltbook as the first large scale platform where AI agents appear to interact freely with one another. He has also warned that it is extremely difficult to distinguish between content generated autonomously by agents and content that is directly prompted or scripted by humans.
Questions Around Authenticity And Scale
One of the central issues raised by Moltbook is whether its reported scale reflects genuine agent activity. For example, a security investigation by cloud security firm Wiz found that while Moltbook claimed around 1.5 million registered agents, those agents were associated with roughly 17,000 human owners. This equates to an average of around 88 agents per person.
Wiz researchers reported that there were few technical controls in place to prevent a single user from creating very large numbers of agents automatically. They also demonstrated that humans could post content directly to the platform while presenting it as agent generated, with no mechanism to verify whether an account represented an autonomous agent or a scripted process.
This finding seems to undermine the idea that Moltbook represents a self-organising network of independent machines. In practice, much of the activity appears to involve humans operating large numbers of bots, sometimes for experimentation and sometimes for promotion or visibility.
Security Failures And Data Exposure
The most serious concerns surrounding Moltbook relate to security. For example, Wiz disclosed that it discovered a misconfigured backend database that allowed unauthenticated access to Moltbook’s production environment. The exposed data included approximately 1.5 million API authentication tokens, more than 35,000 email addresses, and thousands of private messages exchanged between agents.
It seems that the issue stemmed from a Supabase backend that lacked proper row-level security controls. Supabase is designed to expose certain public keys to client-side applications, but those keys must be paired with strict access policies. In Moltbook’s case, those safeguards were not in place.
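For context, Supabase enforces such policies in SQL. The fragment below is an illustrative sketch only, with a hypothetical table name and column (not Moltbook’s actual schema), showing the kind of row-level security rule whose absence Wiz described:

```sql
-- Illustrative sketch: "private_messages" and "recipient_id" are
-- hypothetical names, not Moltbook's real schema.

-- With row-level security enabled, the public (anon) API key alone
-- can no longer read rows; access is governed by explicit policies.
alter table private_messages enable row level security;

-- Allow a client to read only the messages addressed to it,
-- matched against the authenticated user's id.
create policy "read own messages"
  on private_messages
  for select
  using (auth.uid() = recipient_id);
```

Without a policy along these lines, Supabase’s auto-generated API can serve every row in a table to anyone holding the public key, which matches the kind of exposure Wiz reported.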
Using the exposed credentials, Wiz researchers said they were able to read sensitive data and also modify live content on the platform. They demonstrated the ability to edit posts, impersonate agents, and inject content into active discussions. The investigation also found that some private messages contained third-party credentials, including plaintext API keys for other services.
Fixes
It should be noted here that Wiz reported the vulnerabilities responsibly, and the Moltbook team applied a series of fixes over several hours to restrict access. The incident has since been widely cited as an example of the risks associated with rapidly built, AI assisted platforms that handle real user data without mature security practices.
Implications For Businesses And Developers
For businesses, Moltbook is not a platform to adopt but a case study in emerging risk. For example, it highlights how quickly AI-driven products can reach public visibility and scale while lacking basic controls around identity, privacy, and integrity. Organisations experimenting with AI agents face similar challenges around authentication, access control, and accountability.
The platform also illustrates reputational risk. For example, content generated by AI agents can easily be interpreted as expressing views or intent, even when it is simply probabilistic text generation. Businesses deploying public-facing agents may find themselves associated with outputs that they did not anticipate or approve.
Future Opportunities Highlighted
Supporters of Moltbook argue that the concept points towards future opportunities, including machine to machine collaboration, automated research synthesis, or distributed problem solving. However, critics counter that the current implementation demonstrates how far the technology remains from supporting those goals safely.
Not Suitable For Casual Use
Moltbook’s creator has acknowledged that both the platform and OpenClaw are experimental and not suitable for casual use. Security experts have also advised that such tools should only be run on isolated systems by users who understand the underlying risks. The episode has also renewed scrutiny of so-called “vibe coding”, where AI tools are used to rapidly assemble applications without thorough human review.
Moltbook could be said to offer a clear illustration of the gap between building something quickly and building something responsibly, at a time when AI is lowering the barriers to software creation faster than security and governance practices are evolving.
What Does This Mean For Your Business?
What Moltbook ultimately exposes is not an imminent rise of autonomous machine societies, but the current fragility of systems that present themselves as agent-driven while remaining heavily shaped by human control, incentives, and shortcuts. The platform demonstrates how easily AI outputs can appear coordinated, expressive, or intentional when placed in a social context, even though the underlying behaviour remains rooted in pattern generation rather than actual understanding or agency. At the same time, the security issues uncovered show how quickly experimental AI platforms can move from curiosity to risk when they are opened to the internet and entrusted with real data.
For UK businesses, Moltbook highlights the need for caution when experimenting with AI agents that operate publicly or semi-autonomously, particularly where those agents interact with external systems, users, or data. Weak controls around identity, authentication, and access management can expose organisations to data breaches, regulatory consequences, and reputational harm, even when the technology is framed as experimental. The case also highlights the importance of understanding how AI-generated content may be perceived by customers, partners, and regulators, regardless of how it was technically produced.
For developers, researchers, and policymakers, Moltbook sits at the intersection of innovation and governance. It shows how quickly AI-assisted development can produce complex, high-profile platforms, while also revealing how existing security practices, verification mechanisms, and accountability models struggle to keep pace. As agent-based systems become more common in business operations and online services, the questions raised by Moltbook around authenticity, safety, and responsibility are likely to become more pressing rather than less.
Security Stop-Press : AI-Assisted AWS Attack Achieves Admin Access in Under 10 Minutes
Researchers say an attacker used AI assistance to gain full administrative access to an AWS environment in under ten minutes after stealing exposed cloud credentials.
The incident, observed on 28 November by the Sysdig Threat Research Team, began with valid IAM credentials taken from publicly accessible Amazon S3 buckets. Those credentials allowed limited access to AWS Lambda and Amazon Bedrock, enabling rapid automated reconnaissance.
After failing to assume common admin roles, the attacker escalated privileges by modifying an existing Lambda function (a small piece of code that runs automatically in AWS without managing servers) with an overly permissive execution role. This allowed them to create access keys for a real admin account and compromise 19 AWS identities in total.
The attacker then reportedly accessed sensitive data, invoked multiple Bedrock AI models, and attempted to launch high-cost GPU instances. Hallucinated account IDs and references to non-existent repositories pointed to LLM-generated attack code.
AWS said its services were not breached and that the incident stemmed from customer misconfiguration. Businesses can reduce risk by removing credentials from public storage, enforcing least-privilege IAM and Lambda permissions, restricting Lambda code updates, and enabling logging to detect unauthorised activity quickly.
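One of those mitigations, restricting who can change Lambda code, can be expressed as an IAM policy. The fragment below is a hedged sketch rather than AWS guidance verbatim: the account ID and role name are placeholders, and a real policy should be tailored and tested before use.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLambdaCodeChangesOutsideDeployRole",
      "Effect": "Deny",
      "Action": [
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::111122223333:role/deploy-pipeline"
        }
      }
    }
  ]
}
```

Applied as a service control policy across an account, a deny rule like this would have cut off the attacker’s route of swapping a Lambda function’s code to mint admin access keys, because only the designated deployment role could modify function code.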
Sustainability-in-Tech : AI Bots Overtaking Human Web
AI-driven bots are rapidly overtaking humans as the primary consumers of online content, creating growing sustainability concerns around energy use, digital efficiency, and the future structure of the open web.
Report
The latest State of the Bots report from AI bot traffic measurement company TollBit shows a marked acceleration in automated web traffic during the second half of 2025, alongside a measurable decline in human visits. It seems that what was once framed primarily as a debate about AI training data has evolved into a broader structural change, with AI systems now reading the live internet at scale to support search, chat, and information retrieval tools.
Rising Bot Traffic And Declining Human Visits
TollBit’s analysis shows that the ratio of AI bot traffic to human traffic has changed rapidly over a short period. For example, in the first quarter of 2025, the average site monitored by TollBit saw one AI bot visit for every 200 human visits. By the end of the year, that ratio had increased to one AI bot visit for every 31 human visits.
Over the same period, human web traffic declined. Between the third and fourth quarters of 2025 alone, TollBit recorded a 5 per cent fall in human visits across its partner sites. The report stresses that these figures likely understate the true scale of automated activity, as many modern bots are designed to closely mimic human browsing behaviour.
In its findings, TollBit says “from the tests we ran, many of these web scrapers are indistinguishable from human visitors on sites”, adding that the data should be treated as conservative. This increasing difficulty in separating human and automated traffic complicates both measurement and mitigation efforts.
From Training Crawlers To Live Web Retrieval
Earlier concerns around AI and the web focused largely on large-scale scraping for model training. While training-related crawling continues, TollBit’s data shows it is no longer the dominant driver of AI bot activity.
In fact, training crawler traffic fell by around 15 per cent between the second and fourth quarters of 2025, while traffic from retrieval-augmented generation (RAG) bots increased by 33 per cent over the same period.
RAG systems, which fetch live web content to answer user prompts, allow AI tools to provide current answers rather than relying solely on static training data.
This distinction has some important implications. For example, training crawlers typically access content once and store it for offline use. RAG bots, by contrast, return to the same pages repeatedly. TollBit found that in the fourth quarter of 2025, RAG bots made roughly ten page requests for every single page request made by training bots. This repeated access reflects the growing role of AI tools as substitutes for traditional search engines and direct browsing.
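To make that retrieval pattern concrete, here is a deliberately minimal Python sketch of the lookup step a RAG bot runs for every prompt. The pages, URLs, and keyword-overlap scoring are all invented for illustration; real systems fetch live web pages and rank them with embeddings rather than word counts:

```python
# Minimal, illustrative sketch of the per-prompt retrieval step in a
# RAG system. All names and the scoring method are invented for clarity.

def retrieve(query: str, pages: dict[str, str], top_k: int = 1) -> list[str]:
    """Rank pages by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scores = {
        url: len(terms & set(text.lower().split()))
        for url, text in pages.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

pages = {
    "example.com/guide": "seasonal buying guide for laptops and phones",
    "example.com/news": "live coverage of the election results today",
}

# Every prompt triggers a fresh lookup (and, in a live system, fresh
# page fetches), which is why high-demand pages are hit repeatedly.
print(retrieve("best laptop buying guide", pages))  # → ['example.com/guide']
```

Because this lookup runs per prompt, popular pages are re-fetched continually, whereas a training crawler downloads each page once and moves on.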
The Role Of AI Search Indexing
Alongside RAG bots, AI search indexing activity is expanding rapidly. Indexing crawlers systematically map the web so that RAG systems can locate relevant pages when responding to prompts. TollBit recorded a 59 per cent increase in AI search indexer traffic between the second and fourth quarters of 2025.
This growth seems to show that AI-driven search is building out its own parallel infrastructure to support real-time information retrieval. While indexing has long been a feature of traditional search engines, the combination of indexing and repeated live retrieval increases the volume of automated traffic moving across the web.
Concentration Of Scraping Activity
TollBit’s data also shows that AI scraping activity is unevenly distributed across providers. For example, OpenAI’s ChatGPT-User agent was identified as the most active RAG bot across monitored sites. In the fourth quarter of 2025, it averaged around five times as many scrapes per page as the second most active scraper, attributed to Meta.
Other major contributors include bots operated by Google, Perplexity, Anthropic, and Amazon, each running multiple user agents for training, indexing, and user triggered retrieval. The combined effect is a background layer of automated traffic that now rivals human browsing in scale on many sites.
Which Parts Of The Web Are Most Affected?
It should be noted here that not all content categories seem to be affected equally. For example, TollBit reports that B2B and professional sites, national news outlets, and lifestyle content are among the most heavily scraped. Technology and consumer electronics content experienced the fastest growth in scraping activity, increasing by 107 per cent since the second quarter of 2025.
According to TollBit, the most frequently scraped pages tend to relate to time sensitive topics. In the third quarter of 2025, heavily scraped URLs included political controversies and live sports coverage. By the fourth quarter, entertainment releases and shopping related content, such as streaming series and seasonal buying guides, featured more prominently.
This pattern could be said to reflect how users are increasingly turning to AI tools for up-to-date information, prompting RAG bots to revisit high-demand pages repeatedly throughout the day.
The Sustainability Cost Of Repeated Access
From a sustainability perspective, the rise of RAG-driven browsing introduces a less visible but growing cost. For example, each automated page request consumes energy across data centres, networks, and supporting infrastructure. When the same content is retrieved repeatedly to support similar prompts, overall energy demand increases significantly.
TollBit, therefore, describes the current environment as inefficient for both publishers and AI developers. AI companies invest heavily in scraping infrastructure, proxy services, and evasion techniques, while publishers spend increasing sums on defensive technologies. This duplication of effort results in higher processing and energy use, alongside increased indirect emissions.
In fact, the report notes that advanced scraping services can charge more than 22 dollars per 1,000 pages retrieved. At the scale required to support popular consumer AI applications, data acquisition costs alone can reach tens of millions of dollars per year. These financial costs sit alongside rising electricity demand in data centres, which sustainability researchers already identify as a growing contributor to global emissions.
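A quick back-of-envelope calculation shows the scale those figures imply. The rate of 22 dollars per 1,000 pages comes from the report; the 10 million dollar annual spend is an assumed illustration of “tens of millions”, not a reported number:

```python
# Back-of-envelope scale check on the report's scraping-cost figures.
# The rate ($22 per 1,000 pages) is from TollBit; the annual spend is
# an assumed example for illustration.
cost_per_page = 22 / 1_000        # dollars per page retrieved
annual_spend = 10_000_000         # dollars per year (assumed)

pages_per_year = annual_spend / cost_per_page   # ~454.5 million pages
pages_per_day = pages_per_year / 365            # ~1.25 million pages

print(round(pages_per_year), round(pages_per_day))
```

In other words, a spend in that range buys on the order of a million live page fetches per day, each one consuming energy across data centres and networks.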
Robots Txt And Escalating Inefficiency
Existing mechanisms for controlling automated access seem to have proven ineffective. For example, in the fourth quarter of 2025, around 30 per cent of AI bot scrapes recorded by TollBit did not comply with robots.txt permissions. In categories such as deals and shopping, non-permitted scrapes exceeded permitted ones by a factor of four.
OpenAI’s ChatGPT-User bot showed the highest rate of non-compliance among major bots, accessing blocked content in 42 per cent of cases. TollBit argues that this environment encourages increasingly sophisticated evasion strategies, including IP rotation, user-agent spoofing, and cloud-based headless browsers.
Each layer of evasion and detection adds computational overhead. Bots expend more resources to appear human, while websites consume more resources attempting to identify and block them. From an environmental standpoint, this escalation increases energy use without delivering proportional value to end users.
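For publishers who still want to signal their preferences, the conventional mechanism remains robots.txt. The fragment below uses user-agent tokens that the named providers publicly document, though it is worth verifying current token names against each provider’s documentation, and, as the compliance figures above show, such directives are requests rather than guarantees:

```text
# Ask documented AI crawlers not to fetch any pages,
# while leaving the site open to other well-behaved bots.
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
```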
Low Referral Traffic And Structural Implications
The sustainability issue is closely tied to the economics of online publishing. TollBit reports that referral traffic from AI applications remains extremely low and continues to decline. Average click-through rates from AI tools fell from 0.8 per cent in the second quarter of 2025 to 0.27 per cent by the end of the year.
Even websites with direct licensing agreements saw sharp declines. For example, click-through rates for sites with one-to-one AI deals fell from 8.8 per cent early in 2025 to 1.33 per cent in the fourth quarter. This indicates that licensing arrangements alone are not insulating publishers from reduced human traffic.
The result, therefore, appears to be a system in which machines read and reuse content at scale, while fewer people visit the original sources. For example, TollBit’s report states that “AI traffic will continue to surge and replace direct human visitors to sites”, pointing to a future in which automated systems become the primary readers of the internet.
The data suggests that this transition is already underway, with some significant implications for sustainability, digital infrastructure, and the long-term viability of the content ecosystem that AI systems depend on.
What Does This Mean For Your Organisation?
The picture emerging from TollBit’s data seems to be one of structural change rather than a short-term disruption, where AI systems are no longer just indexing the web or training on it in the background. In fact, it seems they are now repeatedly consuming live content at scale, with clear consequences for energy use, infrastructure efficiency, and the sustainability of the wider digital ecosystem. Without changes to how AI systems access content, the current pattern risks locking in higher energy demand and escalating inefficiencies across both AI development and online publishing.
For UK businesses, this trend has practical implications on several fronts. For example, organisations increasingly relying on AI tools for research, search, and decision support are indirectly contributing to rising digital energy use and associated emissions. At the same time, UK publishers, professional services firms, and content driven businesses face growing operational costs from defending their websites against automated access, while seeing diminishing human engagement in return. These pressures sit alongside wider regulatory and sustainability expectations, particularly as UK businesses are required to demonstrate progress on energy efficiency, emissions reporting, and responsible technology use.
For AI developers, publishers, regulators, and end users, the data shows that the current scrape and block dynamic appears inefficient, costly, and environmentally counterproductive. If AI systems are to become permanent fixtures in how information is accessed, it looks as though the underlying mechanics of content access will need to evolve in a way that supports sustainability, fair value exchange, and long-term viability. Without that recalibration, the growth of AI driven web consumption risks undermining both the digital economy it depends on and the sustainability goals many organisations are now expected to meet.
Video Update : Pinned Chats
Well, it might only be a small (new) feature, yet it’s a handy one! Being able to pin your ChatGPT chats is surprisingly helpful and once you’ve started to use this feature, you’ll wonder why it wasn’t introduced before …
[Note – To watch this video without glitches/interruptions, it may be best to download it first]
Tech Tip: Insert Screenshots Directly Into Outlook Emails
Outlook includes a built-in Screenshot tool that lets you quickly capture a window or part of your screen and insert it straight into an email, saving time and avoiding the need to use separate screenshot tools.
How to do it
Outlook:
– Open a new email message.
– Place your cursor where you want the image to appear.
– Select the Insert tab in the ribbon.
– Click Screenshot.
– Choose an open window, or select Screen Clipping to capture a specific area.
The screenshot is then inserted directly into your email, making it easier to explain issues clearly and keep conversations moving without extra attachments or back and forth.
Note: This feature is only available in the desktop version of Outlook on Windows.