Tech Insight : ‘Operator’ – Agents That Automate Web Tasks
OpenAI has introduced ‘Operator’, a new AI-powered agent designed to autonomously perform web-based tasks on behalf of users.
Just In The US For Now
Currently available as a research preview, Operator is accessible to ChatGPT Pro subscribers in the United States, with plans to expand availability in the near future. This launch signifies a major step in OpenAI’s efforts to redefine how artificial intelligence interacts with the digital world.
What is Operator?
At its core, Operator is an AI agent capable of navigating the web much like a human would. An ‘AI agent’ is essentially a software program that autonomously performs tasks or actions on behalf of a user.
Powered by OpenAI’s Computer-Using Agent (CUA) model, Operator can complete tasks such as booking travel, ordering groceries, and even creating memes. It interacts with websites via simulated mouse clicks, scrolling, and typing, mirroring how a person would operate a browser.
Unlike traditional AI integrations that rely on APIs, Operator interprets screenshots and graphical interfaces directly. This makes it adaptable to various websites, even those without specific developer tools or APIs. OpenAI CEO Sam Altman describes Operator as “an early glimpse into the future of AI agents automating our digital interactions.”
However, Operator is not perfect. OpenAI has explicitly labelled it as a “research preview”, cautioning users about potential mistakes and urging active supervision during high-stakes tasks.
How Does Operator Work?
Operator’s underlying CUA model is built on GPT-4o, OpenAI’s flagship multimodal model, combining its vision capabilities with advanced reasoning to interpret on-screen elements. Users can initiate tasks by describing them in natural language. For example:
– “Book a flight from London to Madrid for next Thursday.”
– “Order my weekly groceries from Instacart.”
– “Make a dinner reservation for two at an Italian restaurant in central London.”
Operator then uses its dedicated browser to execute the task, visible to the user via a pop-up window. It can navigate menus, fill out forms, and confirm actions. If it encounters challenges (e.g. CAPTCHAs, password fields, or a particularly complex interface) it pauses and prompts the user to intervene. Once the issue is resolved, the user can hand control back to Operator, ensuring seamless collaboration.
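To make that back-and-forth concrete, below is a deliberately simplified Python sketch of how a screenshot-driven agent loop with a human-handoff step might be structured. This is not OpenAI’s implementation; the function names, the stubbed model call, and the action types are purely illustrative assumptions.

```python
# A minimal, conceptual sketch of a screenshot-driven agent loop with a
# human-handoff step. This is NOT OpenAI's implementation; the model call
# and browser actions are stubbed placeholders purely for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "scroll", "needs_user", "done"
    detail: str = ""   # e.g. what to type, or why user help is needed

def capture_screenshot() -> bytes:
    """Placeholder: grab the agent's browser viewport as an image."""
    return b""  # a real agent would return image bytes here

def decide_next_action(screenshot: bytes, goal: str) -> Action:
    """Placeholder: a vision-capable model maps (screenshot, goal) to an action."""
    return Action(kind="done")  # stub so the sketch runs end to end

def run_task(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        action = decide_next_action(capture_screenshot(), goal)
        if action.kind == "done":
            print("Task complete:", goal)
            return
        if action.kind == "needs_user":
            # CAPTCHA, password or payment field: pause and hand control back.
            input(f"User input required ({action.detail}); press Enter to resume...")
            continue
        # Otherwise simulate the click / scroll / typing the model asked for.
        print(f"Executing {action.kind}: {action.detail}")

run_task("Book a flight from London to Madrid for next Thursday")
```

The key design point the sketch illustrates is that the agent never tries to complete sensitive steps itself; it pauses, asks the user to intervene, and then resumes.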
Operator also allows users to save frequently performed workflows as reusable tasks, which can be started with a single click. Also, it supports sharing video recordings of completed tasks, enabling users to showcase or review the agent’s actions.
Availability and Pricing
For now, Operator is a research preview that’s exclusive to ChatGPT Pro users in the United States, with the Pro plan costing $200 per month. OpenAI plans to roll out the feature to other tiers, such as Plus, Team, and Enterprise subscriptions, as well as expand its availability to users in other countries. However, Altman has noted that European expansion may face delays due to regulatory hurdles.
Safety, Privacy, and Limitations
Although software operating autonomously sounds a little risky, OpenAI has emphasised safety as a cornerstone of Operator’s design. For example, the tool includes multiple safeguards, such as user confirmations for critical actions, refusal patterns for prohibited tasks, and monitoring for suspicious activity. Operator also requires users to manually handle sensitive inputs like credit card details or passwords. In terms of privacy, OpenAI assures users that when customers take over to enter such details, those inputs are not logged or captured in screenshots.
Uses Screenshots To “See”
Screenshots, which Operator uses to “see” and interact with interfaces, are securely stored and can be deleted by the user. OpenAI says Operator retains user data for up to 90 days unless deleted earlier, thereby giving users some control over their privacy.
However, despite its impressive capabilities, Operator is limited in several key areas, such as:
– It struggles with complex or specialised tasks, such as creating detailed presentations or managing intricate calendar systems.
– High-stakes actions, such as sending emails or conducting financial transactions, are restricted in this early stage (which is perhaps just as well!).
– Usage is subject to rate limits to prevent overloading the system.
Benefits and Criticisms
Some of the key benefits of Operator could be summed up as:
– Enhanced productivity. By automating repetitive tasks, Operator frees up time for users.
– Broad applicability. Its ability to interpret GUIs makes it versatile across a wide range of websites.
– Customisation. Users can save workflows for regular use, streamlining frequent activities.
– Collaboration with businesses. Partnerships with platforms like DoorDash, Uber, and Instacart can ensure smooth operation and compliance with terms of service.
Inevitably, with something this complex that’s still in its preview stage, where it hasn’t been widely used by millions of users yet, there are some potential issues and concerns. For example:
– Reliability concerns. As a research preview, Operator may not perform flawlessly, and may require quite a bit of human oversight.
– Privacy risks. While OpenAI has implemented robust safeguards, the reliance on screenshots and data retention has raised concerns among privacy advocates.
– Accessibility. The steep $200 monthly subscription fee may prove a barrier to less affluent users and organisations with more modest budgets.
– Ethical considerations. The potential misuse of autonomous AI agents, such as for phishing scams or malicious activity, could prove to be a significant challenge.
The ‘World’ Project
Operator is not an isolated innovation. In fact, it forms part of a broader vision spearheaded by OpenAI’s Sam Altman. His ‘World’ project, formerly known as Worldcoin, aims to address the growing challenge of distinguishing humans from AI agents in digital spaces. By scanning users’ irises with a metallic orb, World creates blockchain-based digital identities, known as World IDs, to verify “proof of personhood.”
Why?
World is now exploring how to link AI agents like Operator to these digital identities. This would allow businesses and users to confirm that an agent is acting on behalf of a real person. For example, an Operator task could be tagged with a verified World ID, thereby ensuring trustworthiness in sensitive interactions such as ticket purchases or legal transactions.
Criticism of World
While the concept is ambitious, it has faced significant criticism. For example, World’s reliance on biometric data has raised privacy concerns, and the project has faced regulatory scrutiny in Europe. That said, proponents argue that linking AI agents to verified identities shows promise and could foster trust and mitigate risks in a rapidly evolving digital ecosystem.
What Does This Mean For Your Business?
OpenAI’s Operator gives a fascinating glimpse into the future of AI, where software agents can automate an increasing number of tasks on behalf of users. By leveraging its ability to interact with websites much like a human, Operator offers an innovative and adaptable approach to web-based automation. Its potential to save time, streamline processes, and improve productivity is undeniable, particularly for users and businesses willing to invest in the technology and learn to navigate its current limitations.
However, as promising as Operator may be, it is still a work in progress. As a research preview, it is not yet fully reliable, with OpenAI itself acknowledging the need for active user supervision and manual intervention in many situations. While there do appear to be safeguards in place around privacy and sensitive data handling, there is still a long way to go to address concerns about security, privacy, and ethical use. For now, Operator’s high price point and restricted availability may make it inaccessible to a broader audience, thereby limiting its immediate impact.
The larger vision behind Operator, as part of Sam Altman’s interconnected strategy with the World project, offers a glimpse into the challenges and opportunities of an AI-driven future. By linking AI agents to verified digital identities, OpenAI and World could help foster trust and transparency in a landscape increasingly populated by bots and automated systems. While the concept holds promise, it also raises significant questions about privacy, control, and the implications of such systems for individual autonomy and online interactions.
Operator is an ambitious step forward in AI innovation, but it is also a reminder of the complexities that come with introducing such transformative technologies. Its success will depend not only on its technical evolution but also on OpenAI’s ability to address the legitimate concerns surrounding its use, ensuring it becomes a tool that enhances lives rather than complicating them. As the technology matures and expands to more users, Operator could redefine how we interact with the digital world, but only if its deployment is handled responsibly, transparently, and inclusively.
Tech News : Google Combats Fake Reviews (After Investigation)
Following an extensive investigation by the UK’s Competition and Markets Authority (CMA), Google has agreed to implement significant changes to its processes for detecting and addressing fake reviews.
To Improve Transparency and Trust
This landmark development is essentially aimed at ensuring fairness for consumers and businesses in an online marketplace that’s increasingly influenced by customer reviews. It’s hoped, therefore, that the new measures will improve transparency and trust in online reviews while providing consequences for businesses and individuals engaging in dishonest practices.
What’s The Problem With Fake Reviews?
Online reviews have become a powerful tool in shaping consumer decisions, with the CMA estimating that a staggering £23 billion of UK consumer spending is influenced annually by such reviews. Research indicates that 89 per cent of consumers actually rely on online reviews when deciding on products or services. However, today’s proliferation of fake reviews threatens to undermine trust in these platforms.
The issue with fake reviews is that they can create an uneven playing field, misleading consumers into choosing substandard products or services and giving unethical businesses an unfair advantage. The problem is exacerbated by the increasing sophistication of fake review schemes, including paid reviews and bot-generated content.
Google and Amazon in the Frame
Concerns about the authenticity of reviews prompted the CMA to launch investigations into Google and Amazon back in June 2021. While Google has now reached an agreement with the CMA, the investigation into Amazon’s practices remains ongoing.
Why Was Google Under Investigation?
In the case of Google, the CMA’s investigation revealed shortcomings in its systems for detecting, removing, and preventing fake reviews. These gaps included insufficient action against suspicious patterns of behaviour and inadequate enforcement against businesses and reviewers engaged in fraudulent activity. The CMA’s scrutiny of Google centred on its compliance with consumer protection laws, particularly regarding the responsibilities of platforms hosting user-generated reviews.
Sarah Cardell, Chief Executive of the CMA, highlighted the broader implications of fake reviews, saying: “Left unchecked, fake reviews damage people’s trust and leave businesses who do the right thing at a disadvantage.”
The urgency of the issue has now led the CMA to secure legally binding commitments from Google, ensuring a more robust and transparent approach to tackling the problem.
Key Changes Google Has Agreed to Implement
In response to the CMA’s findings, Google has pledged to introduce several sweeping changes to its review system. These measures are aimed at detecting and deterring fake reviews, penalising offenders, and restoring consumer confidence in online reviews. The key undertakings agreed with the CMA by Google are:
– Enhanced detection of fake reviews. Google says it will employ more rigorous methods to identify and remove fake reviews, leveraging advanced technology and manual oversight to investigate suspicious activities. This should enable quicker and more accurate responses to fraudulent practices.
– Consequences for rogue reviewers. Individuals repeatedly posting fake or misleading reviews for UK businesses will face severe penalties. Their reviews will be deleted, and they will be banned from posting new reviews on Google, irrespective of their location.
– Sanctions for businesses engaging in fake reviews. Businesses found to be using fake reviews to inflate their star ratings will face visible warnings on their Google profiles. These alerts will inform consumers of detected suspicious activity. Additionally, businesses engaging in repeated misconduct will have all reviews removed for six months or more and will lose the ability to receive new reviews.
– Improved reporting mechanisms. Google will introduce a more robust reporting system, enabling consumers to easily report suspicious reviews or incentives offered for positive reviews. This will apply to both online and offline inducements.
– Regular oversight and reporting to the CMA. Google will report to the CMA over the next three years to ensure compliance with these commitments. This ongoing scrutiny will provide accountability and ensure that the changes are effectively implemented.
– Adaptation to evolving technology. After the three-year period, Google will have the flexibility to adapt its processes to address new challenges posed by advancements in technology, including artificial intelligence-driven fake reviews.
The Wider Implications for Businesses and Consumers
These changes could be a major step forward in the fight against fake reviews and signal Google’s commitment to trying to foster a fairer digital marketplace. As the CMA’s Sarah Cardell says, “The changes we’ve secured from Google ensure robust processes are in place, so people can have confidence in reviews and make the best possible choices. They also help to create a level-playing field for fair dealing firms.”
Consumer advocacy groups, including Which?, have welcomed the CMA’s success in securing these commitments. Rocio Concha, Director of Policy and Advocacy at Which?, also emphasised the importance of monitoring Google’s compliance, saying: “The regulator must monitor the situation closely and be prepared to use new enforcement powers… to take strong action, including issuing heavy fines, if Google fails to make improvements.”
The Broader Regulatory Context
This development comes as the UK government is trying to strengthen consumer protection laws. For example, the Digital Markets, Competition and Consumers Act 2024, which actually comes into force in April 2025, will empower the CMA to independently determine breaches of consumer law without needing court approval. This legislation also introduces the potential for fines of up to 10 per cent of a company’s global turnover for non-compliance.
Also, the CMA has collaborated with the Department for Business and Trade to explicitly ban the posting or commissioning of fake reviews. Businesses that fail to address fake reviews and hidden advertising will face penalties under these new rules.
The CMA’s work extends beyond Google. As part of its broader effort to ensure fair online practices, the regulator has issued draft guidance to help businesses comply with consumer law. This guidance will be finalised later in 2025.
Industry Response and the Road Ahead
Google has expressed its commitment to combating fake reviews. A spokesperson for the company stated, “Our longstanding investments to combat fraudulent content help us block millions of fake reviews yearly – often before they ever get published. Our work with regulators around the world, including the CMA, is part of our ongoing efforts to fight fake content and bad actors.”
These changes highlight the influence of consumer feedback in shaping marketplace dynamics. By holding businesses and reviewers accountable, the CMA’s actions, therefore, aim to restore trust in online reviews and ensure that genuine businesses are not overshadowed by dishonest competitors.
As the CMA continues its investigation into Amazon and monitors compliance across the sector, this case sets a precedent for how regulatory bodies can work with tech giants to protect consumers and promote fair competition. The changes promised by Google are not just about tackling fake reviews but are also about reinforcing the integrity of the digital marketplace.
What Does This Mean For Your Business?
Google’s commitment to tackling fake reviews, under the watchful eye of the CMA, is quite a significant step towards restoring trust and fairness in the online marketplace. For businesses, these changes could clearly help in levelling the playing field. Ethical firms that rely on genuine customer feedback may finally see their efforts shielded from the unfair advantage enjoyed by competitors using dishonest practices. By penalising those who manipulate review systems, Google and the CMA are setting a clear standard that prioritises transparency and fairness.
As an initial reaction, it’ll be interesting to see whether it’s possible to ‘black hat’ the reviews for a competitor’s business, by deliberately leaving fake reviews in the hope the business will be penalised.
For consumers, this development may be equally impactful. With nearly 90 per cent of shoppers relying on reviews when making purchasing decisions, the assurance that review platforms are working harder to weed out fraudulent content is critical. The addition of more robust detection measures, clearer warnings, and improved reporting mechanisms will empower consumers to make better-informed choices. The visibility of warnings on business profiles and the suspension of review functions for repeat offenders will also serve as valuable signals, allowing customers to avoid potentially unscrupulous businesses.
However, while the measures introduced by Google are promising, their ultimate success hinges on consistent enforcement. As Which? has pointed out, these changes must be backed by strong oversight and, where necessary, punitive measures for non-compliance. The CMA’s ongoing role in monitoring Google’s implementation of these commitments will be pivotal in ensuring that promises translate into real-world impact.
The broader implications for the online marketplace are also worth noting. The CMA’s proactive stance and collaboration with the Department for Business and Trade send a clear message that unethical behaviour will no longer be tolerated. With stronger consumer laws on the horizon, businesses will need to adopt more rigorous review policies to avoid regulatory scrutiny and potential fines. These developments could encourage the entire sector to adopt higher standards, fostering an environment where consumers and honest businesses can thrive.
Looking ahead, the digital marketplace is likely to face new challenges as technology evolves. AI, for example, has already made the creation of fake reviews more sophisticated, posing fresh hurdles for platforms like Google. However, the commitments secured by the CMA ensure that Google’s approach will remain adaptable to emerging threats, keeping pace with technological advancements.
The CMA’s intervention has, therefore, set a precedent for holding powerful tech companies accountable and ensuring that consumer interests are protected. By cracking down on fake reviews, Google’s new measures offer a pathway to rebuilding trust in online platforms. While challenges remain, this initiative signals a shift towards a more transparent and equitable digital landscape, where authenticity and fairness take centre stage. For businesses and consumers alike, these changes could (hopefully) prove transformative, reinforcing the integrity of a marketplace increasingly driven by the voice of the customer.
Tech News : John Lewis Introduces AI Verification For Online Knife Sales
John Lewis has unveiled a groundbreaking AI tool to verify the age of customers purchasing knives online, marking a shift in how retailers address legal requirements for the sale of bladed items.
According to the AI provider, their AI (which estimates the age of the user from their image) is “Better Than Human Judgement”.
Why Is John Lewis Introducing AI for Age Verification?
The decision to implement an AI-driven facial age estimation system stems from a broader effort to prevent underage access to knives amidst increasing scrutiny of age verification processes. The move forms part of the retailer’s commitment to safety and compliance with government regulations as it reintroduces online knife sales after a 15-year hiatus. It also comes against the backdrop of high-profile cases, such as tragic murders linked to underage perpetrators purchasing knives online, which have reignited debates about stricter controls on bladed items.
John Lewis stopped selling knives online in 2009 due to the difficulty of verifying buyers’ ages effectively. By 2022, the retailer went a step further, removing cutlery knives from its online catalogue. However, the retailer has now reintroduced these products, citing confidence in the efficacy of AI-powered age estimation technology to meet strict legal and ethical requirements.
As a spokesperson for John Lewis recently explained: “We take safety incredibly seriously, and in line with strict government guidelines, have added an additional layer of security when customers purchase knives online.”
How Does the AI Tool Work?
The facial age estimation technology, developed by British company Yoti, analyses a photograph of the customer’s face to determine whether they are over 18. This streamlined process occurs at the point of purchase and takes only a few seconds. Customers are prompted to enable their device’s camera and position their face within a frame on the screen, akin to using a passport photo booth.
The AI system then estimates the individual’s age and immediately deletes the image once verification is complete. If the system determines the customer is over 18, they can proceed to checkout. For those who do not pass this initial check, an alternative verification method is available, allowing customers to upload a photo of their ID and a selfie to confirm their identity. Accepted forms of ID include passports, driving licences, and other official identification cards.
In addition to this online verification, a second layer of age checking occurs at delivery. For example, Royal Mail or DPD couriers require customers to present valid photo identification, such as a passport or driving licence, before handing over the parcel. If the recipient cannot provide proof of age, the item is returned to John Lewis, and a refund is issued.
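For illustration only, the Python sketch below shows how a checkout-time age check with an ID-upload fallback might be wired together. It is not Yoti’s or John Lewis’s actual code; the helper names, the stubbed estimator, and the age buffer are assumptions made purely for the example.

```python
# A simplified, hypothetical sketch of the checkout-time age check described
# above. The estimator and ID check are stubs, and the threshold/buffer
# values are illustrative only, not Yoti's or John Lewis's actual policy.
from typing import Optional

def estimate_age_from_selfie(image: bytes) -> float:
    """Placeholder for a facial age estimation model."""
    return 25.0  # stub value so the sketch runs

def verify_with_id_document(id_photo: bytes, selfie: bytes) -> bool:
    """Placeholder fallback: match an uploaded ID document against a selfie."""
    return True  # stub

def can_proceed_to_checkout(selfie: bytes, id_photo: Optional[bytes] = None) -> bool:
    estimated_age = estimate_age_from_selfie(selfie)
    # The selfie is discarded immediately after estimation (nothing is stored).
    # Require a margin above 18 so borderline estimates fall back to an ID check.
    if estimated_age >= 18 + 4:  # illustrative buffer only
        return True
    if id_photo is not None:
        return verify_with_id_document(id_photo, selfie)
    return False  # customer is prompted to upload ID and retry

print(can_proceed_to_checkout(b"fake-image-bytes"))
```

The buffer above reflects a common design choice in age estimation systems: anyone whose estimated age falls too close to the legal threshold is pushed to a stricter document check rather than waved through.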
What Technology Powers the Tool?
Yoti’s AI age estimation system relies on advanced machine learning algorithms trained on millions of images paired with verified ages. The technology does not rely on facial recognition, meaning it does not match the scanned face to a database of images or identities. Instead, it estimates age based on facial characteristics and deletes the image immediately after processing.
Better Than Human Judgement, Says Yoti
Yoti claims the system offers superior accuracy compared to human judgment. For example, for individuals aged 13–24, the tool estimates age within a margin of 1.3 years. The tool’s accuracy rate for correctly identifying 13–17-year-olds as under 18 is an impressive 99.3 per cent, with negligible variance across different skin tones, according to a 2023 white paper. The system also incorporates anti-spoofing technology to prevent attempts to bypass the check using photos, masks, or deepfake videos.
The Benefits of the System
The reintroduction of online knife sales by John Lewis demonstrates the potential of AI to address regulatory challenges while improving customer convenience. For the retailer, the technology enables compliance with laws requiring age verification at the point of sale and delivery.
The integration of this technology is expected to reduce the administrative burden associated with manual ID checks while offering customers a seamless and fast checkout process. Also, the system helps protect public safety by reducing the risk of knives falling into the hands of minors.
Commander Stephen Clayman of the National Police Chiefs’ Council was recently quoted (in The Times) praising the initiative, saying: “We welcome technology which can help to ensure knives do not end up in the wrong hands. Responsible retailing is a key element in this, and innovations like this are a step in the right direction.”
Privacy-Focused
One other key compliance benefit of the tool is that it’s also privacy-focused, as no images or personal data are stored, shared, or used for further training. This ensures compliance with data protection regulations and alleviates concerns about surveillance.
Challenges and Criticisms
Despite its benefits, the system is not without its challenges. One concern is the tool’s reliance on accurate camera functionality, which may exclude customers who lack access to modern devices or are unfamiliar with using such technology. Customers experiencing technical issues may find the process cumbersome, particularly if they need to switch to the manual ID verification method.
Another issue lies in potential inaccuracies. While the system boasts a high degree of accuracy, its effectiveness diminishes slightly for edge cases, e.g. individuals who appear significantly older or younger than their actual age. Critics have also pointed out that, although rare, the slight variation in accuracy across skin tones highlights an area where further refinement is needed.
Also, broader societal concerns remain about over-reliance on AI in public-facing applications. Privacy advocates, for example, have cautioned against the widespread adoption of AI for age verification, arguing that such systems, while anonymised, may normalise invasive technologies.
A Retail Trend?
It should be noted here that John Lewis is not alone in adopting AI for age verification. For example, Yoti’s technology is already used by social media platforms, alcohol retailers, and other businesses requiring age-restricted transactions. The wider adoption of AI age estimation tools could represent a turning point in retail, enabling businesses to meet regulatory demands while enhancing customer experience.
With the UK government considering stricter regulations on knife sales, including potential requirements for multiple forms of ID, John Lewis’ proactive use of technology may set a precedent for other retailers. As the national conversation around knife crime continues, innovations like this highlight the role of technology in tackling complex societal challenges.
By blending cutting-edge AI with robust checks and balances, John Lewis may have found a way to navigate a path forward in a contentious area of retail, but the journey is far from over. How other retailers respond, and whether customers embrace or resist this technological shift, remains to be seen.
What Does This Mean For Your Business?
By integrating advanced facial age estimation technology into its operations, the retailer has taken a proactive, technology-led approach to tackling what has been, up until now, a complex issue. This initiative has allowed John Lewis to re-enter the online knife market after years of hiatus, balancing customer convenience with security and showcasing the transformative potential of AI in retail.
However, as with any technological innovation, the implementation of such systems raises broader questions. While the facial age estimation tool offers a streamlined and privacy-focused solution, it is not without limitations. Issues such as accessibility for those without modern devices, potential inaccuracies at the margins of the system’s age-detection capabilities, and ongoing concerns about the normalisation of AI in public-facing applications highlight areas for further development and debate.
The integration of a secondary verification step, requiring proof of age upon delivery, ensures an additional layer of security. This dual-layered system strengthens compliance and demonstrates John Lewis’ commitment to responsible retailing. At the same time, it underscores the importance of redundancy in technological systems to account for potential failures or inaccuracies in AI processes.
While this initiative could position John Lewis as a leader in leveraging AI for compliance, it may also signal the beginning of a broader trend within the retail sector. As more businesses explore AI-based solutions for age-restricted sales, a wider conversation about the ethical, practical, and societal implications of these technologies is inevitable. The delicate balance between leveraging innovation for efficiency and ensuring equitable access and fairness will be crucial for widespread acceptance.
John Lewis’ adoption of AI age verification could offer a glimpse into the future of retail. It demonstrates how technology can address pressing regulatory and societal challenges, albeit with some caveats. Whether this approach becomes an industry standard or prompts further refinements in the application of AI remains to be seen, but what is clear is that this marks an important moment in the ongoing evolution of responsible retail practices. For now, John Lewis can say it has set a benchmark, but the effectiveness and reception of this technology will ultimately shape its long-term role in retail. No doubt other retailers will be watching with interest.
Company Check – LinkedIn : Allegations Of Using Private Messages To Train AI
LinkedIn, the professional networking giant owned by Microsoft, is under fire as a new lawsuit alleges the platform disclosed the private messages of its Premium customers to train generative AI models without consent.
The lawsuit, filed in California on behalf of Alessandro De La Torre and millions of other Premium subscribers, accuses LinkedIn of breaching contractual promises and violating US privacy laws.
The controversy centres on LinkedIn’s policy changes in 2024, which allowed user data to be used for AI training purposes. While LinkedIn exempted users in countries with stringent privacy regulations (e.g. the UK, EU, and Canada) from this practice, US users were automatically enrolled in the data-sharing programme unless they manually opted out. Crucially, the lawsuit alleges that LinkedIn extended this data-sharing to include the contents of private InMail messages, which often contain sensitive personal and professional information.
The lawsuit highlights the potential implications for users, stating that these private messages could include “life-altering information about employment, intellectual property, compensation, and other personal matters.” This, the plaintiff argues, breaches the LinkedIn Subscription Agreement (LSA), which explicitly assures Premium customers that their confidential information will not be disclosed to third parties. The complaint also points out that LinkedIn’s alleged failure to notify customers of these changes undermines user trust and constitutes a breach of the US Stored Communications Act.
LinkedIn has denied the allegations, labelling them as “false claims with no merit.” However, for many, the platform’s response to the privacy concerns raised last year casts a shadow over its denials. For example, in August 2024, LinkedIn introduced a setting allowing users to opt out of data-sharing for AI training, but this was enabled by default, raising questions about informed consent. Also, the platform discreetly updated its privacy policy in September 2024 to include the use of user data for AI training, with a notable caveat: opting out would not affect data already used to train models.
Some legal commentators have noted that this case could set a significant precedent for how social media platforms and tech companies handle user data in the age of AI. For example, as the plaintiff’s attorney, Rafey Balabanian, says: “This lawsuit underscores a growing tension between innovation and privacy,” and that “LinkedIn’s actions, if proven, represent a serious breach of trust, particularly given the sensitive nature of the information involved.”
The potential fallout for LinkedIn could extend beyond the courtroom. Premium customers, who pay up to $169.99 per month for features like InMail messaging and enhanced privacy, may, for example, choose to reconsider their subscriptions if these allegations prove true. Also, the case draws attention to the broader issue of how companies disclose and manage data for AI development, a concern that has already prompted regulatory scrutiny in regions like the UK and EU. Notably, the UK Information Commissioner’s Office (ICO) had earlier pressed LinkedIn to halt the use of UK user data for AI training, to which LinkedIn had agreed.
For users, this lawsuit serves as a reminder of the need to scrutinise privacy settings and policies. If successful, the plaintiffs seek damages, statutory penalties of $1,000 per affected user, and the deletion of any AI models trained using their data. With LinkedIn potentially facing financial and reputational damage, this case could act as a catalyst for greater transparency and accountability in the tech industry. Whether LinkedIn’s alleged actions were an oversight or a deliberate strategy to accelerate AI innovation, the coming months will undoubtedly shape the future of user privacy in the digital age.
Security Stop Press : Record-breaking DDoS Attack Highlights Growing Cybersecurity Threats
Cloudflare’s latest DDoS Threat Report for Q4 2024 highlights a dramatic surge in Distributed Denial of Service (DDoS) attacks, including a record-breaking 5.6 Tbps assault.
The web security and infrastructure company’s report reveals a 53 per cent year-over-year rise in DDoS activity, with Cloudflare blocking 21.3 million attacks in 2024, 6.9 million of which occurred in Q4, a staggering 83 per cent increase from the same period in 2023!
The largest attack, a 5.6 Tbps assault by a Mirai-variant botnet of over 13,000 IoT devices, targeted an ISP in Eastern Asia. Cloudflare says it mitigated it autonomously within seconds, preventing any disruption. Hyper-volumetric attacks exceeding 1 Tbps grew by 1,885 per cent quarter-over-quarter, reflecting the increasing scale and intensity of these threats. Nearly half of all attacks targeted OSI Layers 3 and 4, with the remainder focused on HTTP-based attacks, predominantly launched by botnets exploiting IoT devices.
Cloudflare’s report also highlighted how emerging attack methods like Memcached and BitTorrent DDoS vectors have seen dramatic growth, and ransom-motivated attacks surged by 78 per cent compared to Q3. The report also identifies telecommunications and marketing as the most attacked industries, with China, the Philippines, and Taiwan being key hotspots. Cloudflare says those responsible for the attacks include competitors, state-sponsored groups, and disgruntled users, highlighting diverse motives behind these incidents.
To counter these growing threats, businesses should deploy always-on, automated DDoS protection, secure all connected devices, and adopt proactive defence strategies. With attacks becoming faster and more sophisticated, real-time mitigation and robust security are critical to minimising risk.
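As a very small illustration of the “always-on, automated” principle, the Python sketch below implements a basic token-bucket rate limiter of the kind used to throttle excessive requests at the application layer. Real DDoS mitigation of the scale described in Cloudflare’s report happens at a provider’s network edge, not in application code; this is only a conceptual building block, with illustrative parameter values.

```python
# A minimal token-bucket rate limiter, one illustrative building block of
# application-layer (Layer 7) flood protection. It is not a substitute for
# edge-level, always-on DDoS mitigation of the kind described above.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens added per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last_refill = time.monotonic()

    def allow_request(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop or challenge the request

bucket = TokenBucket(rate_per_sec=10, burst=20)
# Roughly the first 20 requests in a sudden burst are allowed; the rest are rejected.
print([bucket.allow_request() for _ in range(25)].count(True))
```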
Sustainability-in-Tech : Tiny Flying Robot Pollinators
Scientists at the Massachusetts Institute of Technology (MIT) have unveiled a new generation of tiny insect-inspired flying robots that could revolutionise agriculture by offering a mechanical alternative to natural pollinators.
The Vision Behind the Robotic Pollinators
Pollination is one of the most critical processes in food production, yet the decline in bee populations due to habitat loss, pesticides, and climate change poses a growing threat to global agriculture. Enter the robotic insect, a tiny flying marvel designed to fill the gap left by natural pollinators. Developed by a team led by Associate Professor Kevin Chen, head of MIT’s Soft and Micro Robotics Laboratory, these robots could “swarm out of mechanical hives” to pollinate plants with precision.
“With the improved lifespan and precision of this robot, we are getting closer to some very exciting applications, like assisted pollination,” Chen explains. His team’s latest innovation, showcased in Science Robotics, represents a significant leap forward in terms of flight performance and potential practical applications.
What Are These Robots?
The robots, weighing less than a paperclip, are designed to mimic the flight patterns of insects such as bees. Each robot features four units equipped with flapping wings powered by artificial muscles. These soft actuators are constructed from layers of elastomer (a flexible, rubber-like material that can stretch and return to its original shape) sandwiched between carbon nanotube electrodes, allowing the wings to beat at high frequencies.
What sets this newest version apart from earlier efforts is its durability and efficiency. For example, the previous models could only fly for about 10 seconds before succumbing to mechanical strain, whereas the revamped version can hover for over 1,000 seconds (nearly 17 minutes) without degrading its performance. This remarkable improvement stems from a complete overhaul of the robot’s wing and transmission design.
A New Standard in Robotic Agility and Precision
The latest version of the robot bug is not just durable, it’s also highly agile. For example, it can perform complex acrobatic manoeuvres, such as double aerial flips and body rolls, and trace specific flight paths with remarkable precision. The scientists have even been able to make a swarm of the robot bugs spell out “M-I-T” mid-flight (rather like drone displays). These capabilities are underpinned by advanced control systems and a redesigned wing structure that reduces mechanical stress.
For example, as explained by Chen: “Compared to the old robot, we can now generate control torque three times larger than before, which is why we can do very sophisticated and very accurate path-finding flights.”
The new design also addresses a common issue in robotic insects, i.e. lift efficiency. By positioning the wings to avoid interference from one another, the researchers have managed to maximise their lift force, thereby allowing for faster and more stable flight.
Why Is This Development Important?
The implications of these advancements could be far-reaching. Artificial pollination could become a practical solution in vertical farming, a growing industry focused on producing food in stacked indoor environments. For example, produce such as leafy greens (like lettuce and spinach), herbs (such as basil and mint), strawberries, tomatoes, and microgreens are commonly grown in vertical farms. These high-tech farms, often located in urban areas, aim to reduce agriculture’s environmental footprint by using less land and water while eliminating the need for chemical pesticides.
As the researchers point out: “Farmers in the future could grow fruits and vegetables inside multilevel warehouses, boosting yields while mitigating some of agriculture’s harmful impacts on the environment.” Robotic pollinators may also help maintain some crop yields in areas where natural pollinators are scarce or absent (albeit on a much smaller scale than our natural pollinators).
Beyond agriculture, and perhaps more realistically, the robots could be used in tasks such as inspecting hard-to-reach areas in machinery or infrastructure. Their ability to navigate tight spaces and perform precise movements makes them ideal for jobs that are hazardous or impossible for humans.
Limitations and Challenges
While the robots’ capabilities are impressive, there are significant hurdles to overcome before they can be deployed outside the laboratory. Currently, the robots rely on external power sources and control systems, as their size makes it difficult to integrate onboard batteries and sensors. Miniaturising these components remains a priority for Chen’s team, who aim to create fully autonomous flying robots within the next three to five years.
Another challenge lies in replicating the sophisticated muscle control of real insects. Bees, for example, can adjust their wing movements with incredible precision, allowing them to navigate complex environments with ease. While the MIT robots have made strides in this area, they still fall short of matching the natural agility and adaptability of their biological counterparts.
The introduction of robotic pollinators raises significant ethical and environmental questions. Critics caution that prioritising the development of these artificial systems risks diverting attention and resources from safeguarding the intricate network of natural pollinators that already exists. This system, composed of bees, butterflies, birds, and countless other species, functions seamlessly on a global scale, providing pollination services that are sustainable, efficient, and free. Attempting to replicate such a complex and self-sustaining mechanism with robots not only seems far-fetched but also highlights the irreplaceable value of the natural world. Instead of relying on technological substitutes, there is a growing call to double down on efforts to restore and maintain the habitats and populations of these vital creatures, ensuring the resilience of ecosystems and food systems for generations to come.
Also, even if these robots could conceivably be produced at scale, widespread deployment of robotic insects could have unforeseen ecological consequences, e.g., disrupting existing ecosystems or creating new dependencies on artificial technologies.
The Road Ahead
Despite these challenges, the potential benefits of robotic pollinators are undeniable. The MIT team is already planning the next phase of development, which includes extending flight durations to over 10,000 seconds and improving the robots’ ability to land and take off from flowers. They are also exploring ways to incorporate sensors and computing capabilities, which would enable the robots to navigate and operate autonomously in outdoor environments.
“This new robot platform is a major result from our group and leads to many exciting directions,” says Chen. “For example, incorporating sensors, batteries, and computing capabilities on this robot will be a central focus in the next three to five years.”
A New Frontier in Sustainable Agriculture?
As the world grapples with the twin challenges of feeding a growing population and preserving biodiversity, innovations like MIT’s robotic pollinators offer a glimpse of a more sustainable future. While they are unlikely to replace natural pollinators entirely, these tiny flying machines could play a crucial supporting role in modern agriculture, particularly in controlled environments like vertical farms.
For now, the dream of swarms of robotic insects buzzing through greenhouses and fields remains just that, i.e. a dream. But with continued research and development, these miniature marvels could soon become an integral part of the agricultural landscape, helping to secure food supplies while reducing environmental impact.
What Does This Mean For Your Organisation?
The development of insect-inspired robotic pollinators by MIT is undeniably a remarkable feat of engineering and a testament to human ingenuity. These tiny flying machines demonstrate the power of technology to address some of the challenges posed by a changing world, particularly the growing threats to natural pollinator populations. With their improved agility, durability, and precision, these robots could open up possibilities for innovation in agriculture, infrastructure inspection, and beyond. However, their role as a potential substitute for nature’s intricate systems invites both excitement and caution.
While the robots could potentially aid in controlled environments like vertical farms or in regions where pollinator populations are critically low, it is crucial to acknowledge their limitations. At present, these robots remain highly experimental, reliant on external power sources and laboratory settings. Even with future advancements, the idea of deploying robotic swarms as a comprehensive replacement for the natural pollination system remains, at best, an extraordinary technical and ecological challenge. Natural pollinators, such as bees and butterflies, represent an intricate balance of biological and environmental systems that has evolved over millennia. Their efficiency, adaptability, and symbiotic relationship with ecosystems are unmatched by any human-made device.
Also, the ethical and environmental implications of relying on robotic pollinators cannot be ignored. For example, opting for technological solutions risks sidelining critical efforts to restore and preserve natural habitats, which are vital not only for pollinators but for the biodiversity and ecosystems that underpin life on Earth. Investing in the conservation of bees, butterflies, and other pollinating species is not merely an ecological responsibility but a pragmatic strategy to ensure the sustainability of agriculture and food production for the long term.
This is not to say that robotic pollinators lack value. Their potential to complement natural systems, most likely in niche or controlled environments, could prove invaluable. For example, in vertical farming, where natural pollinators cannot operate, these robots could contribute to sustainable urban agriculture. Similarly, their ability to perform precise, controlled manoeuvres in hazardous or inaccessible locations might unlock applications beyond pollination, such as infrastructure inspection and disaster response.
However, the broader focus should remain on addressing the root causes of pollinator decline, i.e., pesticide usage, habitat destruction, and climate change. These systemic issues require global collaboration, robust policy frameworks, and widespread public engagement. The preservation of natural pollinators and their habitats should be a central pillar of sustainability efforts, with technology serving as a complementary tool rather than a wholesale replacement.
The advancements in robotic pollinators are a powerful demonstration of human creativity and problem-solving. They offer promising opportunities in specific scenarios, but they should not distract from the urgent need to protect and restore the ecosystems that sustain natural pollinators. By balancing innovation with conservation, we can work towards a future where technology supports, rather than substitutes, the natural processes that are essential to life on Earth.