Video Update: Using Copilot in Excel
Have you used Copilot within an Excel file yet? If not, here’s a quick video on how to get to grips with Copilot directly within an Excel file, and you’ll soon see that the possibilities are endless…
[Note – to watch this video without glitches or interruptions, it may be best to download it first]
Tech Tip – Turbocharge Windows Search
Want to find your files in seconds? Get instant access to your Windows files, documents, and apps by enabling Enhanced Search Indexing. Here’s how:
For Windows 11:
– Go to Settings > Privacy & security > Searching Windows.
– Select the “Enhanced” option under “Find my files”.
For Windows 10:
– Go to Settings > Search > Searching Windows.
– Click on “Classic” and select “Enhanced” to enable Enhanced indexing.
Customising Search Locations:
To refine your search results and focus on the files and folders that matter most to you:
– Go to Settings > Privacy & security > Searching Windows (Windows 11) or Settings > Search > Searching Windows (Windows 10).
– Click on “Customise search locations” or “Advanced indexer settings”.
– Click “Modify” to add or remove indexed folders.
Employers Choose AI Over Gen Z
A new British Standards Institution report says managers are increasingly substituting AI for junior roles, reshaping early careers and raising concerns for the UK labour market.
The Study and Report
The analysis comes from the British Standards Institution’s new insight report, ‘Evolving Together: AI, Automation and Building the Skilled Workforce of the Future’. It surveyed more than 850 business leaders across eight countries, including the UK, and used AI tools to review 123 company annual reports to see how often themes such as automation, upskilling, and training appeared. The study set out to understand how employers are using AI, which roles are being affected, and what this means for workforce development and future talent pipelines.
What Employers Are Doing
The key finding of the report appears to be that employers are now actively testing AI before employing people. The report says that nearly a third of business leaders said their organisation explores an AI solution before considering a human hire. Two in five said AI is already helping them reduce their headcount, while a similar number reported that entry-level roles had already been reduced or cut as AI took on research and administrative work. Looking ahead, 43 per cent said they expect further reductions in junior roles over the next year. In the UK, 38 per cent of leaders expect to cut junior positions, and three quarters said AI is already helping reduce headcount.
The language used in company annual reports appears to tell a similar story. For example, the term “automation” appeared nearly seven times more often than “upskilling”, “training”, or “education”, suggesting that businesses are prioritising cost reduction and efficiency over long-term workforce investment. Over half of those surveyed also said the benefits of implementing AI outweigh the disruption to jobs.
Why?
It seems that employers are framing AI as a route to productivity and competitiveness. For example, 61 per cent cited productivity and efficiency as a main reason for investing in AI, 49 per cent pointed to cost reduction, and 43 per cent said AI helps fill skills gaps. However, the BSI report notes that competitive pressure may be driving these decisions as much as actual evidence of success. Many businesses are keen not to appear behind their rivals, even if financial results are uncertain.
What It Means For Gen Z And Early Careers
For younger workers entering the job market, it looks as though the picture is becoming more challenging. Adzuna data shows that UK entry-level vacancies have fallen by about a third since late 2022, with such roles now representing a smaller share of all job postings. Also, Indeed has reported a one-third year-on-year fall in graduate listings, marking the toughest market since 2018. The BSI study captures the employer side of this trend, where a quarter of bosses believe all or most entry-level tasks could now be handled by AI.
BSI’s leaders warn about the long-term cost of this approach. “AI represents an enormous opportunity for businesses globally, but as they chase greater productivity and efficiency, we must not lose sight of the fact that it is ultimately people who power progress,” said Susan Taylor Martin, chief executive of BSI. She called for long-term workforce investment alongside AI spending. Kate Field, BSI’s global head of human and social sustainability, added that prioritising short-term productivity over early-career development risks weakening the skills pipeline and deepening generational inequality.
Signals From The Labour Market
The UK labour market itself has cooled through the summer. Official figures show unemployment at 4.7 per cent between May and July, a four-year high. Economists caution against linking this entirely to AI adoption, although the technology is clearly reshaping entry-level hiring.
International bodies are also monitoring exposure. For example, the International Monetary Fund estimates around 60 per cent of jobs in advanced economies could be affected by AI, with roughly half of these potentially seeing lower demand for human labour. The Organisation for Economic Co-operation and Development (OECD) has also found that about a third of vacancies are in occupations highly exposed to AI, with the UK near the top of that range. These findings support the idea that early-career, white-collar roles are among the most vulnerable to rapid automation.
Implications For Employers And Businesses
For companies, the short-term benefits are obvious. For example, AI can automate repetitive tasks, consolidate workflows, and reduce costs in areas such as administration, research, and reporting. However, the medium-term risk is quite significant. If firms eliminate entry-level positions faster than they develop new skills, they could face shortages of experienced managers and specialists later on. BSI’s analysis shows that larger companies are moving faster on headcount reduction than small and medium-sized enterprises (SMEs), but they are also more likely to have a formal AI learning and development programme. That leaves SMEs in a difficult position, potentially expected to train the next generation of workers while competing for scarce talent.
What About ROI?
Return on investment is another area of uncertainty. For example, IBM’s 2025 CEO Study reported that only a quarter of AI initiatives had actually delivered expected results in recent years, and an MIT-linked study this summer found that most enterprise generative AI projects produced no measurable effect on profit or efficiency. An EY survey of nearly a thousand large companies reached similar conclusions, finding that many experienced early financial losses due to compliance issues, inaccurate outputs, and operational disruption. These findings suggest that while firms are enthusiastic about AI, many are still learning how to achieve any real value from it.
Employees And The Economy
For workers, especially Gen Z, the decline in entry-level roles reduces opportunities to gain essential experience. That has implications for career progression, pay growth, and social mobility. The BSI findings also highlight sentiment among managers, more than half of whom said they feel lucky to have started their careers before AI became widespread. This fuels perceptions among younger people that they face a more precarious employment landscape. The Trades Union Congress has also reported that half of UK adults worry AI could alter or take their job, underlining growing anxiety around the technology’s impact on employment.
At the wider economic level, a balanced transition is crucial. For example, international studies suggest that AI can raise productivity if it’s paired with investment in human skills. The OECD links high AI exposure with rising demand for management, social, and digital capabilities, while the IMF stresses that policy and employer choices will determine whether AI adoption produces better jobs or simply less work. It should be noted that the direction is not inevitable, but depends on how businesses and governments respond.
Other Stakeholders
For AI providers, the BSI data signals strong short-term demand for automation tools, especially those aimed at streamlining office-based and knowledge roles. It also points to increasing scrutiny. Employers are demanding clearer evidence of ROI, and policymakers are watching workforce impacts closely. Some commentators, for example, are warning about inflated AI valuations, and the IMF has highlighted the risk of market concentration among a few large AI firms. For educators and training providers, the opportunity is equally clear. If businesses are automating junior roles, then building AI literacy and human-centred skills such as creativity, empathy, and collaboration into education and early careers becomes increasingly essential.
Challenges And Criticisms
Taking a step back, three key issues appear to stand out from all this:
1. An over-reliance on automation without parallel investment in upskilling risks hollowing out future leadership pipelines. The imbalance in corporate language, where automation dominates over training, suggests short-termism.
2. ROI from AI remains inconsistent. For example, surveys from IBM, MIT, and EY show that many organisations either struggle to capture financial gains or face early project losses, raising doubts about the business case for replacing human development with automation.
3. There is now a widening gap between large and small employers in their ability to offer AI-related training. That leaves SMEs carrying much of the responsibility for developing Gen Z talent while lacking the same resources as bigger corporations.
BSI’s leaders emphasise that an AI-enabled workforce still needs to be developed. The report concludes that “the future belongs to skills that machines can’t replicate—for example, creativity, empathy, and collaboration.” Businesses, it says, must evolve to nurture these human strengths alongside technical literacy if they want to remain competitive and sustainable.
Looking Ahead
Looking ahead, hiring trends at the entry level are likely to be the key measure. Job-board data through 2025 already shows fewer openings in several professional fields even as AI-related roles expand. Policy direction will also be crucial. The British Standards Institution and other regulators are expected to continue shaping frameworks for responsible AI adoption. Measuring productivity outcomes and workforce investment side by side will determine whether this phase of AI-driven restructuring delivers lasting value, or leaves a generation behind.
What Does This Mean For Your Business?
The findings in the report suggest that the next stage of AI adoption will test how well businesses balance efficiency with long-term workforce stability. Employers that continue cutting entry-level positions without replacing them with structured learning or graduate pathways could soon face internal skills gaps that limit growth. For UK businesses, this raises a strategic question about sustainability. For example, automation can reduce costs, but without a consistent flow of skilled recruits, firms may find themselves competing for an ever-smaller pool of experienced professionals, pushing up wages and weakening future competitiveness.
There are also wider economic implications to consider. A reduction in entry-level hiring may suppress social mobility and delay young workers’ transition into full employment, which in turn affects consumer spending and tax revenues. Economists have warned that productivity gains from AI will only materialise if human capital keeps pace with technology. For policymakers, the challenge will be encouraging responsible innovation while safeguarding the foundations of the labour market. The BSI’s call for long-term thinking reflects growing concern that the UK’s current AI strategy must be paired with investment in training and skills if the benefits are to be shared across society.
For AI companies, the trend creates both opportunity and risk. Demand for automation is strong, but expectations are rising. Businesses are beginning to scrutinise outcomes more closely and may demand clearer, measurable returns. Providers that can demonstrate reliability, data security, and real efficiency improvements will be best placed to maintain momentum once early enthusiasm fades. Education and training providers also stand to gain if they can help bridge the gap between technical capability and human development, ensuring that younger workers can work effectively with, rather than against, AI systems.
Beyond the headline story here, the more rounded message emerging from the BSI’s report is that the path forward cannot rely solely on automation. Businesses, governments, and educators will need to work together to build a future workforce that complements AI rather than competes with it. Without that alignment, the short-term pursuit of productivity could come at the long-term expense of capability, resilience, and opportunity.
Support Ends But Hundreds of Millions Still on Windows 10
Hundreds of millions of computers are still running Windows 10 as Microsoft ends support on 14 October 2025, raising major concerns about cost, security, and the scale of the global upgrade still to come.
The Countdown to End of Support
Microsoft has confirmed that Windows 10 will reach the end of support on 14 October 2025. From that date, devices will still operate normally, but they will stop receiving free security and feature updates. Microsoft warns that “without continued software and security updates, your PC will be at a greater risk for viruses and malware.”
The change also affects Microsoft 365 applications, which will no longer be supported on Windows 10 after the same date. Security updates for Microsoft 365 will continue for three years, until 10 October 2028, to allow users time to transition safely.
The Scale
Despite years of preparation time, Windows 10 remains installed on a huge number of computers. For example, recent figures from StatCounter show it still powers around 40 per cent of all Windows PCs worldwide. Given that Microsoft previously said there were more than 1.4 billion active Windows devices globally, that leaves hundreds of millions still running Windows 10.
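As a rough sanity check on those figures (taking StatCounter’s roughly 40 per cent share and Microsoft’s “more than 1.4 billion” devices at face value, both approximations rather than exact counts), the arithmetic behind “hundreds of millions” is straightforward:

```python
# Rough estimate of Windows 10 devices still in use, based on the
# figures quoted above (both inputs are approximations, not exact counts).
total_windows_devices = 1_400_000_000   # Microsoft's "more than 1.4 billion"
windows10_share = 0.40                  # StatCounter's ~40 per cent

windows10_devices = int(total_windows_devices * windows10_share)
print(f"Estimated Windows 10 devices: {windows10_devices:,}")
# Roughly 560 million, i.e. comfortably "hundreds of millions"
```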
In the UK alone, consumer group Which? estimates that roughly 21 million people still own or use a Windows 10 computer. A September 2025 survey found that 26 per cent of these users plan to keep using it after support ends, even though that will leave their systems exposed to security risks and scams.
Businesses Still Rely on Windows 10
The same pattern is visible in the business sector. For example, industry analysts estimate that more than half a billion corporate PCs worldwide still run Windows 10, and around half of those will not be upgraded in time for the deadline.
A major reason appears to be hardware compatibility. For example, around one in five of these business systems reportedly fails to meet Windows 11’s stricter requirements, which include a modern CPU, Secure Boot, and a Trusted Platform Module (TPM) 2.0. Many older but still reliable machines simply do not qualify.
Why So Many Haven’t Upgraded
Three main factors appear to explain the slow migration:
– Hardware requirements have left a large portion of older PCs stranded (as mentioned earlier). Windows 11 requires at least a compatible 64-bit processor, 4 GB of RAM, 64 GB of storage, UEFI with Secure Boot, and TPM 2.0. Even a capable Windows 10 machine from just a few years ago might fail one or more of these checks.
– Cost pressures have delayed hardware refreshes. For example, PC makers like Dell, Lenovo and HP have all reported slower enterprise replacement cycles in 2025 as buyers prioritise other investments. Some organisations have budgeted to pay for extended support instead of immediate upgrades.
– Upgrade complexity is a factor. For businesses, migration involves application testing, driver checks, and user training. For households and small firms, it often requires confidence and time they may not have.
What Happens After October?
From 15 October 2025 onwards, unsupported Windows 10 systems will receive no further free patches or fixes. This means that any new security vulnerabilities discovered after that date will remain open, creating a growing risk window. Cyber security specialists warn that unpatched operating systems are a prime target for ransomware and data-theft attacks.
Microsoft stresses that users can continue running Windows 10, but without updates, the risks will increase over time. Applications that depend on newer Windows features may also stop working correctly.
Paying for Extra Time
To bridge the gap, Microsoft is offering an Extended Security Updates (ESU) programme. For business customers, the cost is 61 US dollars per device for the first year, doubling each year after that to 122 dollars and then 244 dollars. The escalating cost is designed to encourage organisations to move to Windows 11 rather than rely indefinitely on paid protection.
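For budgeting purposes, the doubling structure described above is easy to model. A quick sketch of the per-device business ESU cost over the three-year programme:

```python
# Business ESU pricing: $61 per device in year one, doubling each year.
BASE_PRICE_USD = 61

def esu_cost(year: int) -> int:
    """Per-device cost for a given ESU year (1-3), doubling annually."""
    if not 1 <= year <= 3:
        raise ValueError("The business ESU programme runs for three years")
    return BASE_PRICE_USD * 2 ** (year - 1)

yearly = [esu_cost(y) for y in (1, 2, 3)]
print(yearly)          # [61, 122, 244]
print(sum(yearly))     # 427 in total per device over the three years
```

So an organisation keeping 1,000 devices on Windows 10 for the full three years would pay around 427,000 dollars in ESU fees alone, which illustrates why the escalating price is intended as a nudge towards Windows 11 rather than a long-term option.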
For home users, Microsoft has made ESU more accessible. For example, consumers can enrol for one year of updates in three ways:
1. Free of charge if the PC is linked to a Microsoft account.
2. By redeeming Microsoft Rewards points.
3. By paying a one-time fee of 30 US dollars. Each licence covers up to ten devices.
Within the European Economic Area, Microsoft has agreed to offer a year of ESU without requiring additional sign-ups or conditions, following pressure from digital rights and consumer advocacy groups.
Upgrading to Windows 11 for Free
If the hardware is eligible, upgrading to Windows 11 is free. Users can check by opening Settings > Update & Security > Windows Update and selecting Check for updates. Systems running Windows 10 version 22H2 and meeting minimum specifications can install Windows 11 directly through Windows Update.
Microsoft lists the minimum requirements as:
– A compatible 64-bit CPU
– 4 GB RAM and 64 GB storage
– TPM 2.0 enabled
– Secure Boot capable
– A DirectX 12-compatible graphics card and a display larger than 9 inches with 720p resolution
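That checklist lends itself to a simple eligibility test. The sketch below is illustrative only (the `Device` structure and its field names are invented for this example; Microsoft’s PC Health Check app performs the authoritative check against firmware and the OS):

```python
from dataclasses import dataclass

@dataclass
class Device:
    # Illustrative fields only - a real check would read these values
    # from firmware and the OS, as the PC Health Check app does.
    cpu_64bit: bool
    ram_gb: int
    storage_gb: int
    tpm_version: float
    secure_boot_capable: bool
    directx12_gpu: bool
    display_inches: float

def meets_windows11_minimum(d: Device) -> bool:
    """Apply the minimum requirements listed above."""
    return (
        d.cpu_64bit
        and d.ram_gb >= 4
        and d.storage_gb >= 64
        and d.tpm_version >= 2.0
        and d.secure_boot_capable
        and d.directx12_gpu
        and d.display_inches > 9
    )

# An otherwise capable office PC without TPM 2.0 fails the check:
old_pc = Device(True, 8, 256, 1.2, True, True, 24)
print(meets_windows11_minimum(old_pc))  # False
```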
However, if a PC fails these requirements, the main options are to buy a new Windows 11 machine, enrol temporarily in ESU, or install an alternative operating system such as a lightweight Linux distribution to extend the device’s life.
The Case for Moving On
Microsoft has long been promoting Windows 11 as a “more modern, secure, and highly efficient” computing platform. This, Microsoft says, is because it enforces stronger defaults such as Virtualisation-Based Security, Credential Guard, and Secure Boot, all designed to reduce ransomware and firmware-level attacks. The newer OS also integrates with Microsoft Copilot and other AI-assisted features that rely on modern chipsets.
Upgrading Early (Which Isn’t That Early Now)
For businesses, upgrading early (i.e. before the deadline) may reduce compliance and insurance risks, since running unsupported software can breach some cyber security frameworks and policies. It may also help IT teams adopt newer management tools, identity controls, and endpoint protection frameworks built for Windows 11.
Potential Problems When Upgrading
That said, upgrades are not always seamless and painless. For example, older peripherals and specialist software may lack updated drivers or compatibility support. Also, systems running older BIOS configurations may need to switch to UEFI before enabling Secure Boot. Without a full backup, there is always a risk of data loss or disruption during installation.
Microsoft advises users to back up data first using Windows Backup or OneDrive, and many IT departments are rolling out pilot migrations to test device readiness before full deployment.
Criticism and Environmental Concerns
Consumer and environmental groups have criticised Microsoft for enforcing such strict hardware requirements to accommodate Windows 11. For example, campaigners argue this could prematurely render millions of otherwise usable PCs obsolete, contributing to global e-waste and unnecessary cost for consumers and public bodies.
Advocacy groups in Europe and the United States have also urged Microsoft to extend Windows 10’s life for businesses, warning that so many unsupported devices could become a major security liability. Some suggest that paid ESUs are a temporary “band-aid” that addresses symptoms but not the root cause of an accelerated replacement cycle.
Industry observers agree that the scale of this transition is unusually large. Windows 10 became one of the most widely adopted versions in Microsoft’s history and is still used by corporations, schools, and government agencies alike. Replacing or upgrading hundreds of millions of PCs is therefore an expensive and time-consuming global task.
What Should Users Do Now?
Microsoft’s advice to users is straightforward: check if your PC can run Windows 11. If it can, upgrade now while the process remains free. If it cannot, enrol in the Extended Security Updates programme to stay protected while planning your next move.
For households, that may mean replacing an ageing device. For businesses, it may require budgeting for large-scale hardware refresh programmes or short-term ESU coverage. Either way, leaving systems unsupported is now the biggest risk of all.
What Does This Mean For Your Business?
The reality now facing users, organisations, and regulators is that the Windows 10 era is ending far faster than many are ready for. Hundreds of millions of devices still depend on an operating system that will soon no longer receive free security support, and while Microsoft’s paid options may buy some time, they do not remove the core problem of ageing hardware and an uneven global upgrade path. For UK businesses, this situation brings practical as well as strategic implications. For example, firms that continue using unsupported machines risk breaching cyber security frameworks, invalidating insurance policies, or exposing customer data to avoidable threats. However, for many, replacing entire fleets of computers in a single financial year is neither easy nor affordable.
This balancing act is also testing government departments and public services that rely on long-life IT infrastructure. Large numbers of public sector computers, from hospitals to local authorities, are still on Windows 10. Extending their life through paid security updates may help maintain continuity, but costs will quickly rise. In a climate of tight budgets, these decisions affect everything from digital transformation plans to cyber resilience strategies.
For Microsoft, the move signals a push toward a more modern, secure, and AI-ready ecosystem, aligning with its wider Copilot vision. However, the backlash from environmental and consumer groups highlights a growing tension between technological progress and sustainability. Millions of still-functional computers risk becoming e-waste before their time, raising difficult questions about repairability and responsible upgrade paths.
The next year will therefore be decisive. Businesses and individual users that act early will avoid disruption and keep their systems compliant and secure. Those that wait may find the costs of inaction climbing fast, whether through higher ESU fees or exposure to attack. The broader message here is that the end of Windows 10 is more than a software milestone. It is a reminder that long-term planning, sustainable procurement, and realistic upgrade cycles are now essential parts of digital risk management for every organisation.
Google Backs ‘Supermemory’
A 20-year-old founder from Mumbai has attracted backing from senior Google figures for a new AI startup designed to help large language models remember what users tell them.
Supermemory
Supermemory, founded by developer Dhravya Shah, is building what he calls a “universal memory layer” for artificial intelligence: a tool that allows AI apps to retain and recall information across different sessions.
Google Investor
The company has now raised around $3 million in seed funding, supported by investors including Google’s Chief Scientist Jeff Dean, Cloudflare’s Chief Technology Officer Dane Knecht, and executives from OpenAI and Meta.
Tackling One Of AI’s Hardest Problems
For all their sophistication, it seems that current AI systems still have remarkably short memories. For example, each time a user starts a new conversation, most models forget the details of previous ones. Even with growing “context windows” (i.e. the measure of how much data a model can process at once), the ability to sustain meaningful long-term context remains limited.
Supermemory, therefore, is trying to fix this problem. However, rather than rebuilding models, it acts as an intelligent memory system that connects to existing AI tools. For example, the platform analyses a user’s files, chats, emails, notes and other unstructured data, identifies key relationships and facts, and then turns that information into a kind of knowledge graph. When an AI system queries the memory layer, it can instantly access relevant past context, making the interaction more accurate and personal.
Shah describes the concept as giving AI “self-learning context about your users that is interoperable with any model.” He says this is where the next wave of AI innovation will focus: not on larger models, but on personalised, context-rich systems that actually remember.
From Mumbai To Silicon Valley
Originally from Mumbai, Shah began programming as a teenager, building small web apps and chatbots. One early creation, a bot that turned tweets into neatly formatted screenshots, was acquired by the social media tool Hypefury. The sale gave him early experience of product building and enough financial headroom to pursue further projects.
He was preparing for India’s elite engineering entrance exams when he decided instead to move to the United States and study computer science at Arizona State University. There, he challenged himself to create a new app every week for 40 weeks. During one of those weeks, he built an experimental tool that let users chat with their Twitter bookmarks. The concept later evolved into Supermemory.
Internship at Cloudflare
In 2024, Shah secured an internship at Cloudflare, working on AI and infrastructure projects, before joining the company full-time in a developer relations role. Mentors there encouraged him to turn Supermemory into a serious product, leading him to leave university and focus on it full-time.
“I realised the infrastructure for memory in AI simply didn’t exist,” he explained in a company blog post. “We built our own vector database, content parser and extractor, all designed to make memory scalable, flexible and fast, like the human brain.”
How It Works
In terms of how the Supermemory platform actually works, it can ingest a wide range of content types, including documents, messages, PDFs, and data from connected services such as Google Drive, OneDrive, and Notion. Users can add “memories” manually, via a chatbot or a Chrome extension, or allow apps to sync data automatically.
Once uploaded, the system extracts insights from the content and indexes them in a structure that AI models can query efficiently. It can then retrieve context across long timespans (from emails written months earlier to notes saved in other tools) allowing different AI agents to maintain a coherent understanding of users and projects.
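To make the ingest-then-recall idea concrete, here is a heavily simplified sketch of what a memory layer does. This is illustrative Python only, not Supermemory’s actual implementation: a production system would use embeddings and a purpose-built vector database rather than naive keyword matching.

```python
class MemoryLayer:
    """Toy memory layer: stores text snippets and retrieves the most
    relevant ones for a query by simple word overlap. A real system
    like Supermemory extracts entities and relationships and indexes
    them in a vector database for fast semantic retrieval."""

    def __init__(self):
        self.memories: list[str] = []

    def ingest(self, text: str) -> None:
        """Add a snippet (email, note, chat message) to the store."""
        self.memories.append(text)

    def recall(self, query: str, top_k: int = 1) -> list[str]:
        """Return the snippets sharing the most words with the query."""
        q_words = set(query.lower().split())
        scored = sorted(
            self.memories,
            key=lambda m: len(q_words & set(m.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

memory = MemoryLayer()
memory.ingest("Project Falcon kickoff is on 3 November")
memory.ingest("Lunch order: one vegetarian pizza")
print(memory.recall("when is the Project Falcon kickoff?"))
# ['Project Falcon kickoff is on 3 November']
```

An AI assistant sitting on top of a layer like this could prepend the recalled snippets to its prompt, giving the model relevant past context without retraining, which is the essence of the approach described above.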
Shah claims the company’s purpose-built infrastructure gives it a technical edge. The system has been benchmarked for low latency, meaning responses arrive quickly even at scale. This speed, he argues, will be key to making memory-driven AI practical in everyday applications.
As Shah says, “Our core strength is extracting insights from any kind of unstructured data and giving apps more context about users,” adding that “as we work across multimodal data, our solution can support everything from email clients to video editors.”
The Investors
Supermemory’s $3 million seed round was led by Susa Ventures, Browder Capital, and SF1.vc. It also drew high-profile individual investors including (notably) Google AI’s Jeff Dean, DeepMind product manager Logan Kilpatrick, Cloudflare CTO Dane Knecht, and Sentry founder David Cramer.
Joshua Browder, the founder of legal automation firm DoNotPay, invested through his personal fund, Browder Capital. “What struck me was how quickly Dhravya moves and builds things,” Browder said publicly. “That prompted me to invest in him.”
Early Customers
The startup already lists several enterprise and developer customers. For example, these include AI productivity tool Cluely, AI video editor Montra, search platform Scira, Composio’s multi-agent tool Rube, and the real estate data firm Rets. One robotics company is reportedly using Supermemory to help machines retain visual memories captured by onboard cameras, which is an example of how the technology could extend beyond software.
While the app has some consumer-facing tools for note-taking and bookmarking, the broader ambition is to make Supermemory the default memory engine for AI agents, providing a universal layer that different applications can plug into.
Not The Only One
Several other startups are also exploring long-term AI memory. For example, companies such as Letta, Mem0 and Memories.ai are developing their own frameworks for building memory layers into AI systems. Some target specific use cases such as customer support or industrial monitoring, while others focus on consumer productivity.
What Makes Supermemory So Different?
Shah argues Supermemory’s technical foundations are its main differentiators. For example, by building its own underlying infrastructure, rather than relying on third-party databases, the company claims to offer faster and more reliable performance than rivals. Early customers reportedly send billions of tokens of data through the platform each week.
Analysts have noted that as AI assistants become embedded across daily workflows, effective memory systems will be essential to making them useful. Without them, users must constantly repeat information or re-train models for every new task. The growing number of investors and engineers now entering the “AI memory” space reflects that urgency.
From Side Project To Infrastructure Company
It seems, therefore, that what began as a teenager’s personal productivity experiment has quickly become a serious infrastructure business. The original open-source version of Supermemory attracted over 50,000 users and 10,000 stars on GitHub, making it one of the fastest-growing projects of its kind in 2024. That early traction revealed the technical limits of existing tools and gave Shah the confidence to rebuild it from the ground up.
The company now describes its product as “interoperable with any model” and capable of scaling across billions of data points. It is hiring engineers, researchers and product designers to continue improving its platform.
Shah, who recently turned 20, says he sees memory as the next defining challenge in AI. “We have incredibly intelligent models,” he wrote on his blog, “but without memory, they can’t truly understand or personalise for the people they serve.”
What Does This Mean For Your Business?
The growing interest in memory infrastructure highlights how the next advances in AI will not come solely from bigger models, but from systems that can learn and recall over time. Supermemory’s approach to context retention gives developers and enterprises a practical route towards that goal. For AI to be genuinely useful across sectors such as healthcare, education and business operations, the ability to remember earlier inputs securely and accurately will be critical. This is the gap Shah’s technology is aiming to close, and its progress is already attracting serious attention from investors and other AI developers.
For UK businesses, the implications could be significant. For example, many organisations are now experimenting with generative AI tools for writing, analysis, and customer engagement, yet find themselves limited by the absence of memory between sessions. A reliable layer that provides long-term contextual understanding, therefore, could make those tools far more effective, whether in automating reports, managing client communications or maintaining project continuity. If Supermemory delivers the speed and scalability it claims, it could simplify how businesses integrate AI into daily workflows without constantly retraining or re-prompting systems.
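To make the idea of a “memory layer” concrete, here is a minimal sketch of how one might sit between a user and an AI tool: past notes are stored, the most relevant ones are retrieved, and they are prepended to each new prompt so the model does not start from scratch. All names here are illustrative assumptions, and the string-matching retrieval is a stand-in for the embedding search a real product such as Supermemory would use; this is not its actual API.

```python
# Illustrative sketch of a session "memory layer" (hypothetical, not a real API).
from difflib import SequenceMatcher

class MemoryStore:
    def __init__(self):
        self.notes: list[str] = []

    def remember(self, note: str) -> None:
        self.notes.append(note)

    def recall(self, query: str, top_k: int = 2) -> list[str]:
        # Rank stored notes by rough text similarity to the query.
        # A production system would use vector embeddings, not string matching.
        scored = sorted(
            self.notes,
            key=lambda n: SequenceMatcher(None, n.lower(), query.lower()).ratio(),
            reverse=True,
        )
        return scored[:top_k]

    def build_prompt(self, query: str) -> str:
        # Prepend remembered context so the model need not be re-told it.
        context = "\n".join(self.recall(query))
        return f"Context from earlier sessions:\n{context}\n\nTask: {query}"

memory = MemoryStore()
memory.remember("Client Acme prefers weekly status reports in bullet form.")
memory.remember("The Q3 project deadline moved to 15 November.")
print(memory.build_prompt("Draft the weekly report for Acme"))
```

The point of the sketch is the shape of the workflow, not the retrieval method: once a memory store sits in the loop, every new prompt automatically carries the relevant history with it.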
There are also questions that the technology community will need to address. Any system designed to ingest and store personal or corporate data at scale will face scrutiny over privacy, compliance and data security. How Supermemory and its competitors handle that responsibility will help define the credibility of this emerging market. Investors appear confident that Shah and his team are aware of those challenges, and that their focus on infrastructure gives them a technical edge.
For now, Supermemory’s rapid evolution from side project to venture-backed platform shows how quickly new layers of the AI ecosystem are forming. It is a story about a young founder spotting one of the field’s most persistent gaps and convincing some of the world’s leading technologists that he has a credible solution. Whether the company can translate that promise into long-term commercial success remains to be seen, but its emergence signals a clear direction of travel for the next stage of AI development, i.e. towards systems that don’t just process information, but remember it.
Lab-Grown Human Brains Power ‘Wetware’
Scientists are building experimental computers from tiny lab-grown clusters of human neurons with the aim of creating ultra-efficient “wetware” that can learn, adapt and run AI-type tasks using a fraction of today’s energy.
What Are These “Mini Brains”?
In this case, “mini brains” are brain organoids, which are small three-dimensional clusters of living human neurons and support cells grown from stem cells. They are not conscious or comparable to a human brain, but they share the same biological building blocks and can produce electrical activity that researchers can stimulate and record. Researchers at Johns Hopkins University (in Baltimore, Maryland, United States) refer to this emerging field as “organoid intelligence”, a term that captures both the scientific ambition and the ethical caution surrounding biocomputing.
Who Is Making Them, When and Where?
A Swiss team at FinalSpark has already built a remote “Neuroplatform” that lets universities run experiments on organoids over the internet. Their lab in Vevey, on the shores of Lake Geneva, grows these tiny clusters of neurons, places them on micro-electrode arrays, and exposes them to controlled electrical patterns so that researchers can study how they learn and respond to stimuli.
The company’s organoids can currently survive for several months, allowing long-term experiments on neural activity, memory and energy efficiency. The stated goal is to create “living servers” capable of performing certain computing tasks while using only a fraction of the power consumed by traditional silicon hardware.
FinalSpark’s published data describes organoid lifetimes exceeding 100 days, using an air–liquid interface and eight electrodes per spheroid. This design allows remote electrophysiology at scale, giving researchers in other countries access to living neuron cultures without needing their own biocomputing laboratories.
Others Doing The Same Thing
FinalSpark is not the only organisation experimenting with organoids. For example, Australian company Cortical Labs and UK-based bit.bio have collaborated on “CL1”, a biological computer built from layers of human neurons grown on silicon. Cortical Labs’ earlier “DishBrain” system showed neurons learning to play Pong, with findings published in Neuron in 2022. The company has since expanded its research to Cambridge, where it is developing biocomputing hardware that other organisations and universities can use to explore how living cells process information.
Also, Chinese research groups, including teams at Tianjin University and the Southern University of Science and Technology, have developed “MetaBOC”, an open-source biocomputer where organoid-on-chip systems learned to control a small robot. The demonstration showed a feedback loop between neural activity and physical motion, indicating how living tissue can process input and output in real time.
How The Technology Works
These so-called “wetware” experiments combine cell biology with digital engineering. For example, scientists create stem cells from human skin cells, coax them into neurons and glial cells, then culture them as spherical organoids about one or two millimetres wide. These are placed on micro-electrode arrays so that electrical patterns can be delivered and responses recorded – a bit like a miniature EEG in reverse.
FinalSpark’s system uses an air-liquid interface to keep organoids alive while allowing the electrodes to connect directly. Each unit can be accessed remotely by researchers, who send stimulation patterns and record how the organoids respond. However, the biological constraints are significant. For example, as Professor Simon Schultz, Director of Neurotechnology at Imperial College London, points out, organoids lack blood vessels, which limits their size and longevity. The challenge, therefore, is to keep the cells nourished and functioning consistently over time.
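The stimulate-and-record loop described above can be illustrated with a toy simulation. To be clear, the “organoid” below is a made-up stochastic model, not FinalSpark’s real hardware or API; it exists only to show the shape of the protocol: send an electrical pattern to the electrodes, record the spike response, and repeat so that changes over time can be measured.

```python
# Toy simulation of a stimulate-and-record loop (illustrative only;
# the ToyOrganoid class is an assumption, not real biocomputing hardware).
import random

class ToyOrganoid:
    """Stand-in for a spheroid on a micro-electrode array (8 electrodes)."""
    def __init__(self, electrodes: int = 8, seed: int = 0):
        self.rng = random.Random(seed)
        # Per-electrode responsiveness, which drifts as the tissue "adapts".
        self.sensitivity = [0.5] * electrodes

    def stimulate(self, pattern: list[int]) -> list[int]:
        """Apply a 0/1 stimulation pattern; return spike counts per electrode."""
        spikes = []
        for i, on in enumerate(pattern):
            rate = self.sensitivity[i] if on else 0.1  # baseline activity when off
            spikes.append(sum(self.rng.random() < rate for _ in range(20)))
            if on:  # crude stand-in for plasticity: stimulated sites respond more
                self.sensitivity[i] = min(0.9, self.sensitivity[i] + 0.02)
        return spikes

organoid = ToyOrganoid()
pattern = [1, 0, 1, 0, 1, 0, 1, 0]  # stimulate alternate electrodes
for trial in range(5):
    response = organoid.stimulate(pattern)
    print(f"trial {trial}: spikes per electrode = {response}")
```

Real experiments are of course far richer than this, but the loop is the same: controlled input, recorded output, and a log of how the response shifts from trial to trial.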
Why Do Scientists Want Wetware?
The human brain remains nature’s most efficient computer. It consumes only around 20 watts of power yet performs continuous learning, pattern recognition and reasoning far beyond what silicon hardware can do efficiently. Traditional computing architectures are fast and precise, but they burn huge amounts of energy when trying to emulate the brain’s parallel, adaptive processing.
That contrast has driven research into the wetware idea. For example, by using real neurons rather than digital simulations, scientists hope to create systems that can perform complex, adaptive tasks using a fraction of the energy. Johns Hopkins researchers have suggested that organoid-based computing could eventually produce “faster, more efficient and more powerful” systems that complement rather than replace silicon.
Cortical Labs’ Pong-playing experiment offers an early example of what living neurons can do. About 800,000 cells learned to improve their gameplay when given feedback, demonstrating a basic form of learning through trial and error. While this is a long way from human-level intelligence, it proves that even small neural cultures can process feedback and adjust behaviour.
What They Can Do Today
At present, wetware systems can only perform simple tasks under laboratory conditions. FinalSpark’s organoids are repeatedly stimulated with electrical signals, and researchers measure how their responses change over time, i.e. an early form of digital “training”. Cortical Labs has shown that neuron cultures can learn predictable patterns through feedback, while Chinese researchers have achieved basic robotic control via organoid-on-chip platforms.
These are all essentially small-scale experiments, but they mark progress from merely observing brain activity to actively using biological learning for computation. The next step is to scale and stabilise these systems so they can perform consistent, useful work.
More In The Race
Beyond FinalSpark and Cortical Labs, several major academic centres are also involved in these wetware experiments. For example, Johns Hopkins University coordinates an international research community focused on “organoid intelligence”, and in 2023 published the Baltimore Declaration, which is an ethical framework guiding responsible development of biocomputers and urging early discussions about potential consciousness and welfare.
The CL1 project in Cambridge, for example, aims to make wetware commercially accessible, while Chinese laboratories continue refining biocomputer-on-chip hardware. These efforts show that the field is moving away from isolated prototypes towards shared platforms that other scientists can use.
Benefits Over “Normal” Computers
Brains excel at handling uncertain information and learning from minimal examples, something current AI systems struggle to replicate efficiently. Silicon-based chips are powerful but energy-hungry, while neurons operate using chemical and electrical signalling at extremely low energy costs.
Wetware computing could, therefore, one day make certain types of AI and modelling tasks far cheaper and more sustainable to run. The technology could also improve medical research by allowing scientists to study disease or drug effects on human cells without animal testing. Johns Hopkins researchers have said that organoid computing could “advance disease modelling and reduce animal use” alongside powering future AI systems.
Competitors, The Industry and Users
For developers like FinalSpark, the short-term business model is basically “research as a service”. This means that universities can log into FinalSpark’s Neuroplatform to access organoids remotely, run experiments and collect data without needing their own biocomputing facilities. The company says its neurons are already shared across nine universities and accessed 24 hours a day.
For competitors, the emergence of wetware is another pressure point in the race for energy-efficient computing. Chipmakers such as Intel and Nvidia are already developing neuromorphic processors that mimic brain structures, while wetware takes that concept further by using real neurons. Although biological computers are not ready to replace silicon, their development highlights how efficiency, adaptability and sustainability are becoming strategic priorities in computing.
For businesses, the most immediate relevance is energy and research access. For example, if wetware systems can eventually handle niche AI workloads or data modelling at a fraction of the power, that could transform data centre economics. Remote access models like FinalSpark’s also point to new ways of conducting research collaborations, where biological experiments are run digitally across borders.
Investors, regulators and policymakers are also likely to be watching the whole wetware idea closely. It’s worth noting that the Baltimore Declaration provides early guidance on consent, provenance, transparency and the monitoring of any potential signs of sentience, giving regulators a starting framework as the technology moves closer to commercial use.
Challenges and Criticisms
Given the unique nature and newness of this type of development, there are, of course, plenty of challenges ahead. Scaling remains the greatest technical hurdle at the moment. For example, without blood vessels or advanced support systems, organoids struggle to survive long enough or grow large enough to carry out any complex computations. Their behaviour can also vary as the living tissue changes over time, making reproducibility difficult. Researchers are experimenting with microfluidics and electrode “caps” that can wrap around 3D organoids to improve stability and signal capture.
The ethical debate is an obvious (and equally active) one in this case. The Baltimore Declaration warns researchers to be alert to any sign of emerging consciousness and to treat wetware experiments with the same care given to animal studies. Scientists stress that today’s organoids are non-sentient, but agree that as complexity increases, ethical oversight must keep pace.
Also, given how exciting and futuristic the idea sounds, expectations need managing. For example, although Pong-playing neurons and robotic demonstrations are valuable proofs of concept, they are not evidence of general intelligence. Turning these small experiments into reliable, standardised systems that can be trained, paused and restarted like software will take years. Even supporters of the field acknowledge that it remains in its infancy, with commercial value likely to emerge only once lifespans, interfaces and quality controls improve significantly. “Organoids do not have blood vessels… this is the biggest ongoing challenge,” said Professor Simon Schultz of Imperial College London, highlighting the biological limits that must be overcome before wetware computing can scale.
Cortical Labs’ researchers have said that their neurons could learn to play Pong in minutes, showing adaptive behaviour but also underlining how early the technology remains. Johns Hopkins scientists maintain that wetware “should complement, not replace, silicon AI”, a sentiment echoed across most of the research community.
FinalSpark, Cortical Labs, Johns Hopkins University and the Chinese teams behind MetaBOC are currently the main players to watch. Each is pursuing different goals, from remote-access research platforms to robotic control systems, but together they are defining what may become a new category of living computation, albeit one that many people will find a bit creepy.
What Does This Mean For Your Business?
Biocomputing is now moving from concept to reality, and the idea of machines powered by living cells is no longer confined to science fiction. In laboratories, clusters of human neurons are already showing the ability to learn, respond and adapt, marking a genuine new direction in how computing power might be created and used. The researchers behind this work remain cautious, but their early results suggest that living tissue could soon sit alongside silicon as part of the world’s computing infrastructure.
The potential benefits are clear. For example, energy efficiency has become a pressing issue for every industry that depends on artificial intelligence, from cloud computing to data analytics. If biocomputers can perform learning and problem-solving tasks using a fraction of the power consumed by conventional hardware, the impact on cost, sustainability and data centre design could be significant. For UK businesses, this could eventually mean access to more energy-efficient AI systems and new opportunities in research, innovation and green technology investment.
Beyond business efficiency, there are also clear research and healthcare implications. Pharmaceutical and biotech companies could use these systems to model how drugs affect human cells with far greater accuracy, reducing reliance on animal testing. Universities could gain new tools for neuroscience, while technology firms might develop adaptive systems that learn directly from biological responses rather than pre-programmed rules. For investors and policymakers, this blend of biology and computing presents both an opportunity to lead and a responsibility to ensure strict ethical oversight.
However, the barriers are as significant as the promise. For example, keeping organoids alive, stable and reproducible remains difficult, and each culture behaves differently over time. Also, ethical questions are becoming increasingly important too, with scientists and regulators needing to ensure that no experiment risks creating self-awareness or distress in living tissue. Governments will also need to consider how existing AI and data laws apply to systems that are, in part, alive.
For now, biocomputing remains a niche research field, but it is advancing quickly and forcing people to rethink what the word “computer” could mean. Whether it becomes a practical alternative to silicon or stays a scientific tool will depend on how successfully the technical and ethical challenges are managed. What is certain is that the next stage of computing will not just be faster or smaller; it may also be alive.