Google Backs ‘Supermemory’
A 20-year-old founder from Mumbai has attracted backing from senior Google figures for a new AI startup designed to help large language models remember what users tell them.
Supermemory
Supermemory, founded by developer Dhravya Shah, is building what he calls a “universal memory layer” for artificial intelligence, i.e. a tool that allows AI apps to retain and recall information across different sessions.
Google Investor
The company has now raised around $3 million in seed funding, supported by investors including Google’s Chief Scientist Jeff Dean, Cloudflare’s Chief Technology Officer Dane Knecht, and executives from OpenAI and Meta.
Tackling One Of AI’s Hardest Problems
For all their sophistication, current AI systems still have remarkably short memories. For example, each time a user starts a new conversation, most models forget the details of previous ones. Even with growing “context windows” (i.e. the measure of how much data a model can process at once), the ability to sustain meaningful long-term context remains limited.
Supermemory is trying to fix this problem. Rather than rebuilding models, however, it acts as an intelligent memory system that connects to existing AI tools. For example, the platform analyses a user’s files, chats, emails, notes and other unstructured data, identifies key relationships and facts, and then turns that information into a kind of knowledge graph. When an AI system queries the memory layer, it can instantly access relevant past context, making the interaction more accurate and personal.
Shah describes the concept as giving AI “self-learning context about your users that is interoperable with any model.” He says this is where the next wave of AI innovation will focus: not on larger models, but on personalised, context-rich systems that actually remember.
From Mumbai To Silicon Valley
Originally from Mumbai, Shah began programming as a teenager, building small web apps and chatbots. One early creation, a bot that turned tweets into neatly formatted screenshots, was acquired by the social media tool Hypefury. The sale gave him early experience of product building and enough financial headroom to pursue further projects.
He was preparing for India’s elite engineering entrance exams when he decided instead to move to the United States and study computer science at Arizona State University. There, he challenged himself to create a new app every week for 40 weeks. During one of those weeks, he built an experimental tool that let users chat with their Twitter bookmarks. The concept later evolved into Supermemory.
Internship at Cloudflare
In 2024, Shah secured an internship at Cloudflare, working on AI and infrastructure projects, before joining the company full-time in a developer relations role. Mentors there encouraged him to turn Supermemory into a serious product, leading him to leave university and focus on it full-time.
“I realised the infrastructure for memory in AI simply didn’t exist,” he explained in a company blog post. “We built our own vector database, content parser and extractor, all designed to make memory scalable, flexible and fast, like the human brain.”
How It Works
In terms of how the Supermemory platform actually works, it can ingest a wide range of content types, including documents, messages, PDFs, and data from connected services such as Google Drive, OneDrive, and Notion. Users can add “memories” manually, via a chatbot or a Chrome extension, or allow apps to sync data automatically.
Once uploaded, the system extracts insights from the content and indexes them in a structure that AI models can query efficiently. It can then retrieve context across long timespans (from emails written months earlier to notes saved in other tools) allowing different AI agents to maintain a coherent understanding of users and projects.
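To make the ingest–index–query flow above concrete, here is a deliberately simplified sketch. Supermemory’s real pipeline (knowledge graphs, a purpose-built vector database) is far more sophisticated; this toy version substitutes keyword overlap for embeddings purely to illustrate the shape of a memory layer, and all names in it are hypothetical.

```python
# Toy memory layer: ingest snippets, index them, retrieve relevant context.
# Keyword overlap stands in for the embedding/graph retrieval a real system uses.
import re
from collections import defaultdict

class MemoryLayer:
    def __init__(self):
        self.memories = []              # stored text snippets
        self.index = defaultdict(set)   # token -> ids of memories containing it

    def _tokens(self, text):
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    def ingest(self, text):
        """Store a snippet (an email, note, chat message) and index its tokens."""
        mem_id = len(self.memories)
        self.memories.append(text)
        for tok in self._tokens(text):
            self.index[tok].add(mem_id)
        return mem_id

    def query(self, question, top_k=2):
        """Return the stored snippets sharing the most tokens with the question."""
        scores = defaultdict(int)
        for tok in self._tokens(question):
            for mem_id in self.index[tok]:
                scores[mem_id] += 1
        ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
        return [self.memories[m] for m in ranked]

memory = MemoryLayer()
memory.ingest("Client meeting moved to Friday at 3pm")
memory.ingest("Invoice 1042 was paid in March")
context = memory.query("When is the client meeting?")
```

An AI app would prepend the returned `context` to its prompt, which is how a memory layer lets a stateless model appear to “remember” earlier interactions.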
Shah claims the company’s purpose-built infrastructure gives it a technical edge. The system has been benchmarked for low latency, meaning responses arrive quickly even at scale. This speed, he argues, will be key to making memory-driven AI practical in everyday applications.
As Shah puts it, “Our core strength is extracting insights from any kind of unstructured data and giving apps more context about users,” adding that “as we work across multimodal data, our solution can support everything from email clients to video editors.”
The Investors
Supermemory’s $3 million seed round was led by Susa Ventures, Browder Capital, and SF1.vc. It also drew high-profile individual investors including Google AI’s Jeff Dean, DeepMind product manager Logan Kilpatrick, Cloudflare CTO Dane Knecht, and Sentry founder David Cramer.
Joshua Browder, the founder of legal automation firm DoNotPay, invested through his personal fund, Browder Capital. “What struck me was how quickly Dhravya moves and builds things,” Browder said publicly. “That prompted me to invest in him.”
Early Customers
The startup already lists several enterprise and developer customers. These include AI productivity tool Cluely, AI video editor Montra, search platform Scira, Composio’s multi-agent tool Rube, and the real estate data firm Rets. One robotics company is reportedly using Supermemory to help machines retain visual memories captured by onboard cameras, an example of how the technology could extend beyond software.
While the app has some consumer-facing tools for note-taking and bookmarking, the broader ambition is to make Supermemory the default memory engine for AI agents, providing a universal layer that different applications can plug into.
Not The Only One
Several other startups are also exploring long-term AI memory. For example, companies such as Letta, Mem0 and Memories.ai are developing their own frameworks for building memory layers into AI systems. Some target specific use cases such as customer support or industrial monitoring, while others focus on consumer productivity.
What Makes Supermemory So Different?
Shah argues Supermemory’s technical foundations are its main differentiators. For example, by building its own underlying infrastructure, rather than relying on third-party databases, the company claims to offer faster and more reliable performance than rivals. Early customers reportedly send billions of tokens of data through the platform each week.
Analysts have noted that as AI assistants become embedded across daily workflows, effective memory systems will be essential to making them useful. Without memory, users must constantly repeat information or re-train models for every new task. The growing number of investors and engineers now entering the “AI memory” space reflects that urgency.
From Side Project To Infrastructure Company
It seems, therefore, that what began as a teenager’s personal productivity experiment has quickly become a serious infrastructure business. The original open-source version of Supermemory attracted over 50,000 users and 10,000 stars on GitHub, making it one of the fastest-growing projects of its kind in 2024. That early traction revealed the technical limits of existing tools and gave Shah the confidence to rebuild it from the ground up.
The company now describes its product as “interoperable with any model” and capable of scaling across billions of data points. It is hiring engineers, researchers and product designers to continue improving its platform.
Shah, who recently turned 20, says he sees memory as the next defining challenge in AI. “We have incredibly intelligent models,” he wrote on his blog, “but without memory, they can’t truly understand or personalise for the people they serve.”
What Does This Mean For Your Business?
The growing interest in memory infrastructure highlights how the next advances in AI will not come solely from bigger models, but from systems that can learn and recall over time. Supermemory’s approach to context retention gives developers and enterprises a practical route towards that goal. For AI to be genuinely useful across sectors such as healthcare, education and business operations, the ability to remember earlier inputs securely and accurately will be critical. This is the gap Shah’s technology is aiming to close, and its progress is already attracting serious attention from investors and other AI developers.
For UK businesses, the implications could be significant. For example, many organisations are now experimenting with generative AI tools for writing, analysis, and customer engagement, yet find themselves limited by the absence of memory between sessions. A reliable layer that provides long-term contextual understanding, therefore, could make those tools far more effective, whether in automating reports, managing client communications or maintaining project continuity. If Supermemory delivers the speed and scalability it claims, it could simplify how businesses integrate AI into daily workflows without constantly retraining or re-prompting systems.
There are also questions that the technology community will need to address. Any system designed to ingest and store personal or corporate data at scale will face scrutiny over privacy, compliance and data security. How Supermemory and its competitors handle that responsibility will help define the credibility of this emerging market. Investors appear confident that Shah and his team are aware of those challenges, and that their focus on infrastructure gives them a technical edge.
For now, Supermemory’s rapid evolution from side project to venture-backed platform shows how quickly new layers of the AI ecosystem are forming. It is a story about a young founder spotting one of the field’s most persistent gaps and convincing some of the world’s leading technologists that he has a credible solution. Whether the company can translate that promise into long-term commercial success remains to be seen, but its emergence signals a clear direction of travel for the next stage of AI development, i.e. towards systems that don’t just process information, but remember it.
Lab-Grown Human Brains Power ‘Wetware’
Scientists are building experimental computers from tiny lab-grown clusters of human neurons with the aim of creating ultra-efficient “wetware” that can learn, adapt and run AI-type tasks using a fraction of today’s energy.
What Are These “Mini Brains”?
In this case, “mini brains” are brain organoids, which are small three-dimensional clusters of living human neurons and support cells grown from stem cells. They are not conscious or comparable to a human brain, but they share the same biological building blocks and can produce electrical activity that researchers can stimulate and record. Researchers at Johns Hopkins University (in Baltimore, Maryland, United States) refer to this emerging field as “organoid intelligence”, a term that captures both the scientific ambition and the ethical caution surrounding biocomputing.
Who Is Making Them, When and Where?
A Swiss team at FinalSpark has already built a remote “Neuroplatform” that lets universities run experiments on organoids over the internet. Their lab in Vevey, on the shores of Lake Geneva, grows these tiny clusters of neurons, places them on micro-electrode arrays, and exposes them to controlled electrical patterns so that researchers can study how they learn and respond to stimuli.
The company’s organoids can currently survive for several months, allowing long-term experiments on neural activity, memory and energy efficiency. The stated goal is to create “living servers” capable of performing certain computing tasks while using only a fraction of the power consumed by traditional silicon hardware.
FinalSpark’s published data describes organoid lifetimes exceeding 100 days, using an air–liquid interface and eight electrodes per spheroid. This design allows remote electrophysiology at scale, giving researchers in other countries access to living neuron cultures without needing their own biocomputing laboratories.
Others Doing The Same Thing
FinalSpark is not the only company experimenting with this organoid idea. For example, in Australia and the UK, Cortical Labs and bit.bio have collaborated on “CL1”, a biological computer built from layers of human neurons grown on silicon. Cortical Labs’ earlier “DishBrain” system showed neurons learning to play Pong, with findings published in Neuron in 2022. The company has since expanded its research to Cambridge, where it is developing biocomputing hardware that other organisations and universities can use to explore how living cells process information.
Also, Chinese research groups, including teams at Tianjin University and the Southern University of Science and Technology, have developed “MetaBOC”, an open-source biocomputer where organoid-on-chip systems learned to control a small robot. The demonstration showed a feedback loop between neural activity and physical motion, indicating how living tissue can process input and output in real time.
How The Technology Works
These so-called “wetware” experiments combine cell biology with digital engineering. For example, scientists reprogram human skin cells into stem cells, coax them into neurons and glial cells, then culture them as spherical organoids about one or two millimetres wide. These are placed on micro-electrode arrays so that electrical patterns can be delivered and responses recorded – a bit like a miniature EEG in reverse.
FinalSpark’s system uses an air-liquid interface to keep organoids alive while allowing the electrodes to connect directly. Each unit can be accessed remotely by researchers, who send stimulation patterns and record how the organoids respond. However, the biological constraints are significant. For example, as Professor Simon Schultz, Director of Neurotechnology at Imperial College London, points out, organoids lack blood vessels, which limits their size and longevity. The challenge, therefore, is to keep the cells nourished and functioning consistently over time.
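The stimulate–record–feedback cycle described above can be sketched in software, with the obvious caveat that nothing here is real electrophysiology: the “organoid” below is a purely hypothetical toy model whose response drifts towards a target when rewarded, loosely echoing the trial-and-error training the article describes.

```python
# Hypothetical closed-loop sketch of stimulate -> record -> feedback training.
# The ToyOrganoid is an invented stand-in, not a model of real neural tissue.
import random

class ToyOrganoid:
    def __init__(self, seed=42):
        self.rng = random.Random(seed)
        self.bias = 0.0                  # stands in for plastic (learned) changes

    def respond(self, stimulus):
        # Noisy "firing rate" that feedback gradually shifts over time.
        return stimulus * (1.0 + self.bias) + self.rng.uniform(-0.05, 0.05)

    def feedback(self, reward):
        # Crude plasticity rule: reinforce when rewarded, decay otherwise.
        self.bias += 0.02 if reward else -0.01

def train(organoid, stimulus=1.0, target=1.2, cycles=200):
    """Closed loop: stimulate, record the response, compare, send feedback."""
    for _ in range(cycles):
        response = organoid.respond(stimulus)
        organoid.feedback(reward=(response < target))
    return organoid.respond(stimulus)

final = train(ToyOrganoid())   # response settles near the target over many cycles
```

The point of the sketch is the loop itself: a remote researcher on a platform like FinalSpark’s sends stimulation patterns, records responses, and uses the difference from a target as feedback, cycle after cycle.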
Why Do Scientists Want Wetware?
The human brain remains nature’s most efficient computer. It consumes only around 20 watts of power yet performs continuous learning, pattern recognition and reasoning far beyond what silicon hardware can do efficiently. Traditional computing architectures are fast and precise, but they burn huge amounts of energy when trying to emulate the brain’s parallel, adaptive processing.
That contrast has driven research into the wetware idea. For example, by using real neurons rather than digital simulations, scientists hope to create systems that can perform complex, adaptive tasks using a fraction of the energy. Johns Hopkins researchers have suggested that organoid-based computing could eventually produce “faster, more efficient and more powerful” systems that complement rather than replace silicon.
Cortical Labs’ Pong-playing experiment offers an early example of what living neurons can do. About 800,000 cells learned to improve their gameplay when given feedback, demonstrating a basic form of learning through trial and error. While this is a long way from human-level intelligence, it proves that even small neural cultures can process feedback and adjust behaviour.
What They Can Do Today
At present, wetware systems can only really respond to simple tasks under laboratory conditions. FinalSpark’s organoids are repeatedly stimulated with electrical signals, and researchers measure how their responses change over time, i.e. an early form of digital “training”. Cortical Labs has shown that neuron cultures can actually learn predictable patterns through feedback, while Chinese researchers have achieved basic robotic control via organoid-on-chip platforms.
These are all essentially small-scale experiments, but they mark progress from merely observing brain activity to actively using biological learning for computation. The next step is to scale and stabilise these systems so they can perform consistent, useful work.
More In The Race
Beyond FinalSpark and Cortical Labs, several major academic centres are also involved in these wetware experiments. For example, Johns Hopkins University coordinates an international research community focused on “organoid intelligence”, and in 2023 published the Baltimore Declaration, which is an ethical framework guiding responsible development of biocomputers and urging early discussions about potential consciousness and welfare.
The CL1 project in Cambridge, for example, aims to make wetware commercially accessible, while Chinese laboratories continue refining biocomputer-on-chip hardware. These efforts show that the field is moving away from isolated prototypes towards shared platforms that other scientists can use.
Benefits Over “Normal” Computers
Brains excel at handling uncertain information and learning from minimal examples, something current AI systems struggle to replicate efficiently. Silicon-based chips are powerful but energy-hungry, while neurons operate using chemical and electrical signalling at extremely low energy costs.
Wetware computing could, therefore, one day make certain types of AI and modelling tasks far cheaper and more sustainable to run. For example, the technology could also improve medical research by allowing scientists to study disease or drug effects on human cells without animal testing. Johns Hopkins researchers have said that organoid computing could “advance disease modelling and reduce animal use” alongside powering future AI systems.
Competitors, The Industry and Users
For developers like FinalSpark, the short-term business model is basically “research as a service”. This means that universities can log into FinalSpark’s Neuroplatform to access organoids remotely, run experiments and collect data without needing their own biocomputing facilities. The company says its neurons are already shared across nine universities and accessed 24 hours a day.
For competitors, the emergence of wetware is another pressure point in the race for energy-efficient computing. Chipmakers such as Intel and Nvidia are already developing neuromorphic processors that mimic brain structures, while wetware takes that concept further by using real neurons. Although biological computers are not ready to replace silicon, their development highlights how efficiency, adaptability and sustainability are becoming strategic priorities in computing.
For businesses, the most immediate relevance is energy and research access. For example, if wetware systems can eventually handle niche AI workloads or data modelling at a fraction of the power, that could transform data centre economics. Remote access models like FinalSpark’s also point to new ways of conducting research collaborations, where biological experiments are run digitally across borders.
Investors, regulators and policymakers are also likely to be watching the whole wetware idea closely. It’s worth noting that the Baltimore Declaration provides early guidance on consent, provenance, transparency and the monitoring of any potential signs of sentience, giving regulators a starting framework as the technology moves closer to commercial use.
Challenges and Criticisms
Given the unique nature and newness of this type of development, there are, of course, plenty of challenges ahead. Scaling remains the greatest technical challenge. For example, without blood vessels or advanced support systems, organoids struggle to survive long enough or grow large enough to carry out any complex computations. Their behaviour can also vary as the living tissue changes over time, making reproducibility difficult. Researchers are experimenting with microfluidics and electrode “caps” that can wrap around 3D organoids to improve stability and signal capture.
The ethical debate is an obvious (and equally active) one in this case. The Baltimore Declaration warns researchers to be alert to any sign of emerging consciousness and to treat wetware experiments with the same care given to animal studies. Scientists stress that today’s organoids are non-sentient, but agree that as complexity increases, ethical oversight must keep pace.
Also, given how exciting and futuristic the idea sounds, expectations need managing. For example, although Pong-playing neurons and robotic demonstrations are valuable proofs of concept, they are not evidence of general intelligence. Turning these small experiments into reliable, standardised systems that can be trained, paused and restarted like software will take years. Even supporters of the field acknowledge that it remains in its infancy, with commercial value likely to emerge only once lifespans, interfaces and quality controls improve significantly. “Organoids do not have blood vessels… this is the biggest ongoing challenge,” said Professor Simon Schultz of Imperial College London, highlighting the biological limits that must be overcome before wetware computing can scale.
Cortical Labs’ researchers have said that their neurons could learn to play Pong in minutes, showing adaptive behaviour but also underlining how early the technology remains. Johns Hopkins scientists maintain that wetware “should complement, not replace, silicon AI”, a sentiment echoed across most of the research community.
FinalSpark, Cortical Labs, Johns Hopkins University and the Chinese teams behind MetaBOC are currently the main players to watch. Each is pursuing different goals, from remote-access research platforms to robotic control systems, but together they are defining what may become a new category of living computation, albeit one many people may still find a little creepy.
What Does This Mean For Your Business?
Biocomputing is now moving from concept to reality, and the idea of machines powered by living cells is no longer confined to science fiction. In laboratories, clusters of human neurons are already showing the ability to learn, respond and adapt, marking a genuine new direction in how computing power might be created and used. The researchers behind this work remain cautious, but their early results suggest that living tissue could soon sit alongside silicon as part of the world’s computing infrastructure.
The potential benefits are clear. For example, energy efficiency has become a pressing issue for every industry that depends on artificial intelligence, from cloud computing to data analytics. If biocomputers can perform learning and problem-solving tasks using a fraction of the power consumed by conventional hardware, the impact on cost, sustainability and data centre design could be significant. For UK businesses, this could eventually mean access to more energy-efficient AI systems and new opportunities in research, innovation and green technology investment.
Beyond business efficiency, there are also clear research and healthcare implications. Pharmaceutical and biotech companies could use these systems to model how drugs affect human cells with far greater accuracy, reducing reliance on animal testing. Universities could gain new tools for neuroscience, while technology firms might develop adaptive systems that learn directly from biological responses rather than pre-programmed rules. For investors and policymakers, this blend of biology and computing presents both an opportunity to lead and a responsibility to ensure strict ethical oversight.
However, the barriers are as significant as the promise. For example, keeping organoids alive, stable and reproducible remains difficult, and each culture behaves differently over time. Also, ethical questions are becoming increasingly important too, with scientists and regulators needing to ensure that no experiment risks creating self-awareness or distress in living tissue. Governments will also need to consider how existing AI and data laws apply to systems that are, in part, alive.
For now, biocomputing remains a niche research field, but it is advancing quickly and forcing people to rethink what the word “computer” could mean. Whether it becomes a practical alternative to silicon or stays a scientific tool will depend on how successfully the technical and ethical challenges are managed. What is certain is that the next stage of computing will not just be faster or smaller, but it may also be alive.
Company Check: Google’s App-Builder Expands To 15 More Countries
Google is widening access to Opal, its no-code AI mini-app builder, to 15 additional countries. However, new research warns that AI-accelerated development is outpacing software security.
What Is Opal?
Opal is a Google Labs experiment that turns a plain-English prompt into a working mini web app. Users describe what they want, then Opal assembles a visual workflow of inputs, AI model calls and outputs. Each step can be opened in an editor to review the prompt, adjust the logic, add new steps, or run the workflow step by step to see what happens and where it fails. Once ready, creators can publish the app to the web and share a link so others can use it with their Google accounts.
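The workflow model described above can be approximated in a few lines of code. This is a loose, illustrative sketch only: Opal’s real steps invoke Google’s models and are edited visually rather than in code, and the step names and functions below are invented stand-ins.

```python
# Hypothetical sketch of Opal's workflow model: a chain of named steps
# (inputs, model calls, outputs) runnable end to end, with each
# intermediate result inspectable.
class Workflow:
    def __init__(self):
        self.steps = []                     # list of (name, callable)

    def add_step(self, name, fn):
        self.steps.append((name, fn))
        return self                         # allow fluent chaining

    def run(self, value, trace=None):
        """Run every step in order; optionally record each intermediate result."""
        for name, fn in self.steps:
            value = fn(value)
            if trace is not None:
                trace.append((name, value))  # step-level visibility for debugging
        return value

wf = (Workflow()
      .add_step("collect_input", str.strip)
      .add_step("summarise", lambda text: text.split(".")[0])  # stand-in for a model call
      .add_step("format_output", lambda s: f"Summary: {s}"))

trace = []
result = wf.run("  Opal builds mini apps. It is experimental.  ", trace)
```

The `trace` list is the interesting part: because every step’s output is captured where it occurs, a failing workflow can be inspected step by step, which is the no-code debugging experience Opal exposes through its visual editor.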
Google introduced Opal in the United States in late July 2025 as part of its push to make AI creation accessible to non-developers. The stated aim was to let people turn ideas into small, useful tools without writing code, while keeping the workflow visible so it can be inspected and improved. When early U.S. adopters built more than just novelty projects, that appears to have nudged Google to move faster on a global rollout.
Where And When?
On 7 October 2025, Google said Opal would begin rolling out to 15 additional countries. These are Canada, India, Japan, South Korea, Vietnam, Indonesia, Brazil, Singapore, Colombia, El Salvador, Costa Rica, Panama, Honduras, Argentina and Pakistan. Google framed the expansion as a response to the sophistication of early user projects. As Megan Li, senior product manager at Google Labs, put it in a Google blog post, “we did not expect the surge of sophisticated, practical and highly creative Opal apps we got instead,” which made it clear the tool needed to reach “more creators globally.”
That said, the rollout remains within Google Labs and the product is still presented as experimental, which may be helping Google to manage expectations at this point. For example, Google’s message is that Opal is designed for rapid prototyping, automation and lightweight utilities, not for performance-critical systems. In other words, it’s a step toward broader access rather than the final word on enterprise-grade app building.
What Has Improved?
Alongside the expansion, Google announced two upgrades based around improving reliability and speed. The first is advanced debugging that stays no-code. With this, users can run a workflow step by step in the visual editor, or iterate on a single step in a console panel, with errors surfaced exactly where they occur. The second is a faster core, which means that the new Opal is snappier than before, and steps can run in parallel so complex workflows execute more quickly. Google’s hoping that these changes address common blockers for no-code builders, who need immediate context when something breaks and shorter wait times when experimenting.
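The speed-up from running independent steps in parallel is easy to illustrate. This is not how Opal is implemented; it is a generic sketch using Python’s thread pool as a stand-in scheduler, with invented step names.

```python
# Illustrative only: three workflow steps with no dependencies between them
# run concurrently, so total time is roughly one step's latency, not three.
from concurrent.futures import ThreadPoolExecutor
import time

def slow_step(name, seconds=0.2):
    time.sleep(seconds)                  # stands in for a slow model call
    return f"{name} done"

independent_steps = ["draft_copy", "pick_image", "write_caption"]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(slow_step, independent_steps))
elapsed = time.perf_counter() - start    # ~0.2s rather than ~0.6s, as steps overlap
```

Dependent steps (where one step consumes another’s output) still have to run in sequence, which is why a scheduler only parallelises the branches of a workflow that don’t feed into each other.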
Why Now?
Opal lowers the barrier to building AI-powered tools and keeps those experiments inside Google’s ecosystem. It also means that Google gets to learn what people are trying to build, where they get stuck, and which patterns succeed. That feedback loop improves the underlying models, the templates and the product itself. It also positions Google in a growing market where rivals are courting non-technical creators with prompt-to-app tools. Canva has expanded its Magic features, Figma has explored AI-assisted interface creation, and Replit has continued to blur lines between coding assistance and app scaffolding. By scaling Opal, Google can meet users where they are and channel more of that experimentation through its models and accounts.
Users And Teams
For individuals, Opal basically shortens the path from idea to working prototype. To give a few simple examples of how it could be used:
– A marketer could create a content repurposer that uses a brief to output social copy with a few review steps.
– A customer support manager could build a simple intake tool that classifies enquiries and drafts suggested replies for human approval.
– An analyst could chain together a data cleaning step, a summariser and a report generator without waiting for a development slot.
For teams inside businesses, the visual workflow aspect of Opal also matters. For example, people can see the logic, the prompts and the hand-offs between steps, which helps with training, peer review and handover when roles change. Publishing a mini-app as a link also makes internal distribution straightforward. That convenience is precisely why governance needs attention. Without some oversight, useful experiments can turn into shadow tools that handle customer data or trigger actions outside approved processes.
Businesses
Although the latest expansion does not include the UK, many UK organisations may be watching for signals about localisation, policy controls and Workspace integration. The draw is clear, i.e. faster prototypes mean faster experiments with customer journeys, marketing operations, knowledge management and lightweight analytics. The risk is also equally clear – if non-developers can publish AI-driven tools that interact with real data, then security, data protection and auditability have to be part of the pattern from day one. UK firms with regulated obligations will need to map Opal to existing controls for data handling, retention, identity and access, and change management.
Google’s Competitors
Following this wider rollout, Google’s competitors may now see three pressures intensify, which are:
1. Speed to value. Lower latency and parallel steps raise expectations for interactive build-iterate loops.
2. Visibility and trust. Step-level error context tackles a common barrier to adoption, which is uncertainty about what the AI system is doing and why it failed.
3. Distribution. Google can seed Opal where many small tools begin, inside consumer accounts and Workspace environments, which increases the chance of viral internal adoption once the feature reaches more markets.
Challenges And Criticisms
No-code tools typically hit a ceiling with bespoke integrations, complex data governance and strict performance requirements. With this in mind, Opal is unlikely to replace traditional engineering for systems of record or heavily regulated workflows. There are also questions about lock-in, portability and transparency. For example, if a mini-app depends on specific Google prompts or components, migrating it to another platform may not be trivial. Observers of AI-assisted creation have also raised concerns about inconsistency and security weaknesses in generated logic. Even with step-wise debugging, a workflow that calls external models can behave unpredictably across inputs, which complicates testing and assurance.
‘Security Debt’ In An AI-Accelerated World
This broader concern is reflected in new research from Black Duck, a long-running application security and open source risk specialist formerly part of Synopsys. For example, in a recent survey of 1,000 security professionals, 81 per cent said application security testing is slowing development and delivery, nearly 60 per cent said their organisations deploy code daily or more, and 46 per cent still rely on manual steps for security. The report warns that these patterns are creating “security debt”, with vulnerabilities left unaddressed as release velocity increases. It also highlights tool sprawl and alert fatigue, with 71 per cent of respondents complaining about noisy and duplicative alerts, and it notes that 61.64 per cent of organisations test fewer than 60 per cent of their applications. Veracode’s separate analysis adds context, estimating that average remediation times have increased from 171 days in 2020 to 252 days in 2025.
Black Duck’s CEO, Jason Schmitt, has previously said that the findings show traditional approaches to application security are no longer keeping pace with the speed of modern software delivery. The company has advised development teams to move towards integrated, automated security processes that sit directly within their everyday workflows, rather than relying on separate or reactive testing later in the cycle.
How This Connects To Opal And Similar Tools
Opal’s improvements in transparency and debugging make failures easier to spot and discuss, however they do not replace secure design, testing and monitoring. If AI lowers the threshold for building and sharing tools, more people will build more tools, which increases the importance of guardrails that operate at the point of creation. For organisations experimenting with Opal, that means deciding where such tools are permitted, classifying data types that may flow through them, setting standards for prompts and outputs, and ensuring that automated checks run when a mini-app is created and whenever dependencies change.
Black Duck’s long-running audits of commercial codebases routinely find vulnerable open source components and out-of-date dependencies. Even small utilities can pull in libraries, connect to services or incorporate code fragments that carry risk. The lesson here is not to block experimentation, but to bring security to where the experimentation happens, to reduce manual steps, and to ensure there is visibility of what has been published, by whom, and with what data access.
What To Watch
For Google, the next test is how Opal scales beyond early adopters, including whether it gains the policy controls and enterprise integrations that larger organisations expect. For rivals, it seems the bar for speed, transparency and distribution has now been nudged a bit higher. For business users, especially in the UK, there is an opportunity to prototype faster while maintaining a clear line of sight on data and permissions (when it rolls out here). For security leaders, the priority is to embed checks in the same workflows that tools like Opal enable, so that velocity and visibility move together rather than in conflict.
What Does This Mean For Your Business?
For Google, the global expansion of Opal signals a growing confidence in the idea that AI-powered app creation can be simplified without losing too much control. It also highlights a clear ambition to (thankfully for many) make natural language the next user interface for software creation, i.e. simplifying and democratising development. Whether that ambition holds depends on how effectively Google can balance accessibility with governance. As more non-developers begin building tools that act on live data, the risk of inconsistency, bias and poor security hygiene rises. That is where the lessons from Black Duck’s research become most relevant. The warning is that automation alone does not guarantee safety, and that speed without embedded checks will always carry hidden costs.
For UK businesses, the wider rollout offers some cautious hope. It shows what could soon be possible for teams wanting to automate small tasks or prototype new ideas without waiting for developer time. However, it’s also a reminder that security, compliance and auditability can’t really be left behind. Firms that already use low-code or AI-assisted systems may now want to consider how no-code tools like Opal might fit within existing frameworks for risk management and data governance. The most successful adopters will likely be those that integrate these tools responsibly, pairing innovation with oversight.
Other stakeholders will also be watching closely – regulators may seek to understand how accountability works when the code itself is generated, while competitors will be under pressure to match Google’s blend of speed and transparency. For users, the immediate value lies in creativity and productivity, but long-term trust will depend on how reliably these mini-apps behave and how safely they handle information. What Opal ultimately represents is a shift towards a more participatory form of software creation, where almost anyone can build, test and share ideas. The challenge now is ensuring that such openness develops alongside the same rigour that businesses and users expect from any other form of technology.
Security Stop-Press: Fake VPN App Drains Bank Accounts Across Europe
A fake Android VPN app has been caught stealing users’ money by giving hackers full control of their phones.
Researchers discovered the malicious app, Modpro IP TV + VPN (also known as Mobdro Pro IP TV + VPN), spreading through unofficial websites. Once installed, it drops a banking trojan called Klopatra, which has infected over 3,000 devices in Spain and Italy.
Klopatra exploits Android’s Accessibility Services to read screens, capture logins, and move money while users sleep. Apparently, it uses “Hidden VNC” to hide its actions and has evolved through more than 40 versions since March 2025, linked to a Turkish-speaking criminal group.
Experts warn that free VPN and IPTV apps can hide malware or weak privacy controls. Users who sideload apps, i.e. install them outside the Play Store, risk bypassing Google’s protections.
Businesses should block sideloaded apps, keep Android devices updated, and train staff to recognise risky downloads. Also, strong permission policies and mobile security tools remain key to stopping such attacks.
Sustainability-In-Tech : Concrete “Battery” Now Stores 10 Times More Energy
MIT scientists say a new carbon-cement “concrete battery” has advanced dramatically, now storing ten times the energy it did just two years ago.
The Breakthrough Explained
The innovation comes from researchers at the Massachusetts Institute of Technology (MIT), who have been working on what they call electron-conducting carbon concrete, or ec³. This new type of concrete blends cement, water, ultra-fine carbon black, and electrolytes to create a conductive network within the structure. That network allows the concrete itself to store and release energy like a supercapacitor, effectively turning ordinary building materials into energy storage units.
Tenfold Increase In Energy Storage
Their new study, published in the journal Proceedings of the National Academy of Sciences, shows a tenfold increase in energy storage compared with earlier versions. The researchers say they achieved this by refining the electrolyte composition and altering how it is introduced during mixing. The result is a more efficient, denser electrical nanostructure that significantly boosts storage capacity.
Who Developed It and Why?
The work has been led by what’s known as MIT’s Electron-Conducting Carbon-Cement (ec³) Hub and the MIT Concrete Sustainability Hub. Key researchers are reported to include Associate Professor Admir Masic and research scientist Damian Stefaniuk, supported by a multidisciplinary team of engineers and materials scientists.
According to Masic, the vision behind ec³ is not simply to create another battery alternative, but to rethink how existing materials can help solve global energy challenges. For example, as Masic said in MIT’s announcement about the breakthrough, “Concrete is already the world’s most-used construction material, so why not take advantage of that scale to create other benefits?”.
Embedding Storage Into Construction Materials
The motivation to create the battery lies in the growing global need for affordable, sustainable energy storage. For example, as renewable energy sources like solar and wind expand, there remains the problem of what to do when generation stops, e.g. at night, or when the wind is calm. Embedding storage directly into construction materials, therefore, could be a way to help solve this, removing the need for separate battery systems that rely on scarce materials such as lithium and cobalt.
How It Works
Technically speaking, the material functions as a structural supercapacitor rather than a chemical battery. Supercapacitors store energy electrostatically, which means they can charge and discharge rapidly and endure far more cycles than traditional batteries. The carbon black particles form a continuous conductive web throughout the hardened cement, and the electrolyte fills the pores, allowing ions to flow and charge to build up across the internal surfaces.
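To illustrate the supercapacitor principle described above, the energy stored electrostatically follows E = ½CV². The capacitance and voltage figures below are generic textbook-style values for a large commercial supercapacitor cell, chosen purely for illustration, not figures from the MIT study:

```python
# Illustrative supercapacitor energy calculation (generic example values,
# NOT figures from the MIT ec3 research): energy stored is E = 1/2 * C * V^2.
capacitance_farads = 3000.0   # assumption: a large commercial supercapacitor cell
voltage_volts = 2.7           # assumption: a typical rated cell voltage

energy_joules = 0.5 * capacitance_farads * voltage_volts ** 2
energy_wh = energy_joules / 3600.0  # convert joules to watt-hours

print(f"{energy_joules:.0f} J = {energy_wh:.2f} Wh")  # 10935 J = 3.04 Wh
```

The quadratic dependence on voltage is why even modest gains in how much charge the internal surfaces can hold, such as the denser nanostructure MIT reports, translate into large gains in stored energy.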
Used Microscopy Techniques To Design It
This nanonetwork (the tiny, interconnected structure that carries electrical charge) was mapped using advanced microscopy techniques at MIT, i.e. powerful imaging tools that reveal materials at the nanoscale. This revealed a fractal-like (branch-like) web pattern surrounding the pores. Understanding this structure helped the team identify how to adjust the electrolyte to improve charge flow. The team then switched from soaking the concrete in the electrolyte after it hardened to mixing it directly in from the start, ensuring uniform distribution and better conductivity.
Tenfold Improvement In Capacity
In the team’s 2023 version, about 45 cubic metres of ec³ concrete were needed to store enough energy to power a typical household for one day. However, the new version needs only around five cubic metres, which is the equivalent volume of a single basement wall.
The improved material can now store more than 2 kilowatt-hours per cubic metre, meaning a cubic block the size of a large household refrigerator can power an actual fridge for a day. This level of storage density, while still lower than lithium-ion batteries, represents a major step towards practical, large-scale use.
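The reported figures above can be sanity-checked with simple arithmetic. The daily household consumption figure below is an assumption (roughly typical for a home), not a number from the study:

```python
# Back-of-envelope check of the reported ec3 storage figures. The household
# consumption value is an assumed illustrative figure, not from the study.
energy_density_kwh_per_m3 = 2.0   # reported: "more than 2 kWh per cubic metre"
household_daily_kwh = 10.0        # assumption: rough daily usage of a typical home

volume_needed_m3 = household_daily_kwh / energy_density_kwh_per_m3
print(volume_needed_m3)  # 5.0 cubic metres, consistent with the article's figure
```

Under these assumptions the arithmetic lands on the same "around five cubic metres" that the researchers quote, which also implies the earlier 2023 version stored roughly 0.2 kWh per cubic metre.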
Built An Arch From It
The MIT team also demonstrated how the technology could function structurally and electrically at the same time by building a small arch made of ec³. The arch supported its own weight while powering an LED light. Interestingly, when weight was added, the LED flickered, suggesting that such structures could also act as self-monitoring sensors, detecting stress or damage in real time.
Potential Uses and Real-World Applications
The most immediate possible uses for this material could include homes and buildings with integrated solar power systems. For example, instead of relying on external battery packs, the building’s own walls or floors could store excess energy for later use.
Beyond buildings, the team envisions roads and car parks capable of charging electric vehicles, pavements that can heat themselves in icy weather, and bridge structures that both bear loads and store renewable energy. In Japan, for example, ec³ slabs have already been used to heat pavements in Sapporo, suggesting possible future roles in cold-climate infrastructure.
As James Weaver, co-author of the research report, explained, “By combining modern nanoscience with an ancient building block of civilisation, we’re opening a door to infrastructure that doesn’t just support our lives, it powers them.”
Long-Term Cost and Energy Savings
For developers and facility managers, this technology could offer long-term cost and energy savings. Buildings made from ec³ materials might one day store solar power onsite without additional space or equipment. Large commercial facilities could reduce their reliance on grid energy, avoiding peak-time tariffs.
Manufacturers and contractors may also find new business opportunities in producing and deploying ec³ at scale. If production methods prove cost-effective, this could redefine energy infrastructure for corporate campuses, logistics centres, and industrial sites. The ability to integrate energy storage invisibly into standard construction materials could lower project complexity and improve sustainability credentials for companies focused on ESG goals.
Battery Makers
For now, ec³ is not positioned to replace high-performance lithium-ion batteries. This is because its energy density is still much lower, making it unsuitable for mobile devices or vehicles. However, its potential lies in stationary storage, particularly where space and material costs are already accounted for.
That said, battery firms could see it as complementary rather than competitive, e.g., part of hybrid systems that combine concrete supercapacitors for daily cycling with conventional batteries for bulk storage. The concept challenges the assumption that batteries must always be separate physical units, hinting at a future where storage is embedded in the fabric of our cities.
Environmental and Sustainability Factors
It shouldn’t be forgotten here that cement production is responsible for around 7 to 8 per cent of global CO₂ emissions. For ec³ to be genuinely sustainable, therefore, its energy benefits must outweigh the embodied carbon from cement and the added materials such as carbon black and electrolytes. The MIT researchers argue that multifunctional materials can deliver a net reduction by serving multiple roles, i.e. structural, electrical, and possibly even carbon-sequestering.
There is also the issue of durability – concrete structures often last decades, so the embedded energy system must remain stable over similar timeframes. The MIT team is currently studying how environmental conditions such as moisture, temperature, and mechanical stress affect performance over time.
Not Alone
MIT is not alone in exploring energy-storing construction materials. For example, researchers at Chalmers University of Technology in Sweden have developed a rechargeable cement-based battery using metal electrodes and carbon fibre layers, though its energy density is far lower than ec³’s latest version. Also, a team at Washington University in St. Louis has demonstrated “energy-storing bricks” that use a conductive polymer coating to create supercapacitor-like properties.
These parallel projects point to a wider movement towards multifunctional building materials that blur the line between structure and infrastructure. However, MIT’s progress in scaling up storage capacity and integrating the technology into load-bearing concrete sets it apart.
Challenges and Criticisms
Despite the excitement, experts point to several hurdles that the MIT team still need to address. The material’s energy density remains modest compared with lithium-ion batteries, meaning large volumes are required to store meaningful amounts of power. The use of organic electrolytes such as acetonitrile also raises safety and flammability concerns, especially in residential settings.
Cost and manufacturing complexity are further issues. Producing carbon-rich, electrolyte-infused concrete at commercial scale will demand new supply chains, mixing standards, and quality controls. The economic viability depends on achieving costs comparable to conventional concrete, something that remains uncertain.
Critics also note that while supercapacitors excel at rapid charging and long life, they generally suffer from self-discharge and limited total storage time. The MIT team will need to demonstrate consistent long-term performance before industry adoption can begin.
For now, the research remains at laboratory and small-prototype scale, but the tenfold leap in capacity is a meaningful milestone. If the next steps confirm durability, cost efficiency, and safety, the humble concrete block could become one of the most unexpected innovations in sustainable energy to date.
What Does This Mean For Your Organisation?
If proven reliable and scalable, this breakthrough could reshape how the built environment contributes to global sustainability targets. Embedding energy storage directly into the concrete of homes, offices, and transport infrastructure would mean that the same materials already used in construction could also support renewable energy systems, lowering costs and improving resilience. The practical implications extend far beyond academia, giving architects, engineers, and developers a new tool to design buildings that generate, store, and use power autonomously.
For UK businesses, the potential lies in efficiency and reputation. Construction firms and materials suppliers could benefit from being early adopters of multifunctional concretes that reduce carbon impact and add operational value. Facilities managers could also gain from a future where energy-storing walls or car parks reduce dependence on grid supply and shield companies from fluctuating electricity prices. As sustainability reporting becomes a central requirement for both investors and regulators, technologies like ec³ could offer measurable advantages in meeting ESG and net zero commitments.
Governments and regulators are likely to be very interested in this energy storage idea. For example, the possibility of embedding large-scale energy storage into existing infrastructure aligns well with national energy transition goals, but it also raises questions about building codes, safety standards, and lifecycle performance. Clear regulation and industrial partnerships would be needed before ec³ can move from prototype to construction site. Battery manufacturers, meanwhile, will need to assess whether to compete or collaborate. For many, hybrid systems combining traditional battery units with concrete-based supercapacitors could prove to be the most viable commercial path.
From a sustainability standpoint, the real test will come when energy gains are balanced against embodied carbon costs. Cement’s emissions footprint remains substantial, and researchers must demonstrate that the functional value of ec³ outweighs that environmental cost. Even so, the concept of a building material that can both support and store power captures a rare intersection of practicality and vision. If MIT’s concrete battery continues to perform as projected, it could help redefine how energy storage, architecture, and sustainability intersect in the decades ahead.
Video Update : A Simple Way To Create Better Prompts
Get back to basics with this six-point formula for creating effective prompts. This video tutorial gives a simple, usable framework to help you get the most out of your AI.
[Note – to watch this video without glitches or interruptions, it may be best to download it first]