Tech Insight : What’s All The Fuss About Telegram?

CEO and founder of messaging app Telegram, Pavel Durov, was recently arrested in France over allegations that Telegram facilitates illegal activities, including money laundering, drug trafficking, and the distribution of child sexual abuse material.

Pavel Durov

Pavel Durov is a 39-year-old Russian-born billionaire tech entrepreneur, known for founding the social networking site VKontakte (VK) and later (2013), the messaging app Telegram. Durov gained a reputation for his commitment to freedom of speech and user privacy. After being ousted from VK due to conflicts with Russian authorities, he focused on Telegram, which is known for its strong encryption and privacy features, making it a preferred platform for those who value security. Durov, who holds citizenship in several countries, including France and the UAE, claims that Telegram has 950 million monthly active users. Also, Telegram has recently seen an increase in downloads, propelling it to no. 2 position on the U.S. App Store’s Social Networking charts (and boosting its global iOS downloads).

Somewhat bizarrely, Durov also made the news in June following claims he made on Telegram about having fathered over 100 biological children following a donation of sperm at a clinic 15 years ago to help a friend have a baby.

Arrest 

On August 24, Mr Durov was arrested by French authorities and held in extended detention until August 28, when he was released on €5.6 million bail. He remains under judicial supervision (reporting to the police regularly and remaining within French territory). The Paris-based prosecutors stated that Pavel Durov is under formal investigation for 12 alleged offences, including:

– Assisting in the management of an online platform that facilitates illegal transactions by organised crime.

– Failing to cooperate with authorities (by not providing necessary information or documents).

– Being complicit in the possession and distribution of child pornography.

– Involvement in drug-related activities, e.g. acquiring, transporting, and selling narcotics.

– Participating in organised fraud.

– Money laundering (related to proceeds from organised crime).

– Using unauthorised cryptographic tools and providing encryption services without proper certification.

– Involvement in activities that could damage automated data processing systems.

In the French legal system, being under formal investigation means that there is sufficient evidence to warrant further inquiry, but it does not imply guilt or necessarily lead to a trial.

Telegram – Like Having The Dark Web In Your Pocket 

One indication of how much Telegram may currently be used by those involved in criminality, and of how much room there may be for better moderation, comes in the form of criticism from podcaster Patrick Gray who, for months, has been describing Telegram as “the dark web in your pocket”. In fact, a recent BBC investigation (by cyber correspondent Joe Tidy) highlighted how Telegram users can be added, without their consent, to many different active illegal groups. These included a ‘Card Swipers’ group apparently selling stolen cloned credit cards (shipping worldwide), the ‘Drugs Gardens official’ group selling marijuana and illegal vapes, and a number of other groups where it seems members can buy fake vouchers, gift cards, passports, driving licences, prescription drugs, malicious software, guns, and more!

The report of the BBC’s investigation also included an allegation by Brian Fishman, co-founder of the Cinder software platform, that Telegram “has been the key hub for Isis for a decade” and that “it’s ignored reasonable law enforcement engagement for years”. 

Why Now? 

The timing of the French government’s actions against Pavel Durov and Telegram can be attributed to a combination of legal, political, and contextual factors, including:

– Recent legal and regulatory developments in France and the EU, placing increased emphasis on the responsibility of digital platforms to prevent illegal activities. Telegram’s strong encryption and large group capabilities, which allow up to 200,000 members per group, have raised concerns about the platform being used for illegal activities such as money laundering, drug trafficking, and the distribution of child sexual abuse material.

– Telegram’s commitment to user privacy and its resistance to moderating content or cooperating with law enforcement have made it a focal point for governments concerned about security, i.e. a political matter. Durov’s advocacy for free speech, his previous comments in interviews indicating that he would refuse certain requests from authorities to remove content from Telegram, and the platform’s reputation as a haven for privacy (with end-to-end encryption making it closed to government scrutiny or control) have all made it a target for authorities. As with the UK government and WhatsApp, the French government would dearly love to gain some kind of back-door access to, and/or be able to exert control over, digital communications platforms like Telegram, especially considering broader geopolitical tensions and concerns about government overreach.

– It’s also been suggested by Russian politician Vyacheslav Volodin that the US may be behind Durov’s arrest due to the fact that Telegram is widely used in Russia and Ukraine and is one of the few large internet platforms that the US has no influence over.

– Telegram’s features mean it is a platform where disinformation, extremist content, and other illegal activities can thrive. The app’s weaker moderation policies (compared to other platforms) appear to have led to its usage by far-right groups and other extremists, which, not surprisingly, has drawn the attention of authorities. Incidents such as the use of Telegram to organise the recent violent disorder in UK cities have heightened scrutiny, pushing governments to take action against platforms that they believe facilitate such behaviour.

Combined, these factors appear to have tipped the balance and prompted the French authorities to act.

What Does Durov Say? 

David-Olivier Kaminski, Durov’s lawyer, has stated that Telegram fully adheres to European digital laws and maintains content moderation standards comparable to other social media platforms. He has also argued that it is “ridiculous” to claim that Durov is connected to any “criminal activities that do not relate to him, either directly or indirectly.” 

Support From Elon Musk 

Durov has received support from fellow tech billionaire Elon Musk, who spoke out following the arrest, thereby highlighting the growing concern among some tech leaders regarding issues of privacy and freedom of speech. Musk, known for his own advocacy of free speech and minimal regulation on digital platforms, appears to view Durov’s legal troubles as part of a broader struggle against governmental overreach into digital communications. Musk’s support could, therefore, be seen as part of a larger narrative involving tech (billionaire) entrepreneurs defending against what they perceive as unjust government actions against certain platforms.

Other Perspectives 

Several prominent figures have defended Durov following his arrest, highlighting concerns about privacy and free speech. In addition to Elon Musk, who argued that “moderation” is often just another term for censorship and called for Durov’s release, Chris Pavlovski, founder of the free-speech-oriented platform Rumble, has also voiced his concern.  Pavlovski has noted that Durov’s detention influenced his decision to leave Europe, which reflects broader fears among tech entrepreneurs about government overreach.

Edward Snowden, the famous whistleblower (now living in Russia), also condemned Durov’s arrest as an assault on basic human rights, accusing French authorities of trying to gain access to private communications under the guise of security. These reactions highlight a broader anxiety among privacy advocates about government efforts to restrict encrypted communication platforms like Telegram, viewing such actions as threats to freedom of speech and digital autonomy.

What Does This Mean For Your Business? 

The arrest of Pavel Durov and the scrutiny of Telegram may signify a turning point in the debate over digital privacy and security. For businesses, this means a closer examination of the platforms they use for communication and the potential risks associated with them. Companies must consider the legal and ethical implications of using platforms that could come under government scrutiny for allegedly enabling illegal activities. Businesses need to ensure they are compliant with local regulations and are prepared to adapt to changing laws that might impact how they use these digital communication tools.

For platforms like Telegram, WhatsApp, and other similar messaging services, this incident highlights the challenges of balancing user privacy with regulatory compliance. These platforms, known for their strong encryption and large user bases, will likely face increased pressure to cooperate with law enforcement agencies or risk similar legal challenges. The scrutiny suggests that governments worldwide are now very keen to regulate digital communications more tightly, potentially forcing platforms to reconsider their privacy policies and moderation practices.

This broader trend reflects a global effort by governments to control speech and access to information under the pretext of fighting illegal activities. For platforms, it presents a double-edged sword. For example, while strong security features are essential for protecting dissidents and activists, these same features are viewed as obstructing law enforcement. Platforms must, therefore, find ways to navigate this complex environment and attain a kind of balance (which is likely to fluctuate) between safeguarding user privacy and meeting regulatory expectations to avoid legal repercussions.

For secure messaging platforms, it may now be a case of evaluating their current moderation practices and security features to ensure they can address both user privacy concerns and regulatory requirements. Platforms like Telegram may now have to engage more with policymakers (or at least appear to) and develop strategies that protect users while complying with laws designed to prevent illegal activities. Striking the right balance will be crucial in maintaining user trust and avoiding legal challenges, which could significantly impact their operations and reputation.

Tech News : Dublin Says No To New Google Data-Centre

South Dublin County Council has refused Google Ireland planning permission for a new data-centre at Grange Castle Business Park in South Dublin.

Why? 

The reason given for the refusal was “the existing insufficient capacity in the electricity network (grid) and the lack of significant on-site renewable energy to power the data centre”. 

What Data Centre? 

Google already has two data-centres in South Dublin’s Grange Castle business park, and had submitted plans to build a third, 72,400 sq-metre data-centre consisting of eight data halls on a 50-acre site. The new data-centre would have created 50 jobs, and documents lodged with the application by Google Ireland highlighted how important it would be in enabling Google to meet the increasing demand for Information and Communications Technology (ICT) services from its customers in Ireland, and in supporting Ireland’s digital economy.

Concerns 

South Dublin County Council refused Google Ireland’s planning application for a third data-centre primarily due to concerns about energy usage and environmental impact.

The council was worried that the new data-centre would place a significant strain on the already limited capacity of the local electricity grid, potentially leading to grid congestion and challenges in managing power supply. This decision aligns with the stance of Ireland’s state-run electric power operator, EirGrid, which had previously (2022) indicated it would not accept applications for new data-centres in Dublin in the near future, due to insufficient grid capacity.

The council also criticised the lack of on-site renewable energy sources in Google’s proposal, which was seen as inconsistent with Ireland’s climate goals and efforts to reduce carbon emissions.

A lack of clarity regarding Google’s engagement with power purchase agreements and its failure to connect the proposed data-centre to the surrounding district heating network was also highlighted as contributing to the planning refusal.

In relation to concerns about the environmental impact, Google Ireland’s proposed design was deemed to not fully comply with the South Dublin County Development Plan (2022-28), particularly in terms of protecting green infrastructure, such as streams and hedgerows, and the overall integration of the facility into the local environment. The council ruled that the proposed usage was not suitable for the designated enterprise and employment-zoned lands, and it highlighted concerns about how the project would impact power supply once operational in 2027.

Also, An Taisce (an environmental advocacy group) further warned that the data-centre would compromise Ireland’s ability to meet its carbon budget limits and place additional pressure on renewable energy resources, leading to further environmental concerns.

What Now? 

As yet, there’s been no official comment from Google about the refused application, and Google now has one month to appeal the Council’s decision.

With part of the refusal of the application being based on the apparent lack of on-site renewable energy, it’s interesting to note that Google has long been involved with renewable energy sources. For example, back in 2021, Google signed a long-term supply agreement with solar energy firm Energix Renewables to supply Google with electricity from its solar power operations, covering 1.5GW (peak) of solar project development until last year.

Data-Centres 

Just how much of an effect data-centres are having on Ireland’s energy supplies was highlighted by a Silicon Republic report back in June which showed that data-centres now consume a massive 21 per cent of Ireland’s electricity! With this figure set to rise to one-third of the country’s total electricity consumption by 2026, it’s perhaps unsurprising that political leaders, environmental bodies and councils are now particularly concerned about the effects of more massive data-centres being built in Ireland.

That said, Amazon was granted permission last September for three new data-centres at a data campus near Mulhuddart, northwest of Dublin. As part of the conditions for granting planning, however, Amazon had to install the infrastructure to develop a district heating scheme for recycling the heat from the data-centres.

What Does This Mean For Your Business? 

The refusal of Google’s planning application for a third data-centre in South Dublin highlights the growing challenges related to the energy consumption of data-centres and their environmental impact. As mentioned above, data-centres already consume a significant portion of Ireland’s electricity (currently 21 per cent), with projections indicating this could rise to a third by 2026. This heavy demand places immense pressure on the national grid, which has (not surprisingly) prompted concerns from local councils, environmental groups, and state energy operators like EirGrid. For businesses, the decision by South Dublin Council illustrates the increasing importance of considering energy efficiency and sustainability in operational plans. Companies will need to explore alternative energy solutions, such as integrating renewable energy sources, to avoid similar setbacks.

For Ireland, this decision reflects a broader commitment to sustainability and managing environmental impacts in line with its national climate goals. By setting a precedent for stricter energy consumption and environmental guidelines, the refusal indicates that future data-centre developments will be closely scrutinised. This could influence other tech giants and data-reliant businesses, prompting them to reassess their environmental strategies and engagement with local communities.

Google may now face a bit of an unexpected challenge in adapting its expansion plans to meet these new expectations. The company must demonstrate a stronger commitment to sustainable energy practices and compliance with local development regulations. That said, Amazon was able to get approval for three data-centres by focusing more on renewable energy and giving something back to the community via a heat-recycling scheme, so despite this initial refusal, it’s not unlikely that Google could still get approval on appeal with the right alterations to its initial plans.

For the local community, this refusal could be seen as a step towards ensuring that large-scale developments don’t compromise the quality of life, local infrastructure, and environmental health. That said, the refusal also means that 50 jobs won’t be created, and Ireland’s digital economy and ICT may not be as well supported as it could have been, thereby missing out on the economic benefits.

For businesses in Ireland, this case serves as a warning and an opportunity. It signals a shift towards a need for sustainable growth and the need to align business operations with both local and national environmental standards.

Tech News : Warning Against Giving Smartphones To Under-11s

Mobile network provider EE has launched a smartphone age guidance initiative in which it advises that children under 11 should only use non-smart devices with limited capabilities.

Children With Smartphones 

EE says its new initiative is in response to concerns about children’s online safety and the impact of device usage on their well-being. Back in February, for example, an Ofcom study revealed that almost a quarter of UK five-to-seven-year-olds have their own smartphone. The study also showed that nearly two in five are using messaging service WhatsApp, despite the minimum age limit being 13, while over half of children under 13 use social media. In fact, the study showed that three-quarters of social media users aged between 8 and 17 have their own account or profile on at least one of the large platforms, and many children in the study also said they simply lie to gain access to new apps and services. Worryingly, almost three-quarters of teenagers between ages 13 and 17 have encountered one or more potential harms online.

What’s The Problem? 

The use of smartphones and social media by young children poses significant risks, including exposure to inappropriate content such as violence and explicit material, as well as cyberbullying. Privacy concerns also arise, as children may inadvertently share personal information or be subject to data collection by apps. Also, the extensive use of screens can lead to mental health issues, such as anxiety, depression, and sleep disruption, while also potentially contributing to screen time addiction and reduced attention spans.

As EE says, parental concerns are growing as children increasingly use devices at a young age, often bypassing age restrictions to access social media. These concerns centre on the content that children are exposed to, the amount of time they spend on devices, and the potential impact on their mental wellbeing and social development. In fact, children themselves report mixed experiences with social media, indicating that while it offers social connection, it can also lead to stress, anxiety, and feelings of inadequacy.

In the UK (back in March), a Parentkind survey showed that 58 per cent of parents would support the idea of introducing a ban on smartphones for under 16s (77 per cent among parents of primary school children). The survey also showed that 83 per cent of parents said that they felt smartphones are potentially harmful to young people.

EE’s Initiative Targeting Under 16s

In response to parental concerns about the above-mentioned issues, EE says its initiative is targeting under-16-year-olds. The initiative classifies device usage into three groups based on age suitability: under-11s, 11-13, and 13-16.

Key Recommendations 

EE’s key recommendations for each group as part of the initiative are that:

– Under-11s should only be allowed to use limited-capability, non-smart devices, such as feature phones. Parents can ensure these allow texts and calls while restricting access to social media or inappropriate content.

– For children aged 11-13, the advice is that if they use a smartphone, it should have parental controls enabled, as well as a family-sharing app in place such as Google Family Link or Apple Family Sharing, while restricting access to social media.

– For 13-16-year-olds, EE suggests that smartphones are appropriate, but parental controls should be used to manage and restrict children’s access to inappropriate sites, content, and platforms. EE says this age group’s smartphones should allow social media access but should be linked to a parent or guardian account.

Backed By Charity Groups 

EE is keen to stress that this smartphone guidance initiative has the backing of recognised charity groups, including Internet Matters, a leading child safety organisation. For example, Internet Matters CEO Carolyn Bunting said: “This initiative is timely and much needed. Parents and guardians want their children to be able to stay connected with them and to experience the benefits of digital technology, but they are also concerned about online safety and wellbeing. Our recent research showed that parents want to make their own decisions about their children’s use of technology, but that many would value guidance to help them in doing so. It is fantastic that EE is supporting parents with age-specific advice to support children’s diverse technology needs.” 

Part Of A Range Of Measures 

EE also says that the initiative is part of a wider set of measures to promote safe and responsible use of technology among young people, which also includes enhanced in-app (parental) controls, child-friendly products (through a partnership with Verve Connect to make the ‘Dash+’), and a family online Safety Hub, to be launched later this year.

Mat Sears, Corporate Affairs Director for EE, commented: “While technology and connectivity have the power to transform lives, we recognise the growing complexity of smartphones can be challenging for parents and care-givers. They need support, which is why we are launching new guidelines on smartphone usage for under-11s, 11-13-year-olds, and 13-16-year-olds to help them make the best choices for their children through these formative years.” 

Other Mobile Operators? 

EE is not the only mobile operator with initiatives to protect its youngest customers. For example, Vodafone has its “Digital Parenting” platform, providing parents with tools and advice on managing children’s online activity, including guidance on setting screen time limits and understanding online risks. Similarly, O2 has partnered with the NSPCC to offer resources and workshops aimed at educating both parents and children about online safety and responsible use of technology.

Government 

It’s worth noting that back in May, the House of Commons Education Committee asked the UK government to consider a total ban on phones for under-16s. Although this wasn’t supported by the Prime Minister, he did say that the government would be looking again at what content young people can access online. It’s also worth noting that the initiatives by UK mobile operators like EE, Vodafone, and O2 to promote online safety for children are closely aligned with the objectives of the UK’s Online Safety Act, which mandates that companies must take proactive steps to protect users, particularly children.

What Does This Mean For Your Business? 

In today’s digital society, the use of smartphones by children can offer several benefits, such as keeping them connected with family and aiding in their learning and development.

However, the increasing concerns about online safety and the impact of digital device usage on children’s well-being cannot be ignored. EE’s initiative to guide smartphone usage for different age groups, therefore, highlights the growing recognition of these issues and the need for tailored approaches to address them. For EE and other mobile operators, this initiative could, of course, mean an enhanced reputation as a responsible company that is seen to care about the well-being of its youngest users.

By offering practical tools and guidance, these companies can not only mitigate potential risks but also build stronger relationships with parents and guardians who are looking for ways to manage their children’s digital lives more effectively. It may also be a way for mobile operators to stay on the right side of legislation and to be seen to be responding positively to government pressure.

For UK parents and guardians, EE’s guidelines may provide a bit more much-needed clarity and support in an area that is becoming increasingly complex. By adhering to age-specific advice and using the recommended parental controls, families may feel better able to navigate the challenges associated with children’s smartphone use and that they are able to do something more to protect their children.

EE’s proactive approach in this initiative could actually help children to enjoy the benefits of digital technology while minimising some of the risks, thereby supporting their mental and emotional development. Also, initiatives like these align with broader legislative efforts, such as the UK’s Online Safety Act, reinforcing a collective move towards a safer digital environment for young users. For children, this could mean a safer, more structured online experience that could promote positive interactions and healthy digital habits. As these practices become more widespread, they may set a higher standard for how technology can be used responsibly to enhance, rather than harm, young lives.

However, it’s not just mobile operators that need to step up. Social media companies also play a crucial role in making the digital world safer for children and these platforms must take greater responsibility by implementing stricter age verification processes to prevent underage users from accessing content that is not suitable for them. Many believe that social media companies could be doing a lot more to enhance their content moderation systems to swiftly identify and remove harmful content, such as cyberbullying, explicit material, and misinformation. They could also provide better tools and resources for parents and guardians to monitor and control their children’s activities on these platforms, promoting a safer online environment. By collaborating with mobile operators, educators, and policymakers, social media companies could, therefore, be part of a more comprehensive approach to safeguarding children in the digital space.

An Apple Byte : Finally – Mac Apps Can Be Installed on External Drives

In the latest macOS Sequoia developer beta, Apple has introduced a new feature that allows users to install Mac App Store apps directly onto external drives, thereby saving storage space and costs.

This change, though appearing minor, has significant implications for storage management and offers users more flexibility in how they use and manage applications.

Usually, Mac App Store apps are installed in the internal Applications folder, consuming limited internal storage. Now, users can install larger apps (over 1GB) on external drives, which aligns with how third-party apps are managed. This change should help users manage and maximise their internal storage, which is often limited (256GB or even 128GB) and costly to upgrade.

This new feature also allows users to try apps temporarily without adding them to the internal Applications folder. Apps can be easily installed and removed from external drives, making it convenient for those who need to use an app for a specific task or test multiple versions without cluttering internal storage.

While this offers flexibility and space-saving benefits, it may impact performance depending on the speed of the external drive. Also, users will need to keep the external drive connected to access these apps, which might be less convenient for those who use their Macs on the move. Despite these drawbacks, the new option looks set to provide a more adaptable way to manage Mac storage and applications.

Security Stop Press : Warning About RansomHub

The FBI, MS-ISAC, and the US Department of Health and Human Services (HHS) have released a joint advisory warning businesses about the ransomware-as-a-service collective ‘RansomHub’.

The joint advisory highlights how RansomHub (formerly known as Cyclops and Knight) has established itself as an efficient and successful service model. Since its inception in February 2024, RansomHub has encrypted and stolen data from at least 210 victims across various critical infrastructure sectors, including water and wastewater systems.

RansomHub affiliates use a double-extortion strategy, encrypting systems and stealing data to coerce victims into compliance. The data exfiltration methods vary by affiliate, and the ransom note usually omits an initial payment demand or instructions; instead, it provides a client ID and directs victims to contact the ransomware group via a specific .onion URL, accessible through the Tor browser, typically giving victims between three and 90 days to pay.

The advice to defenders is to implement the recommendations in the Mitigations section of the advisory, which include installing updates for operating systems, software, and firmware as soon as they are released, using phishing-resistant multi-factor authentication (MFA), i.e. not SMS text-based methods, for as many services as possible, and training users to recognise and report phishing attempts.

Sustainability-in-Tech : New Device Could Reduce AI Energy Consumption By A Factor Of 1,000

Engineering researchers at the US University of Minnesota Twin Cities claim to have demonstrated a state-of-the-art hardware device that could reduce energy consumption for artificial intelligence (AI) computing applications by a factor of at least 1,000!

AI’s Massive Energy Consumption 

The issue the researchers were aiming to tackle is the huge energy consumption of AI, which is only increasing as AI becomes more widespread. For example, the International Energy Agency (IEA) recently issued a global energy use forecast showing that energy consumption for AI is likely to double from 460 terawatt-hours (TWh) in 2022 to 1,000 TWh in 2026. This is roughly equivalent to the electricity consumption of the entire country of Japan!

With growing demand from an increasing number of AI applications, the researchers have been looking at ways to create a more energy-efficient process, while keeping performance high and costs low.

The New Device – Uses The ‘CRAM’ Model 

The new device developed by the University of Minnesota College of Science and Engineering researchers works using a model called computational random-access memory (CRAM). The hardware device which uses the CRAM model is a ‘machine learning inference accelerator’ that is used to speed up the process of running machine learning models, specifically during the inference phase. Inference is the phase where a trained machine learning model makes predictions or decisions based on new, unseen data.

The researchers claim in a recent paper that a CRAM-based machine learning inference accelerator can achieve an energy-efficiency improvement on the order of 1,000 times. Other examples showed energy savings of 2,500 and 1,700 times compared to traditional methods.

What Makes It So Different? 

The difference with the CRAM model is that, whereas current AI processes involve a constant transfer of data between logic (where information is processed within a system) and memory (where the data is stored), the CRAM model performs computations directly within the memory cells. Keeping the data within this computational random-access memory (CRAM) means there’s no need for slow and energy-intensive data transfers to take place, which results in much greater efficiency.
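A back-of-envelope model shows why eliminating data movement matters. The per-operation energy figures below are purely illustrative assumptions (data movement between memory and logic generally costs far more energy than the arithmetic itself); they are not figures from the UMN paper.

```python
# Illustrative sketch: energy cost of a conventional (von Neumann) design,
# where operands shuttle between memory and logic, vs. an in-memory design
# where computation happens where the data already lives.
# The picojoule figures are assumed for illustration only.

MOVE_PJ = 100.0  # assumed energy to move one operand between memory and logic
MAC_PJ = 1.0     # assumed energy for one multiply-accumulate operation

def von_neumann_energy(n_ops):
    # Each operation: fetch operand to logic, compute, write result back.
    return n_ops * (2 * MOVE_PJ + MAC_PJ)

def in_memory_energy(n_ops):
    # CRAM-style: no transfer, only the computation itself.
    return n_ops * MAC_PJ

ops = 1_000_000
print(von_neumann_energy(ops) / in_memory_energy(ops))  # 201x under these assumptions
```

Under these assumed numbers the saving is roughly 200x; the point is that the gain comes almost entirely from removing the transfers, which is the mechanism behind the much larger factors the researchers report for their actual hardware.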

Jian-Ping Wang, the senior author of the research paper about CRAM, highlighted how this idea of using memory cells directly for computing “20 years ago was considered crazy”, but thanks to the interdisciplinary faculty team built at the University of Minnesota (UMN), they have “demonstrated that this kind of technology is feasible and is ready to be incorporated into technology.” 

Very Flexible Too 

In comments posted on the UMN website, Ulya Karpuzcu, an expert on computing architecture and co-author on the paper, has also highlighted another reason why CRAM is more energy-efficient than the traditional building blocks of today’s AI systems. Karpuzcu said: “As an extremely energy-efficient digital based in-memory computing substrate, CRAM is very flexible in that computation can be performed in any location in the memory array. Accordingly, we can reconfigure CRAM to best match the performance needs of a diverse set of AI algorithms.” 

Builds On MTJ Research 

This latest discovery builds on Wang and his team’s previous groundbreaking, patented research into Magnetic Tunnel Junction (MTJ) devices. These are the nanostructured devices used to improve hard drives, sensors, and other microelectronics systems, including Magnetic Random Access Memory (MRAM), which has been used in embedded systems such as microcontrollers and smartwatches.

Semiconductors 

Following their own successful demonstration of the efficiency boost provided by CRAM-based hardware, the research team is now planning to work with semiconductor industry leaders, including those in Minnesota, to provide large-scale demonstrations and produce the hardware to advance AI functionality.

What Does This Mean For Your Business? 

The development of this new CRAM-based machine learning inference accelerator could be a significant breakthrough with far-reaching implications across several industries. For example, for the semiconductor industry, this discovery could bring a new era of innovation. By partnering with the University of Minnesota researchers, semiconductor companies have an opportunity to lead the charge in creating energy-efficient AI hardware, offering a competitive edge in an increasingly sustainability-focused market. The ability to reduce energy consumption by such a vast factor may not only address the growing concern over AI’s carbon footprint but may also align with global initiatives towards greener technologies.

For AI application makers and users, the introduction of CRAM-based technology could revolutionise the way AI systems are designed and deployed. The drastic reduction in energy consumption may allow developers to create more complex and capable AI applications without being constrained by energy costs and efficiency limitations. This could lead to a surge in innovation, as more businesses could afford to implement advanced AI solutions, knowing that their energy requirements and associated costs will be manageable. Users of these AI applications may benefit from faster, more responsive, and more cost-effective services, as the energy savings translate into enhanced performance and lower operational costs.

The energy industry, too, stands to benefit from this technological advancement. With AI’s projected energy consumption doubling within a few years, the shift towards more energy-efficient computing is not just beneficial but essential. By adopting CRAM-based hardware, data-centres and other large-scale AI operators could significantly reduce their energy demands. This reduction may ease the pressure on energy resources and help stabilise energy prices, which is particularly important as demand continues to grow. For data-centre operators, in particular, the promise of lower energy consumption translates directly into reduced operating costs, making them more competitive and sustainable.

Also, this development may support global carbon emission targets, a concern shared by governments, businesses, and consumers alike. By enabling a reduction in energy usage by a factor of 1,000 or more, the adoption of CRAM-based AI technology could substantially cut carbon emissions from data-centres and other heavy users of AI. This would align with the goals of many corporations and nations trying to meet climate commitments and reduce their environmental impact. The widespread implementation of such efficient technology could even become a cornerstone of global efforts to combat climate change, offering a practical and impactful solution to one of the most pressing challenges of our time.

The advent of CRAM-based machine learning inference accelerators, therefore, may not only transform the AI landscape but could also reshape industries and address critical global challenges. By embracing this technology, businesses could achieve greater efficiency and performance as well as contributing to a more sustainable and environmentally friendly future.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an easy-to-read, jargon-free style. 
