Security Stop Press : Microsoft Disrupts Major Cybercrime Gateway Service
Microsoft’s Digital Crimes Unit has reported disrupting the activities of major cybercrime-as-a-service provider Storm-1152. Microsoft says Storm-1152 has created approximately 750 million fraudulent Microsoft accounts for sale, earning the group millions of dollars in illicit revenue and costing Microsoft and other companies even more to combat its criminal activity.
Fraudulent online accounts of the type Storm-1152 has been creating act as the gateway to many types of cybercrime, including mass phishing, identity theft and fraud, and distributed denial of service (DDoS) attacks.
Microsoft says that its disruption strategy involves obtaining a court order to take websites used by Storm-1152 offline, thereby removing fraudulent Microsoft accounts and the websites used to sell services that can bypass security measures on other well-known technology platforms.
Sustainability-in-Tech : The Battery ‘Domino’ Effect That Could Help Us Hit Climate Goals
A report by the Rocky Mountain Institute highlights how a domino effect of surging battery demand could put global climate goals within reach by enabling a 22 Gigatons per year reduction in CO2 emissions.
The Surge in Battery Demand – A Domino Effect
The report suggests that the world is witnessing a shift in energy dynamics driven by the exponential growth in battery demand, a phenomenon it describes as a “domino effect” that will cascade from country to country and sector to sector.
The report highlights how this unprecedented battery demand isn’t just a trend: it could significantly contribute to abating transport and power emissions and (hopefully) phasing out half of global fossil fuel demand. The assertion is that this domino effect of battery demand could be the thing that sets the world on a clear trajectory towards achieving over 60 per cent of the necessary milestones for a zero-carbon energy system.
The S-Curve of Battery Growth
The Rocky Mountain Institute report highlights how central to understanding this shift is the S-curve pattern of battery demand. Imagining an ‘S’ as a graph illustrating the growth of battery demand, the curve begins slowly, accelerates sharply, then levels off. The report explains that this is because:
– Battery sales have been doubling every two to three years and by 2030, sales are expected to increase by six to eight times, potentially reaching 5.5-8 TWh (terawatt-hours) per year.
– The costs of making each battery will decrease as production increases – for every doubling of production, costs are projected to fall by 19 to 29 per cent.
– As well as cost reduction, battery quality will improve. For example, battery energy density (the energy stored relative to a battery’s size and weight) is expected to increase by 7 to 18 per cent each time production doubles. By 2030, therefore, top batteries may store as much as 600-800 Wh/kg (watt-hours per kilogram).
– The report highlights that the above effects could mean that by 2030, battery cell costs may have fallen to $32-54 per kWh, making them much more affordable and efficient.
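As a rough check, the report’s learning-curve arithmetic can be sketched in a few lines of Python (a minimal illustration: the 19 to 29 per cent decline per doubling comes from the report, while the $100/kWh starting cost and the number of doublings to 2030 are assumptions chosen to match the report’s stated ranges):

```python
# Wright's-law sketch: cost per kWh falls by a fixed fraction with each
# doubling of cumulative production. The 19-29% decline per doubling is
# from the report; the starting cost and doubling counts are assumptions.
def cost_after_doublings(start_cost, learning_rate, doublings):
    """Projected cost per kWh after `doublings` doublings of output."""
    return start_cost * (1 - learning_rate) ** doublings

start = 100.0  # assumed ~$100/kWh battery cell cost today
for rate in (0.19, 0.29):   # report's projected decline per doubling
    for n in (3, 4):        # sales doubling every 2-3 years to 2030
        cost = cost_after_doublings(start, rate, n)
        print(f"{rate:.0%} learning rate, {n} doublings: ${cost:.0f}/kWh")
```

With three to four doublings, the sketch lands in roughly the $25-53/kWh region, of the same order as the report’s $32-54/kWh projection for 2030.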
The “Domino Effect” (Across Sectors and Geographies)
The domino effect of battery demand and usage that the report talks about refers to how once new battery technology is successful, it jumps sectors as well as geographies. For example, initially rooted in consumer electronics, battery technology then expanded into motorbikes, buses, and cars. Its current trajectory is towards stationary electricity storage, road haulage, and eventually, short-haul ships and planes by 2030. Geographically, the effect mirrors this sectoral spread. For example, after gaining momentum in early adopter nations, battery technology is now being rapidly adopted in major markets like China, Europe, the United States, Southeast Asia, and India.
The Largest Clean Tech Market Emerges
This explosive growth in battery demand has catalysed the most significant capacity ramp-up since World War II. The race to the top has led to the construction of 400 ‘gigafactories’, capable of producing 9 TWh of batteries annually by 2030!
This development has propelled the battery market to become the largest clean tech market, surpassing combined investments in solar and wind power.
Impact on Fossil Fuel Demand and Climate Goals
If the figures highlighted in the report come to fruition, the implications for fossil fuel demand are, of course, likely to be profound. It could mean, for example, that batteries are poised to replace significant portions of fossil fuel demand in electricity (175 EJ) and road transport (86 EJ), while also challenging the remaining demand in shipping and aviation (23 EJ). If this shift occurs at this scale, it could be pivotal in reducing global emissions by 22 Gigatons of CO2 per year, thereby representing a significant leap towards meeting global energy-related emissions targets.
Challenges and Opportunities Ahead
Despite the promise highlighted in the report, challenges remain. Stressed supply chains and the need for sustainable raw material sourcing are likely to be critical concerns. Also, building the infrastructure for a battery-dominated energy system looks like it’s a monumental task that will require consistent innovation and investment. That said, the ongoing efforts of companies, governments, researchers, and climate advocates, plus the fact that serious progress has to be made in reducing global CO2 emissions (to keep below 1.5°C of warming) are likely to mean that these challenges could be overcome.
It’s Not All Positive
Some of the other major challenges caused by a huge surge in demand for (and production of) batteries, which the report doesn’t talk much about, include:
– The environmental damage from mining. Extracting raw materials like lithium and cobalt can cause habitat destruction, water pollution, and soil erosion.
– Supply chain risks. For example, although the report sees a domino effect of battery adoption across many countries, there is still likely to be a reliance on a few countries for critical materials which raises geopolitical and supply chain concerns, particularly with materials sourced under conditions of environmental or social harm.
– The considerable carbon footprint of battery manufacturing. Battery production is energy-intensive and, if powered by fossil fuels, contributes to carbon emissions.
– Massive recycling and waste management issues. Disposing of (and recycling) rapidly increasing numbers of batteries could pose environmental and health risks due to toxic materials. Current recycling rates are low, and processes can be costly.
– The scarcity of resources. Increased demand for materials like lithium and cobalt could lead to scarcity and higher prices.
– The social and economic impacts of shifts in job markets, particularly in regions dependent on fossil fuel industries, will require new skills and training.
– Transportation hazards from moving large quantities of batteries, e.g. fire and chemical spill hazards.
– Market oversaturation risks. Overproduction could lead to economic challenges in the battery industry.
Mitigation efforts will, therefore, need to include sustainable mining, improved recycling, responsible supply chain management, and development of less environmentally impactful battery technologies – something which is still very much in the research stage.
What Does This Mean For Your Organisation?
The battery revolution outlined in the report could have significant and broad implications for all kinds of businesses and other organisations. This shift presents a unique opportunity for businesses to be at the forefront of a sustainable future. Adopting battery technology could lead to a significant reduction in carbon footprints, offering a pathway to meet environmental goals and adhere to increasingly stringent regulations. Beyond compliance, it may also open avenues for innovation in product development, energy management, and operational efficiency.
This rapidly evolving energy landscape, however, will require organisations to reassess their supply chain strategies; the surge in battery demand implies a need for more robust and sustainable supply networks. Businesses will, therefore, need to ensure a stable supply of materials, potentially reconfiguring sourcing and manufacturing processes to accommodate the growing battery market. This could involve forming new partnerships and investing in technologies that align with the shift towards renewable energy sources.
Also, companies may need to invest in (or partner with) entities for charging infrastructure and energy storage solutions. This investment may not be just a cost but an opportunity to be part of an emerging market that is set to outpace traditional energy sectors.
For organisations in the energy sector, we appear to be at a pivotal moment to move towards clean technologies. The battery market, now overshadowing solar and wind investments, presents new opportunities for growth and innovation. Energy companies could leverage their expertise and resources to lead in battery technology and storage solutions, carving out a significant role in the new energy ecosystem.
This transition to batteries will also bring challenges for workforce skills and knowledge. Organisations will need to invest in training and development to equip their workforce with the necessary skills to navigate the changing technological landscape. This will include an understanding of battery technologies, renewable energy systems, and the accompanying intricacies of new regulatory and market environments.
The change, of course, isn’t likely to be confined to the energy sector alone. Industries like automotive (already with EVs), transportation, and manufacturing are directly impacted and will need to adapt their business models. This might involve transitioning fleets to electric vehicles, rethinking logistics based on battery storage capacities, or redesigning products to be more energy efficient.
Organisations will also have a role to play in shaping policy and public opinion. Collaborative efforts with governments, research institutions, and environmental groups could help in advocating for favourable policies, incentivising renewable energy adoption, and educating the public about the benefits of this transition.
The battery revolution suggested in this report isn’t just a shift in energy preference but a comprehensive change in how businesses will need to operate, innovate, and grow. Being part of a sustainable future will require proactive adaptation, strategic planning, and collaborative efforts. Organisations that embrace this change will not only contribute to a greener planet but also position themselves competitively in a world increasingly driven by clean technology.
Tech Tip – Voice Typing In Windows
If you don’t want to type but need to produce some written content (e.g. dictate something for an email message or an offer) the Windows ‘voice typing’ feature easily and quickly transcribes your speech to text. If you haven’t yet tried it, you may find it fun as well as useful. Here’s how it works:
– Open a suitable Windows app for text, e.g. Word, and click into the document where you want the text to start.
– Press Win + H. If you haven’t yet toggled on ‘Speech recognition’, it will provide a link to Settings to enable you to do so. Toggle it to the ‘on’ position.
– Once that’s done, close Settings, return to your Word document and press Win + H again.
– The microphone symbol will appear at the top of the screen. Click on the microphone and wait for the ‘listening’ notification.
– Dictate your text and it will be converted to text in your Word document.
Featured Article : Anti-Trust : OpenAI And Microsoft
Following the recent boardroom power struggle that led to the sacking and reinstatement of OpenAI boss Sam Altman, Microsoft’s relationship with OpenAI is now under US and UK antitrust scrutiny.
What Happened?
A recent boardroom battle at OpenAI (ChatGPT’s creator and working partner of Microsoft) led to the rapid ousting of OpenAI’s boss Sam Altman and the resignation of OpenAI’s co-founder Greg Brockman. Both men were reported to have been immediately hired by Microsoft to launch a new advanced AI research team with Altman as CEO. Then, just days later, after the board (apart from Adam D’Angelo) was replaced by a new initial board, Sam Altman returned and was reinstated as OpenAI’s CEO.
What’s The Issue?
The factors that appear to have attracted US and UK regulators over antitrust concerns are:
– Microsoft has long been a significant supporter and backer of OpenAI, investing in the company and also integrating OpenAI’s technologies within Microsoft’s own products and cloud services. This collaboration has helped in scaling OpenAI’s research and the implementation of AI technologies, particularly in areas like large language models, cloud computing, and AI ethics and safety. It could also, however, be seen as background evidence of a close relationship between the two companies.
– As mentioned earlier, when Sam Altman was ousted, Microsoft reportedly immediately hired him as CEO of a new research team there (further evidence of a very close relationship).
– Microsoft has been granted a non-voting, observer position at OpenAI by a new three-member initial board. This means that Microsoft’s representative can attend OpenAI’s board meetings and access confidential information (but can’t vote on matters including electing or choosing directors). However, it’s not yet been reported who from Microsoft will take the non-voting position and what a final (rather than the initial) OpenAI board would look like.
– More specifically, the main concern of regulators appears to be whether the partnership between OpenAI and Microsoft has resulted in an “acquisition of control”, i.e. whether one party has material influence, de facto control, or more than 50 per cent of the voting rights over another entity. Such control, for example, could negatively impact market competition. The UK’s Competition and Markets Authority (CMA) is particularly looking into whether there have been changes in the governance of OpenAI and the nature of Microsoft’s influence over its affairs.
– The CMA recently stated that it’s considering whether it is (or may be) the case that Microsoft’s partnership with OpenAI (or any changes thereto) has resulted in the creation of a relevant merger situation under the merger provisions of the Enterprise Act 2002. Also, if so, the CMA has stated that it’s interested in whether the creation of that situation may be expected to result in a substantial lessening of competition within any market or markets in the United Kingdom for goods or services. The CMA has opened an investigation of the partnership between Microsoft and OpenAI which is currently at the comments and information gathering stage which closes on 3 January 2024.
– Although OpenAI’s parent is a non-profit company (a type of entity that is rarely subject to antitrust scrutiny), in 2019 it set up a for-profit subsidiary, in which Microsoft is reported to own a 49 per cent stake. It’s also been reported that Microsoft is prepared to invest more than $10 billion into the startup.
In The US?
Although the above points relate to the UK, the US Federal Trade Commission (FTC) is also reported to be examining the nature of Microsoft’s investment in ChatGPT maker OpenAI in relation to whether it may violate antitrust laws but hasn’t yet opened a formal investigation.
What Does Microsoft Say?
Microsoft has stated publicly that it doesn’t own any part of OpenAI. Company spokesman, Frank Shaw, said: “While details of our agreement remain confidential, it is important to note that Microsoft does not own any portion of OpenAI and is simply entitled to share of profit distributions”.
Meaning?
Microsoft’s statement that it doesn’t own any part of OpenAI and is merely entitled to a share of profit distributions addresses only one facet of potential antitrust concerns, i.e. mainly the question of ownership. However, antitrust issues often encompass more than just ownership stakes. They can involve questions of influence, control, or exclusive agreements that might affect market competition.
Regulators may still be interested in the broader implications of the Microsoft-OpenAI relationship. This could include the extent of influence that Microsoft might have over OpenAI’s decisions, the potential for their partnership to impact market dynamics in the AI sector, or any exclusive benefits Microsoft might gain. The focus of antitrust authorities, therefore, often extends to how such partnerships influence market fairness, innovation, and consumer choice.
What Does This Mean For Your Business?
In the aftermath of the boardroom changes at OpenAI, including the dramatic sacking and reinstatement of CEO Sam Altman, the antitrust spotlight has turned to the intricate relationship between Microsoft and OpenAI. This scrutiny, in both the US and UK, may go beyond just speculation of a merger and is likely to look at broader concerns of influence and control within the fast-evolving AI sector. The investigations are, therefore, part of a regulatory interest in ensuring competitive fairness in the fast-growing and evolving AI industry.
For businesses, this could translate into an era of increased oversight on AI collaborations and investments. Regulators’ concerns over the concentration of power in the AI industry signal a need for businesses to be cautious. The focus is not just on maintaining competitive markets but also on preventing any monopolistic control over emerging and critical technologies like AI. This evolving regulatory landscape indicates that businesses need to consider the broader implications of their strategic partnerships beyond mere ownership stakes.
Microsoft’s assertion that it doesn’t own any part of OpenAI and is only entitled to profit distributions addresses direct ownership concerns but doesn’t fully alleviate antitrust concerns. The nature of their collaboration, potential influence on business decisions, and any exclusive benefits or access could still be under scrutiny.
The parallel inquiries by the FTC in the US and the CMA also appear to suggest a harmonised approach towards regulating major AI partnerships and means that companies operating transnationally in the AI space must be aware of regulatory developments in multiple jurisdictions. The CMA’s investigation into whether the Microsoft-OpenAI partnership has created a “relevant merger situation” under the Enterprise Act 2002, and its potential impact on market competition, could also set precedents affecting future tech collaborations.
Tech Insight : New Privacy Features For Facebook and Instagram
Meta has announced the start of a roll-out of default end-to-end encryption for all personal chats and calls via Messenger and Facebook, with a view to making them more private and secure.
Extra Layer Of Security and Privacy
Meta says that despite it being an optional feature since 2016, making it the default has “taken years to deliver” but will provide an extra layer of security. Meta highlights the benefits of default end-to-end encryption saying that “messages and calls with friends and family are protected from the moment they leave your device to the moment they reach the receiver’s device” and that “nobody, including Meta, can see what’s sent or said, unless you choose to report a message to us.”
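The end-to-end principle Meta describes, where only the sender’s and receiver’s devices hold the keys, can be sketched with a toy Diffie-Hellman exchange. This is a deliberately simplified illustration of the concept, not the actual Signal-based protocol Meta uses, and nothing here is safe for real-world use:

```python
# Toy illustration of the end-to-end principle (NOT real cryptography):
# each device keeps a private key, the two ends agree a shared secret
# via Diffie-Hellman, and the relay server only ever sees ciphertext.
# Real apps use vetted protocols (e.g. the Signal protocol).
import hashlib
import secrets

P = 2**127 - 1  # toy-sized prime; far too small for real-world use
G = 3           # generator (illustrative choice)

def keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)  # (private exponent, public value)

def shared_key(own_priv, peer_pub):
    secret = pow(peer_pub, own_priv, P)  # same value at both ends
    return hashlib.sha256(str(secret).encode()).digest()

def xor_cipher(key, data):
    # Toy symmetric cipher: XOR against a key-derived pad. Applying it
    # twice with the same key recovers the original bytes.
    pad = hashlib.sha256(key + b"pad").digest()
    pad = (pad * (len(data) // len(pad) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, pad))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Only the public values cross the network; both ends derive the
# same session key, which the relay never learns.
k_alice = shared_key(alice_priv, bob_pub)
k_bob = shared_key(bob_priv, alice_pub)
assert k_alice == k_bob

ciphertext = xor_cipher(k_alice, b"hello Bob")  # what the relay sees
plaintext = xor_cipher(k_bob, ciphertext)       # only Bob can decrypt
print(plaintext)  # b'hello Bob'
```

The point of the sketch is the assertion near the end: both devices independently derive the same key from values a relay never sees, so the relay can forward the ciphertext without being able to read it.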
Default end-to-end encryption will roll out to Facebook first and then to Instagram later, after the Messenger upgrade is completed.
Not Just Security and Privacy
Meta is also keen to highlight the other benefits of its new default version of end-to-end encryption for users which include additional functionality, such as the ability to edit messages, higher media quality, and disappearing messages. For example:
– Users can edit messages that may have been sent too soon, or that they’d simply like to change, for up to 15 minutes after the messages have been sent.
– Disappearing messages on Messenger will now last for 24 hours after being sent, and Meta says it’s improving the interface to make it easier to tell when ‘disappearing messages’ is turned on.
– To retain privacy and reduce pressure on users to feel like they need to respond to messages immediately, Meta’s new read receipt control allows users to decide if they want others to see when they’ve read their messages.
When?
Considering that Facebook Messenger has approximately 1 billion users worldwide, the roll-out could take months.
Why Has It Taken So Long To Introduce?
Meta says it’s taken so long (7 years) to introduce because its engineers, cryptographers, designers, policy experts and product managers have had to rebuild Messenger features from the ground up using the Signal protocol and Meta’s own Labyrinth protocol.
Also, Meta had intended to introduce default end-to-end encryption back in 2022 but had to delay its launch over concerns that it could prevent Meta detecting child abuse on its platform.
Other Messaging Apps Already Have It
Other messaging apps that have already introduced default end-to-end encryption include Meta-owned WhatsApp (in 2016), and Signal Foundation’s Signal messaging service, which has also been upgraded to guard against future encryption-breaking attacks (as much as you realistically can), e.g. quantum computer encryption cracking.
Issues
There are several issues involved with the introduction of end-to-end encryption in messaging apps. For example:
– Governments have long wanted to force tech companies to introduce ‘back doors’ to their apps using the argument that they need to monitor content for criminal activity and dangerous behaviour, including terrorism, child sexual abuse and grooming, hate speech, criminal gang communications, and more. Unfortunately, creating a ‘back door’ destroys privacy, leaves users open to other risks (e.g. hackers) and reduces trust between users and the app owners.
– Attempted legal pressure has been applied to apps like WhatsApp and Facebook Messenger, such as the UK’s Online Safety Act. The UK government wanted to have the ability to securely scan encrypted messages sent on Signal and WhatsApp as part of the law but has admitted that this can’t happen because the technology to do so doesn’t exist (yet).
There are many compelling arguments for having (default) end-to-end encryption in messaging apps, such as:
– Consumer protection, i.e. it safeguards financial information during online banking and shopping, preventing unauthorised access and misuse.
– Business security, e.g. when used in WhatsApp and VPNs, encryption protects sensitive corporate data, ensuring data privacy and reducing cybercrime risks.
– Safe Communication in conflict zones (as highlighted by Ukraine). For example, encryption can facilitate secure, reliable communication in war-torn areas, aiding in broadcasting appeals, organising relief, combating disinformation, and protecting individuals from surveillance and tracking by hostile forces.
– Ensuring the safety of journalists and activists, particularly in environments with censorship or oppressive regimes, by keeping information channels secure and private.
For most people using Facebook’s Messenger app, however, encryption is simply more of a general reassurance.
What Does This Mean For Your Business?
For Meta, the roll-out of default end-to-end encryption for Facebook and Instagram has been a bit of a slog and a long time coming. However, its introduction to bring FB Messenger in line with Meta’s popular WhatsApp essentially enhances user privacy and security and helps Facebook to claw its way back a little towards positioning itself as a company that’s a strong(er) advocate for digital safety.
For UK businesses, this move offers enhanced protection for sensitive data and communication, aligning with growing demands for cyber security and providing some peace of mind. However, the move presents further challenges and frustration for law enforcement and the UK government, potentially complicating efforts to monitor criminal activities and enforce regulations like the Online Safety Act. Overall, the initiative could be said to underscore a broader trend towards prioritising user privacy and security in the digital landscape, as well as being another way for tech giants like Meta to compete with other apps like Signal. It’s also a way for Meta to demonstrate that it won’t be forced into bowing to government pressure that could destroy the integrity and competitiveness of its products and negatively affect user trust in its brand (which has taken a battering in recent years).
Tech News : EU’s AI Regulations Agreed
Following 36 hours of talks, EU officials have finally reached a historic provisional deal on laws to regulate the use of artificial intelligence.
The Artificial Intelligence Act
The Council presidency and the European Parliament’s negotiators’ provisional agreement relates to the proposal on harmonised rules on artificial intelligence (AI), the so-called artificial intelligence act.
The EU says the main idea behind the rules is to regulate AI based on its capacity to cause harm to society, i.e. following a ‘risk-based’ approach: the higher the risk, the stricter the rules.
Protection & Stimulating Investment
The comprehensive, world-first draft regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. The hope is that this will also help stimulate investment and innovation in AI within Europe.
The Key Elements
Some of the key elements in the draft AI act include:
– Rules relating to high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems.
– A revised system of governance with some enforcement powers at EU level.
– The extension of a list of prohibitions but with the possibility to use remote biometric identification by law enforcement authorities in public spaces, subject to safeguards.
– Improved protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.
The Key Aspects
The new EU Artificial Intelligence Act covers several key aspects:
– Clarifying the definitions and scope of the proposed act. For example, the definition of an AI system aligns with the Organisation for Economic Co-operation and Development’s (OECD) approach, providing clear criteria to distinguish AI from simpler software. The regulation excludes areas outside EU law, national security, military/defence purposes, and AI used solely for research, innovation, or non-professional reasons.
– The classification of AI systems and prohibited practices. AI systems are classified into high-risk and limited-risk categories. High-risk AI systems must meet certain requirements and obligations for EU market access, while limited-risk ones have lighter transparency obligations. The act bans AI practices considered unacceptable, like cognitive behavioural manipulation and untargeted facial image scraping.
– Law enforcement exceptions. For example, the draft rules include specific provisions allowing law enforcement to use AI with safeguards, including emergency deployment of high-risk AI tools and restricted use of real-time remote biometric identification.
– New rules addressing general-purpose AI (GPAI) systems and foundation models, with specific transparency obligations and a stricter regime for high-impact foundation models.
– A new governance architecture. An AI Office within the Commission will oversee advanced AI models, supported by a scientific panel. The AI Board, comprising member states’ representatives, will coordinate and advise, complemented by an advisory forum for stakeholders.
– Penalties. Fines for violations are set as a percentage of the offending company’s global annual turnover or a predetermined amount, with caps for SMEs and startups.
– Rules around transparency and protection of fundamental rights. For example, high-risk AI systems require a fundamental rights impact assessment before market deployment, while increased transparency is mandated, especially for public entities using such systems.
– Measures in support of innovation including AI regulatory sandboxes for real-world testing and specific conditions and safeguards for AI system testing. The act also aims to reduce the administrative burden for smaller companies.
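The risk-based logic running through the aspects above can be sketched as a simple lookup. The tier names follow the draft act’s categories and the two prohibited practices come from the text above, but the other use-case assignments are hypothetical assumptions for illustration, not a legal reading of the act:

```python
# Sketch of the act's risk-based approach: the higher the risk, the
# stricter the rules. Tier assignments for the non-prohibited examples
# are illustrative assumptions only.
OBLIGATIONS = {
    "unacceptable": "prohibited on the EU market",
    "high": "market-access requirements plus a fundamental rights impact assessment",
    "limited": "lighter transparency obligations",
}

EXAMPLE_USES = {  # hypothetical classifications, for illustration only
    "cognitive behavioural manipulation": "unacceptable",
    "untargeted facial image scraping": "unacceptable",
    "CV-screening recruitment tool": "high",
    "customer-service chatbot": "limited",
}

def obligations_for(use_case):
    """Return the (risk tier, obligations) pair for a named use case."""
    tier = EXAMPLE_USES[use_case]
    return tier, OBLIGATIONS[tier]

for use in EXAMPLE_USES:
    tier, duty = obligations_for(use)
    print(f"{use}: {tier} -> {duty}")
```

For a business, the practical question the real act poses is exactly this lookup: which tier does each deployed AI system fall into, and what obligations follow.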
EU Pleased
The comments of Carme Artigas, Spanish secretary of state for digitalisation and artificial intelligence, highlight how pleased the EU is that it’s managed to be first to at least put a provisional, draft set of regulations together. As she says on the EU’s Council of the EU pages: “This is a historical achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.”
More Than Two Years Away
However, despite the three days of negotiations and the announced provisional rules, it’s understood that the AI act (to which they will lead) won’t apply until two years after it comes into force (with some exceptions for specific provisions). Given that it’s just over a year since ChatGPT was released and that in that short time we’ve also seen the release of OpenAI’s Dall-E, Microsoft’s Copilot, Google’s Bard and Duet (and now its Gemini AI model), X’s Grok, and Amazon’s Q, you can’t help thinking that effective regulation of AI looks set to stay some way behind the rapidly advancing and evolving technology for some time yet.
Criticism
The idea of putting the AI act together for the EU got a negative response back in June, when it was criticised in an open letter from 150 executives representing many well-known companies, including Renault, Heineken, and Airbus. Criticisms included that the rules are too strict and ineffective, and could negatively impact competition and opportunity and undermine the EU’s technological ambitions.
What Does This Mean For Your Business?
The provisional agreement on the EU’s Artificial Intelligence Act is a double-edged sword for businesses in the AI sector. On one hand, it establishes a framework for regulating AI technologies, yet on the other, its long gestation period and potential for stringent regulations have raised concerns about its possible impact on innovation and competition for the EU.
The Act’s implementation timeline is actually a crucial factor for businesses. For example, the new regulations won’t come into force until at least two years after being finalised, thereby creating a window of uncertainty. During this period, AI technology will continue to evolve rapidly, most likely outpacing the regulations being put into place. This could all lead to a regulatory framework that is outdated by the time it is implemented, potentially stifling innovation and putting the EU at a technological disadvantage compared to other regions that may have more agile or less restrictive approaches.
Also, the Act’s stringent rules, particularly for high-risk AI systems, could impose significant compliance burdens on businesses. While these measures are intended to ensure safety and ethical use of AI, there is a risk that they might be too restrictive, hampering the ability of European companies to innovate and compete globally. Over-regulation, therefore, could deter investment in the AI sector, hindering the EU’s technological ambitions and possibly leading to a competitive disadvantage in the global AI landscape.
The balance between regulation and innovation is therefore a delicate one. While (what will become) the Act aims to protect fundamental rights and ensure the ethical use of AI, it also needs to foster an environment conducive to technological advancement. If the regulations are perceived as overly burdensome or inflexible, they could inhibit the growth and competitiveness of EU-based AI companies, impacting the broader European technology sector.
The EU’s AI Act may be a significant step towards regulating emerging technologies, but its success will largely depend on its ability to strike the right balance between safeguarding ethical standards and supporting innovation and competitiveness in the AI industry. Businesses must, therefore, prepare for a landscape that could change significantly in the coming years, staying agile and adaptable to navigate these upcoming regulatory challenges effectively.