An Apple Byte : Apple Facing Lawsuit Over Alleged Employee Spying
Apple Inc. is reportedly facing a lawsuit from employee Amar Bhakta, who alleges the company unlawfully monitors employees’ personal devices and iCloud accounts.
Filed in California state court, the lawsuit claims Apple requires employees to install software on personal devices used for work, granting the company access to personal data such as emails, photos, and health information. It also alleges that Apple’s policies permit surveillance of employees even when they are off duty and at home.
Bhakta, part of Apple’s digital advertising team since 2020, asserts that these policies have adversely affected his career opportunities. He claims he was prohibited from public speaking engagements related to digital advertising and was instructed to remove job-related information from his LinkedIn profile.
Apple has rejected the allegations, stating that the claims lack merit. The company has emphasised its commitment to employee rights, noting that all employees receive annual training on their rights to discuss wages, hours, and working conditions.
This case highlights the ongoing debate over employer access to employees’ personal devices and data, especially as the lines between work and personal life become increasingly blurred. For businesses, it underlines the importance of establishing clear, lawful policies on employee privacy and device usage to maintain trust and comply with legal standards.
Also, the outcome of this lawsuit could set a significant precedent, potentially influencing corporate practices concerning employee monitoring and privacy rights, and prompting companies to reassess their policies to ensure they respect employee privacy while safeguarding corporate interests.
Security Stop Press : TikTok Interference Annuls Romanian Elections
Romania’s Constitutional Court has annulled the first round of its presidential election due to allegations of Russian interference via TikTok.
Far-right candidate Călin Georgescu, a pro-Russian figure, had won with 23 per cent of the vote. Intelligence revealed a sophisticated influence operation involving over 25,000 TikTok accounts, which amplified Georgescu’s campaign, garnering 52 million video views in just four days. TikTok denies any preferential treatment but is under EU scrutiny to provide data on its algorithms and counter-disinformation measures.
Outgoing Prime Minister Marcel Ciolacu supported the annulment as vital for national security, while Georgescu called it a “formalised coup d’état.” The annulment leaves Romania in political limbo, with no set timeline for a new election.
This case highlights the risk of foreign interference via social media. Businesses should bolster cybersecurity, monitor for disinformation, and collaborate with platforms to counter coordinated inauthentic behaviour and protect digital integrity.
Sustainability-in-Tech : Carbon-Removal Material Trialled In Data Centre
Amazon Web Services (AWS) is to pilot a new AI-designed carbon-removal material at one of its data centres as part of a new strategic partnership with AI start-up Orbital Materials.
Why?
As data processing and storage requirements increase, data centres must handle increasingly complex AI workloads, pushing their energy and cooling demands ever higher. AWS, like other operators, has set ambitious carbon reduction targets, but purchasing offsets can be costly and less transparent. By partnering with Orbital and integrating a new carbon-removal material at an AWS data centre by 2025, the company aims to remove more CO₂ from the facility’s airflow than it produces, potentially at a lower cost than traditional offsets. It’s hoped that this approach will not only help AWS meet its sustainability commitments but also address the escalating operational and environmental pressures driving these changes.
Who is Orbital and What is the AWS Deal?
Orbital, launched at the end of 2022 and led by CEO Jonathan Godwin, operates from facilities in Princeton, New Jersey and London. The start-up uses an AI-driven platform to rapidly discover and test advanced materials for climate-focused solutions (work that would traditionally take years in a lab). According to Amazon’s website, since establishing its research and development lab in early 2024, Orbital has seen a tenfold improvement in its carbon-removal material’s performance, highlighting the revolutionary potential of AI-driven materials discovery.
Through its multi-year partnership with AWS, Orbital will supply a carbon-removal material for integration at an AWS data centre by 2025. The goal is to capture more CO₂ than the facility emits, helping AWS meet its carbon reduction targets and potentially offering a more cost-effective, transparent alternative to traditional offsets.
Carbon Removal at the Source
The principal idea behind the AWS–Orbital collaboration is to use data centres themselves as a platform for direct carbon capture. Data centres rely on vast, sophisticated cooling systems to maintain the optimal temperatures required by the thousands of servers inside. These cooling systems constantly circulate large volumes of air, providing an excellent opportunity to integrate a carbon-removal material that can filter out CO₂ molecules as they flow through.
How Does Orbital’s Carbon Removal Material Work?
Orbital’s CEO, Jonathan Godwin, recently described the advanced carbon-removal material the company produces as being “like a sponge at the atomic level”. The material’s tiny cavities are sized to interact specifically with CO₂, allowing it to trap the gas while letting other, less harmful components of the air pass through freely. By 2025, AWS plans to pilot this carbon-removal technology in one of its data centres, testing its scalability and real-world performance.
A More Cost-Effective Alternative
While some operators resort to carbon offsets to reduce their net emissions, these can be expensive and often involve complex verification processes. By capturing carbon directly from the air at the source, data centres could theoretically bypass intermediaries and reduce their reliance on offset markets. According to Jonathan Godwin, the added cost of incorporating Orbital’s carbon-removal material amounts to roughly 10 per cent of the hourly charge of renting a GPU chip for AI training, significantly less than the price of most carbon offsets. This cost-effectiveness could make the proposition commercially attractive, helping data centre operators improve their environmental performance without eroding their bottom line.
Efficiency and Water Usage
While reducing CO₂ emissions is a crucial goal, the AWS–Orbital partnership also aims to tackle other environmental challenges associated with large-scale computing infrastructure. For example, data centres are thirsty operations, requiring huge amounts of water to maintain their cooling systems. Therefore, the ability to integrate more efficient, high-performance materials into cooling processes could lead to reductions in both energy and water consumption.
Speaking about the partnership (on the Amazon website), Orbital’s CEO Jonathan Godwin said, “Our partnership with AWS will accelerate the deployment of our advanced technologies for data centre decarbonisation and efficiency. Working with the market-leading AWS team will accelerate our development of products in cooling, water utilisation, and carbon removal.” In a similar vein, Howard Gefen, General Manager of AWS Energy & Utilities, stated, “AWS looks forward to collaborating with Orbital and their mission to drive data centre decarbonisation and efficiency.”
By designing materials that can capture carbon, improve cooling efficiency, and potentially reduce water consumption, Orbital’s platform looks as though it could open new pathways for sustainable data centre operations. The success of these early trials could pave the way for more widespread adoption of such materials throughout the data centre industry.
Technical and Logistical Challenges
Of course, the introduction of any new technology brings its own challenges. For example, integrating an advanced filtration material into a complex data centre cooling system will alter airflow characteristics. Although this change could increase the workload on existing fans and pumps, Orbital believes the net effect will be positive: the slightly higher energy required to push air through the new filters should be more than compensated for by the benefits of lower emissions and improved resource efficiency.
Another pressing consideration is handling the captured CO₂. Once the gas is isolated from the airstream, what then? While specific details of how the carbon will be stored or reused have not yet been made clear, the partners will no doubt need robust protocols for managing the extracted greenhouse gases sustainably. Ensuring safe, long-term storage or practical utilisation of the captured carbon is likely to be key to the project’s overall success.
Not The Only One Involved In Data Centre Carbon Capture
It should be noted here that Orbital is not alone in pursuing on-site carbon capture in data centres. For example, other tech giants such as Alphabet (Google) and Meta have filed patents related to similar concepts, and start-ups like 280 Earth are also working on solutions to tackle data centre emissions at source. However, what appears to distinguish Orbital’s approach is its ability to move fast and iterate quickly. By using generative AI to design and test materials virtually, Orbital can arrive at promising formulations far faster than traditional lab-based methods.
This accelerated materials discovery process looks likely to give Orbital an edge in developing specialised compounds. For example, its carbon-removal material is tailored to work effectively with the hot, CO₂-laden air exiting data centre servers. Rather than building a generic carbon filter, Orbital can produce optimised materials that function well under real-world operational conditions.
Wider Applications and Open Access to AI Models
Beyond this single pilot project, Orbital’s technology could also have a much broader impact. For example, the start-up plans to make its open-source AI model ‘Orb’ available to AWS customers via Amazon SageMaker JumpStart and AWS Marketplace. This means that other companies tackling their own materials and climate challenges, whether in semiconductors, batteries, or electronics, will soon have a powerful new tool at their disposal.
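For readers curious what that access might look like in practice, the following is a minimal, hedged sketch of how an AWS customer could deploy and query a JumpStart-hosted model using the SageMaker Python SDK. The model identifier, IAM role, instance type, and request format shown are all assumptions for illustration only; no listing details for Orb had been published at the time of writing, so treat this as a generic JumpStart pattern rather than Orbital’s or AWS’s documented procedure.

```python
# Illustrative sketch only: deploying a JumpStart-hosted model with the
# SageMaker Python SDK (pip install sagemaker). The model ID below is
# hypothetical - no ID for Orbital's 'Orb' model has been published yet.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(
    model_id="orbital-orb-v1",  # hypothetical identifier
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # your own IAM role
)

# Deploy to a real-time endpoint; the instance type is an assumption and
# would depend on the model's size and on pricing considerations.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
)

# The request/response format is also an assumption - consult the model's
# documentation once it is listed on SageMaker JumpStart or AWS Marketplace.
response = predictor.predict({"inputs": "example materials-screening query"})
print(response)

# Clean up the endpoint when finished to avoid ongoing charges.
predictor.delete_endpoint()
```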
Such accessibility is critical. Orbital’s AI-driven approach appears to offer not just one clever solution to a pressing sustainability issue, but a new methodology for discovering and optimising advanced materials. By making these capabilities available in the cloud, Orbital and AWS hope to democratise materials R&D, empowering a wider range of enterprises to contribute to sustainability-driven innovation.
Keeping Pace with Sustainability Targets
The urgency driving projects like this also comes from the fact that large technology companies have pledged to reach net-zero carbon emissions within the coming decades. Yet, as AI models grow more complex and require ever more computational power, energy usage soars. Without new interventions, data centres risk undermining carefully set climate targets.
AWS, as the world’s largest cloud-computing provider by revenue, is under particular scrutiny. Millions of customers rely on its infrastructure, and sustainability commitments have become a point of competitive differentiation. By embracing on-site carbon capture and making advanced materials more accessible, AWS is not only working to meet its own targets but potentially setting a precedent that others in the industry may follow.
Potential Ripple Effects Across the Sector
If the AWS pilot proves successful, it could catalyse a wave of adoption in data centres across the globe. On-site carbon capture may offer a more transparent and reliable way of verifying emissions reductions than conventional offsets. It might even allow data centre operators to generate their own carbon credits by capturing more CO₂ than they produce, thereby transforming a cost centre into a revenue stream.
Such a shift would, however, require careful economic, regulatory, and environmental considerations. For now, the AWS–Orbital initiative is a test (albeit part of a “multi-year” commitment), but one that carries high stakes and considerable promise. This early pilot could be said to represent a proactive step towards embedding sustainability at the heart of AI-driven infrastructure and an opportunity to ensure that the digital revolution does not come at an unacceptable environmental cost.
What Does This Mean For Your Organisation?
In many ways, the AWS–Orbital pilot project encapsulates the evolving relationship between digital infrastructure and the urgent need to address our environmental responsibilities. By attempting to capture carbon on-site rather than relying solely on offsets, AWS is exploring a pathway that could be more transparent, cost-effective, and efficient. Orbital’s rapid, AI-driven approach to materials discovery highlights a significant shift in how quickly breakthroughs can be achieved, and the involvement of AWS, arguably one of the most influential players in the sector, puts added weight behind this experimentation.
However, the path forward is not going to be without its hurdles. For example, integrating new materials into data centres, ensuring that carbon can be meaningfully stored or reused, and consistently meeting demanding performance standards will all require careful planning and meticulous execution. Also, the costs, although promising at present, are likely to evolve alongside technological improvements and market conditions, meaning that careful economic analysis will remain crucial.
Beyond this specific partnership (described only as “multi-year”, so probably lasting a small number of years), the project suggests that the integration of advanced materials and AI-driven R&D could help carve out a more sustainable future for data centres worldwide.
Video Update : Talking To ChatGPT : An Example
As talking to AI is set to become mainstream, this video takes a look at speaking directly with ChatGPT and includes a couple of examples of what it can be used for. It’s expected that this mode of interaction will become second nature very soon.
[Note – To watch this video without glitches or interruptions, it may be best to download it first]
Tech Tip – Optimise Email Signatures with Transparent Images for Dark Mode
With many users adopting dark mode in the new Microsoft Outlook, using a transparent background for signature images such as logos ensures your emails look professional across both light and dark themes.
Why It Matters:
– Many users enable dark mode, which displays emails on a black or dark-coloured background.
– Images with a white background appear as distracting blocks against the dark background, disrupting the email’s appearance.
– Transparent images seamlessly adapt to both light and dark modes, maintaining a polished look.
How to Use Transparent Background Images:
Create a Transparent Image:
– Use a design tool (e.g., Photoshop, Canva, or any free online editor) to remove the background from your image.
– Save the file in PNG format to preserve transparency (for a scripted alternative, see the sketch after these steps).
Add the Transparent Image to Your Email Signature:
– Open Outlook and go to Settings > View all Outlook settings > Mail > Compose and reply.
– Under the Email signature section, upload the transparent PNG image and adjust its placement.
Test Your Email in Dark Mode:
– Send yourself a test email and view it in both light and dark modes to ensure the image displays correctly.
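If you prefer to automate the first step rather than use a design tool, the following is a minimal Python sketch using the Pillow library that turns near-white pixels transparent and saves the result as a PNG. The filenames and the white threshold are assumptions and will likely need adjusting for your own logo; complex images with white detail inside them are better handled in a proper editor.

```python
from PIL import Image  # pip install Pillow

def make_white_transparent(src_path: str, dst_path: str, threshold: int = 245) -> None:
    """Replace near-white pixels with transparency and save as PNG.

    'threshold' controls how close to pure white a pixel must be
    before it is made transparent - tune it for your own logo.
    """
    img = Image.open(src_path).convert("RGBA")
    cleaned = [
        (r, g, b, 0) if r >= threshold and g >= threshold and b >= threshold
        else (r, g, b, a)
        for (r, g, b, a) in img.getdata()
    ]
    img.putdata(cleaned)
    img.save(dst_path, "PNG")  # PNG preserves the alpha (transparency) channel

# Hypothetical filenames - replace with your own
make_white_transparent("logo-white-background.png", "logo-transparent.png")
```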
Featured Article : Oz Social Media Ban For Kids
Australia’s government has enacted legislation prohibiting children under 16 from accessing social media platforms to protect them from the harmful effects of online content, such as cyberbullying, exploitation, and exposure to inappropriate material.
The Online Safety Amendment
Under the new legislation, known as the Online Safety Amendment (Social Media Minimum Age) Bill 2024 and passed by the Australian Parliament in November, the ban will apply to social media platforms including Facebook, Instagram, TikTok, Snapchat, Reddit, and X (formerly Twitter). However, messaging services, gaming platforms, and educational sites like YouTube are exempt from these restrictions, reflecting their different usage and content dynamics.
Toughest Laws
Australia’s decision to enact what are now the world’s toughest social media regulations has ignited a global debate about the role of social media in young people’s lives and the responsibility of tech companies in safeguarding their well-being.
Why Has Australia Taken This Step?
The new legislation, which has been championed by Prime Minister Anthony Albanese, is seen as a necessary measure to protect children from the “harms” of social media. It addresses growing concerns about the impact of online platforms on young people’s mental and physical health, including issues like cyberbullying, exposure to inappropriate content, and the addictive nature of these apps.
As Prime Minister Albanese says, “Parents deserve to know we have their backs,” highlighting the emotional toll on families struggling to manage their children’s online activity. A YouGov poll, for example, has revealed that 77 per cent of Australians support the ban, reflecting a national consensus on the need for tighter controls.
The decision follows mounting evidence of the detrimental effects of social media on young users. A recent survey by the charity Stem4 revealed that 86 per cent of people aged 12 to 21 are worried about the negative impact of social media on their mental health. Specific concerns include cyberbullying, scams, predatory behaviour, and harmful content promoting self-harm or disordered eating. These issues have, in tragic cases, contributed to young people’s deaths by suicide, amplifying calls for decisive action.
What Does the Law Actually Entail?
The new legislation gives social media platforms 12 months to prevent under-16s from accessing their services. Non-compliance could result in fines of up to AUD 50 million (£25.7 million). The ban will apply to platforms like X (formerly Twitter), Instagram, TikTok, Snapchat, and Facebook, while sites like YouTube and LinkedIn have been excluded due to their nature (or existing restrictions).
Enforcement
Enforcement will be overseen by the eSafety Commissioner, with age verification technology expected to play a crucial role. However, details about the specific mechanisms remain unclear, sparking concerns about feasibility and privacy. Critics argue that without robust and reliable technology, such as biometric checks or ID-based verification, children could easily bypass restrictions using virtual private networks (VPNs) or fake accounts.
Unlike similar laws in other countries, Australia’s ban provides no exemptions for parental consent or existing users, making it the most stringent to date.
The Global Context and Potential Impact
With this move, Australia now joins a growing list of countries seeking to regulate social media access for young people. For example, Ireland and Spain already enforce a minimum age of 16, while France requires parental consent for under-15s to join such platforms. However, research has shown that children frequently circumvent these restrictions, raising doubts about their effectiveness.
In the UK, for example, the issue of underage social media use has also drawn significant attention. A survey by Ofcom, the UK’s media regulator, found that 22 per cent of children aged 8 to 17 lie about their age to access adult accounts. The lack of effective age verification has led to widespread exposure to harmful content. The Online Safety Act, due to take effect in 2025, will require platforms to implement stricter age verification, though critics argue it does not go far enough.
Could The UK Introduce Similar Legislation?
In response to Australia’s ban on social media for under-16s, UK Technology Secretary Peter Kyle has indicated similar measures are “on the table” but has emphasised the need for careful consideration to avoid unintended consequences.
The Online Safety Act 2023 in the UK already requires social media platforms to implement age restrictions and robust verification systems to protect children, but the government is exploring additional steps, including research into the impact of social media on young people, signalling possible stricter regulations.
Critics have warned, however, that bans could push children to unregulated platforms or lead to falsified ages, complicating enforcement, while also raising concerns about limiting access to information and social connection. The UK government is, therefore, proceeding cautiously, consulting widely to balance online safety with preserving children’s digital freedoms.
Responses from Tech Companies
It’s perhaps no surprise that the new Australian law has met with fierce resistance from tech giants. Companies like Meta (owner of Facebook and Instagram), Snap (the parent company of Snapchat), and TikTok have criticised the legislation as vague and impractical. Meta argues that the law “ignores evidence” from child safety experts and fails to address its stated goal of protecting young users.
LinkedIn, however, has taken a different stance, asserting that its professional networking platform is “too dull for kids” and does not attract underage users. By distancing itself from mainstream social media, LinkedIn appears to be hoping to avoid the logistical and financial burden of implementing age verification measures.
TikTok Australia has also raised concerns about the government’s approach, warning of “unintended consequences” stemming from rushed implementation. The platform’s submission to lawmakers stressed the need for more research and collaboration to develop effective solutions.
Challenges and Criticisms
While many support the ban as a necessary step to protect children, others have labelled it a “blunt instrument” that oversimplifies a complex issue. Critics point out several challenges, including:
– Privacy risks. The reliance on age verification technology raises significant privacy concerns. Biometric or ID-based systems could compromise users’ personal data, creating new vulnerabilities.
– Ineffectiveness. Past attempts to restrict social media access have often been undermined by tech-savvy youths. VPNs, fake accounts, and shared logins enable children to bypass restrictions, potentially driving them towards less regulated corners of the internet.
– Exclusion of young voices. Advocacy groups like the eSafety Youth Council have criticised the Australian government for excluding young people from the legislative process. They argue that teenagers, as primary stakeholders, should have a say in shaping policies that directly affect them.
– Potential for social isolation. For many young people, social media serves as a primary mode of communication and community-building. Removing access could exacerbate feelings of isolation, particularly for those in remote or marginalised communities.
– Impact on parents. The ban places significant responsibility on parents to enforce the rules, even as they grapple with the practicalities of managing their children’s online activity.
A Growing Global Debate
Australia’s legislation has undoubtedly set a precedent, prompting other nations to re-evaluate their own policies. Norway, for example, has already expressed interest in adopting similar measures, while France and the UK are monitoring the situation closely. The debate highlights the delicate balance between protecting young people and preserving their autonomy in an increasingly digital world.
As the world watches Australia’s bold experiment, it’s clear that the conversation about children and social media is far from over. Whether other countries will follow suit remains to be seen, but the spotlight is firmly on the responsibilities of tech companies, governments, and parents in shaping a safer online future for the next generation.
What Does This Mean For Your Business?
Australia’s groundbreaking legislation banning under-16s from social media represents a bold attempt to address the pressing challenges of unregulated online access for young people. By setting the strictest age limits globally, the country has ignited a conversation about the risks of social media, the responsibilities of tech companies, and the role of governments in safeguarding children.
Supporters view the move as a necessary step to combat issues like cyberbullying, exploitation, and harmful content, prioritising children’s well-being over corporate interests. However, it also presents significant challenges for social media companies, which must invest in robust age-verification systems and may lose a vital demographic that drives engagement and growth. Advertisers, too, are likely to feel the impact, particularly in industries targeting younger audiences. Businesses dependent on social media for branding and sales may need to rethink strategies, especially those aimed at families and younger consumers.
Critics warn that the policy may push children to unregulated platforms, complicate enforcement, and raise privacy concerns while limiting access to digital spaces that play a role in communication and learning. Internationally, the legislation has sparked interest, with nations including the UK monitoring its progress while recognising the complexities of similar measures.
Australia’s decision, therefore, challenges governments, tech companies, and society to rethink how children engage with social media. Its success or failure will influence global debates on online safety, shaping not only protections for young users but also the futures of businesses and advertisers online.