An Apple Byte : Apple in Workers’ Rights Dispute
The U.S. National Labor Relations Board (NLRB) has accused Apple of restricting employees’ rights to advocate for better conditions by limiting their use of social media and Slack and retaliating against those who raised concerns.
The NLRB’s allegations, filed this month, focus on Apple’s work rules regarding Slack and social media use. Apple, which introduced Slack to its employees several years ago, saw the platform grow as a key tool for workers to discuss workplace concerns, particularly during the COVID-19 pandemic. However, the NLRB says that Apple has since imposed restrictions on how workers can use the platform, undermining their ability to freely advocate for better working conditions.
The NLRB claims that Apple maintained illegal policies, including restricting the creation of new Slack channels without management’s approval and requiring employees to report workplace concerns directly to a manager or designated support team. The complaint also includes accusations of Apple sacking an employee for workplace activism and pressuring another to delete a social media post, actions which the NLRB claims violate labour laws.
The complaint is part of a broader pattern, with Apple facing a similar NLRB complaint just a week earlier, accusing it of enforcing overly broad confidentiality, nondisclosure, and noncompete agreements that limited workers’ rights. In both cases, Apple has denied the accusations. An Apple spokesperson is reported as saying that Apple is committed to providing “a positive and inclusive workplace”, takes employee concerns seriously, and strongly disagrees with the NLRB’s claims.
The current case stems from a 2021 complaint filed by former Apple employee Janneke Parrish, who claims she was sacked in retaliation for her role in workplace activism. Parrish had used Slack and social media to organise efforts around remote work, pay equity, and discrimination at Apple. Parrish’s lawyer argues that Apple engaged in “extensive violations” of workers’ rights, asserting that employees were punished for raising critical workplace issues, particularly around gender and racial discrimination.
If Apple and the NLRB can’t reach a settlement on the matter, it looks likely that the case will go to a hearing before an administrative judge in February. The outcome of the hearing could set a significant precedent for the rights of employees in the tech industry, with broader implications for how companies handle worker communication and activism.
Security Stop Press : China-Backed Hackers Breach Telecoms Wiretap Systems
China-backed hackers have breached the wiretap systems of several major U.S. telecom and internet providers, exposing critical vulnerabilities and likely collecting vast amounts of internet traffic to gather intelligence on Americans.
These wiretap systems, required by the 1994 Communications Assistance for Law Enforcement Act (CALEA), grant authorised personnel (e.g. law enforcement agencies) almost unfettered access to user data, including internet traffic and browsing histories. However, these systems have long been viewed as security risks, with experts warning of their potential misuse. For example, Georgetown Law professor Matt Blaze called the breach “inevitable,” highlighting the inherent dangers of building backdoors meant for lawful purposes, which are prone to exploitation by malicious actors.
The Wall Street Journal recently reported that the hacking group, known as ‘Salt Typhoon’, breached at least three of the largest U.S. providers – AT&T, Lumen, and Verizon – to access these systems. While the full extent of the damage remains unclear, some U.S. national security sources have described the breach as potentially catastrophic. The hackers are thought to be positioning for future cyberattacks, possibly in connection with tensions between the U.S. and China over Taiwan. The breach has reignited debate over the risks of government-mandated backdoors, with experts like Stanford’s Riana Pfefferkorn pointing out that such systems “jeopardise” rather than protect users.
The revelations come amidst growing global concern over government backdoors and encryption, with other countries, including those in the EU, also considering legislation that could weaken digital security. Signal president Meredith Whittaker echoed warnings that “there’s no way to build a backdoor that only the ‘good guys’ can use,” underscoring the wider implications of the breach.
To guard against the risk of such attacks, the advice for businesses is to use strong encryption, limit data access to the minimum necessary personnel, and continuously review and update security practices to close potential vulnerabilities in systems.
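As a concrete illustration of the “strong encryption” advice, here’s a minimal sketch of encrypting sensitive data at rest in Python, assuming the third-party cryptography package (the data and key handling here are simplified for illustration; in production, keys belong in a secrets manager, never in source code):

```python
# Minimal sketch: encrypting sensitive data at rest with authenticated encryption.
# Assumes the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, load this from a secrets manager
cipher = Fernet(key)          # Fernet = AES-128-CBC + HMAC-SHA256 under the hood

token = cipher.encrypt(b"customer browsing-history export")
print(cipher.decrypt(token))  # b'customer browsing-history export'
```

The same principle – encrypt data before it leaves your control, and keep keys separate from the data they protect – applies whatever library or language a business uses.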
Sustainability-in-Tech : AI-Designed Bacteria Create Rubber Alternative
Paris-based biotech startup BaCta, which has just secured €3.3 million in funding, produces natural rubber using genetically engineered bacteria, thereby offering a sustainable alternative to traditional rubber sources and synthetic, petroleum-based versions.
What’s The Problem With How We Get Rubber Now?
The current methods of rubber production present several significant environmental and sustainability issues. Synthetic rubber, which makes up about half of the global supply, is derived from petroleum-based chemicals. This process is highly energy-intensive and contributes heavily to CO2 emissions, exacerbating climate change. Also, synthetic rubber is non-biodegradable, meaning it persists in the environment, adding to the growing issue of plastic waste pollution.
Natural rubber, sourced from Hevea trees, is also not without its problems. While it may seem more environmentally friendly, the growing demand for rubber has driven deforestation in tropical regions, where land is cleared for plantations. This not only destroys vital ecosystems and reduces biodiversity but also releases significant amounts of carbon stored in trees and soil, further worsening climate change. Also, these rubber plantations are typically monocultures, which can degrade soil health and make crops more vulnerable to pests and disease.
Both forms of rubber production are under increasing pressure as manufacturers face stricter emissions regulations. The deforestation linked to natural rubber and the reliance on petrochemicals for synthetic rubber are incompatible with global sustainability goals. The industry also often suffers from supply chain instability, compounded by climate change and socio-political issues in rubber-producing regions.
Factors such as these have led to growing interest in alternatives like BaCta’s bioengineered rubber, which aims to offer a carbon-neutral, renewable solution that mitigates the environmental and ethical concerns associated with traditional rubber production.
How Does BaCta Make Rubber From Bacteria?
BaCta produces rubber using genetically engineered bacteria, specifically Escherichia coli. The process begins by feeding these bacteria a renewable feedstock, such as glucose, acetate, or even carbon directly captured from the atmosphere. Inside the bacteria, AI-designed enzymes transform the carbon source into isoprene, the key building block of rubber. The bacteria then polymerise the isoprene into natural rubber through a unique synthetic pathway. The resulting rubber is then extracted and purified. This method allows BaCta to create high-quality, carbon-neutral rubber without the environmental downsides of traditional methods, such as deforestation or petrochemical dependence.
Benefits
BaCta’s bioengineered rubber offers several key benefits. For example:
– Carbon neutrality. The production process is designed to be carbon-neutral, and potentially even carbon-negative, significantly reducing the carbon footprint compared to traditional rubber production. BaCta says on its website that not using traditional rubber could mean, “More than 500 million tons eqCO2 could be removed every year”.
– It uses renewable feedstock. BaCta uses renewable sources like glucose, acetate, and captured atmospheric carbon in its rubber production, thereby avoiding reliance on petroleum (used in synthetic rubber) or deforestation (linked to natural rubber).
– It’s hypoallergenic. By engineering the bacteria to remove specific proteins found in natural rubber (sap), BaCta’s rubber can be hypoallergenic, reducing the risk of allergic reactions.
– It’s sustainable. The process avoids the environmental issues of deforestation and land degradation associated with rubber plantations, making it a more sustainable option.
– It’s high quality. BaCta says its material is, “Superior quality Long chain, ultra-low impurity content, hypoallergenic rubber”.
– Cost competitiveness. BaCta aims to produce rubber at a price point that’s competitive with conventional rubber (at a “fixed price, no fluctuation, no uncertainty”), while delivering environmental benefits.
Has A Functioning ‘Proof of Concept’
BaCta has moved beyond the conceptual stage and already has a functioning proof of concept (PoC) for producing natural rubber using the engineered bacteria. That said, although the company has successfully demonstrated the process in the lab, it is still in the early stages of scaling up production. Currently, BaCta is working on increasing its output, aiming to move from laboratory-scale production (milligrams of rubber) to industrial levels, with the next step being a pilot-scale operation involving larger fermenters.
Funding
The company recently secured €3.3 million in funding from investors including OVNI Capital, Kima Ventures, and several business angels. This funding is intended to support the scale-up process, helping BaCta transition from producing small batches to larger quantities needed for commercial use.
Rubber For What?
Initially, BaCta plans to start by targeting the luxury fashion industry, e.g. for use in the manufacture of premium shoes and bags, which requires smaller amounts of high-quality rubber, before expanding into more industrial applications.
What Does This Mean For Your Organisation?
BaCta’s innovative approach to rubber production could have far-reaching implications for the many industries that rely heavily on rubber. From automotive manufacturers, which use rubber for tyres, seals, and various components, to healthcare sectors that depend on rubber for gloves, tubing, and other essential products, the potential applications of BaCta’s sustainable rubber are vast. Although BaCta’s initial target is businesses in the fashion industry, by providing a carbon-neutral, renewable alternative to traditional rubber, BaCta can potentially offer businesses in many industries a chance to significantly reduce their environmental impact. This is especially important as industries face mounting pressure to meet stringent emissions regulations and consumer demand for sustainable products.
For businesses, switching to BaCta’s bioengineered rubber could mean not only reducing their carbon footprints but also gaining a competitive edge in a marketplace that increasingly values eco-friendly practices. With its ability to produce hypoallergenic, high-quality rubber that is cost-competitive with traditional options, BaCta’s product could easily replace conventional rubber without sacrificing performance or cost efficiency. Also, as supply chain disruptions and resource scarcity become more prevalent due to climate change, BaCta’s method, which bypasses the need for deforestation and petrochemicals, presents a more stable and sustainable alternative.
As BaCta scales up its production, it could also help businesses mitigate the risks associated with the volatility of traditional rubber supply chains, which are often subject to geopolitical tensions and environmental degradation. If widely adopted, this new form of rubber could lead to a significant reduction in global CO2 emissions and deforestation, offering industries a pathway to sustainable growth while aligning with global climate goals. BaCta’s bioengineered rubber could, therefore, reshape the future of rubber-reliant industries, making sustainability a reality.
Video Update : 5 Ideas For Better AI Prompts
This video tutorial suggests five ideas for giving better prompts to generative AI, resulting in more accurate and useful results.
[Note – to watch this video without glitches/interruptions, it’s best to download it first].
Tech Tip – Use “Virtual Keyboard” for On-Screen Typing
The Virtual Keyboard can be a lifesaver if your physical keyboard is malfunctioning or unavailable, allowing you to type directly on the screen with your mouse or touchscreen. Here’s how to access it:
To Enable the Virtual Keyboard
– Press Win + S and type ‘On-Screen Keyboard’, then select the app from the search results.
To Use the Virtual Keyboard
– A keyboard will appear on your screen, allowing you to type using your mouse or by tapping the screen (if your device is touchscreen-enabled).
– This feature is helpful when you’re working remotely or troubleshooting hardware issues.
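For those who prefer a shortcut or a script, pressing Win + Ctrl + O on Windows 10/11 toggles the same keyboard, and it can also be launched programmatically. A minimal Python sketch, assuming a standard Windows installation where osk.exe (the On-Screen Keyboard executable) is available:

```python
# Minimal sketch: open the Windows On-Screen Keyboard from a script.
# Assumes Windows, where osk.exe ships as standard; on 64-bit Windows,
# a 64-bit Python avoids system-folder redirection issues.
import subprocess

subprocess.Popen("osk.exe", shell=True)  # launches the keyboard window
```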
Featured Article : AI Safety Bill Killed (Well, Blocked)
Following California Governor Gavin Newsom’s veto of a landmark AI safety bill aimed at regulating the development and deployment of advanced AI systems, we look at the reasons why it was blocked and the implications of doing so.
What Bill?
California Senate Bill 1047 (SB 1047) sought to regulate AI systems, focusing specifically on frontier AI models (highly advanced, cutting-edge, and large) that have the potential for large-scale impact.
California
The fact that California is home to major AI companies like OpenAI (which also partners with Microsoft), and that its governor vetoed the bill, means there are implications for the future of AI governance and industry practices worldwide. As Gavin Newsom noted in his statement about the bill, “California is home to 32 of the world’s 50 leading AI companies, pioneers in one of the most significant technological advances in modern history”.
The Key Points
The key points of the bill were:
– Risk mitigation for frontier AI models. The bill targeted large AI systems, particularly those that required significant computational power to develop (at least 10^26 floating-point operations, i.e. FLOP, of training compute – see the rough scale sketch after this list). It required companies developing such systems to implement safeguards to prevent catastrophic harm, including the misuse of AI for creating weapons of mass destruction, committing serious crimes like murder, or launching cyberattacks that could cause significant damage (e.g. over $500 million).
– A “kill switch requirement”. Under the bill, developers would have been required to implement a “kill switch” mechanism to immediately halt the operations of AI models if they posed a threat, during both training and usage.
– Cybersecurity measures. Companies were required to have strict cybersecurity protocols in place to prevent the unauthorised use or modification of these powerful AI systems.
– Oversight and reporting. The bill proposed the creation of a “Board of Frontier Models”, a new state entity, to oversee the compliance of these companies with the safety measures. Regular audits and detailed reports on safety protocols were part of the requirements.
– Whistleblower protections. The bill also included protections for employees who reported non-compliance within their organisations.
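To give a sense of the scale of the bill’s compute threshold (referenced in the first bullet above), here’s a minimal sketch using the common “compute ≈ 6 × parameters × training tokens” approximation for training compute; the model sizes and token counts below are illustrative assumptions, not figures from the bill:

```python
# Rough scale check: would a hypothetical training run cross SB 1047's 10^26 FLOP line?
# Uses the common approximation: compute ≈ 6 * parameters * training tokens.
THRESHOLD_FLOP = 1e26

def training_compute(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOP via the 6ND rule of thumb."""
    return 6 * params * tokens

runs = {
    "70B params, 15T tokens": training_compute(70e9, 15e12),  # ~6.3e24 FLOP
    "1T params, 30T tokens": training_compute(1e12, 30e12),   # ~1.8e26 FLOP
}
for name, flop in runs.items():
    status = "covered by the bill" if flop >= THRESHOLD_FLOP else "below threshold"
    print(f"{name}: {flop:.1e} FLOP -> {status}")
```

On these assumptions, only the very largest training runs would have fallen under the bill’s requirements, which is the narrowness that Newsom’s veto message went on to criticise.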
Opposition From Big Tech Companies
Major tech companies, including OpenAI, Google, and Meta, strongly opposed the bill, arguing that the regulations could significantly slow down innovation and hinder the deployment of beneficial AI technologies. With these companies heavily invested in the development of AI, viewing it as a key future revenue source, it’s perhaps not surprising that they saw the bill as a threat to that potential. Also, there were concerns within the tech community that open-source AI models, which often rely on collaborative, decentralised development, could face legal liabilities under the bill’s stringent requirements. This risk, they argued, could discourage further development of open-source AI, which has been an important driver of innovation in the field. The tech giants feared that the bill’s overly strict regulations could stifle growth and limit the industry’s ability to remain competitive globally.
Why Was The Bill Vetoed By Newsom?
In an official statement, the California state governor gave the following main reasons for blocking the bill:
– An overly narrow focus. Newsom argued that the bill only targeted large, expensive AI models based on their computational scale and costs, which could give a false sense of security. He pointed out that smaller, specialised models could pose similar risks but were not covered by the bill.
– A lack of adaptability. The governor emphasised that AI is evolving rapidly, and the bill’s framework was too rigid, not allowing flexibility to adapt to technological advancements. He stressed that regulation needs to be able to keep pace with innovation.
– It ignored deployment context. Newsom criticised SB 1047 for failing to consider where and how AI models are used, whether in high-risk environments or for critical decision-making, arguing that this oversight made the regulation less effective.
– The potential to stifle innovation. He also expressed concern that the bill could curtail innovation by applying stringent standards to even basic AI systems, which may inhibit the development of AI technologies that benefit the public.
– A lack of empirical evidence. Newsom insisted that any AI regulation must be based on empirical evidence and analysis of AI systems’ actual risks and capabilities. He argued that SB 1047 lacked this necessary foundation.
– Preference for broader collaboration. Instead of a California-only approach, Newsom said he favoured working with federal partners, experts, and institutions to craft a balanced and informed AI regulatory framework.
The Response
Although Newsom’s blocking of the bill may have pleased the big AI companies, not everyone was happy about it. For example, California state Senator Scott Wiener, who represents the 11th district, encompassing San Francisco and parts of San Mateo County, has strongly criticised Governor Newsom’s decision to veto Senate Bill 1047. In a press release, Mr Wiener expressed deep concern about the implications for public safety and AI regulation, arguing that the bill was designed to introduce commonsense safeguards to protect the public from significant risks posed by advanced AI systems, such as cyberattacks, the creation of biological or chemical weapons, and other harmful applications.
He emphasised that while AI labs have made commitments to monitor and mitigate these risks, voluntary actions are not enforceable, making binding regulation crucial. Senator Wiener said, “This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers,” highlighting the lack of meaningful federal regulation as a critical issue.
Wiener also dismissed the claim that SB 1047 was not based on empirical evidence, calling it “patently absurd,” given that the bill was crafted with input from leading AI experts. Mr Wiener has made it clear that he views the veto as a missed opportunity for California to lead on innovative tech regulation, similar to past actions on data privacy and net neutrality, saying, “We are all less safe as a result.”
However, despite the setback, Wiener expressed hope that the debate has advanced the issue of AI safety globally and vowed to continue working towards effective AI regulation.
Ever-Present AI?
Newsom’s vetoing of the bill comes at the same time as Microsoft’s head of AI, Mustafa Suleyman, saying that he believes AI assistants with a “really good long-term memory” are just a year away. Suleyman’s comments refer to “ever present, persistent, very capable co-pilot companions in your everyday life”, which aligns with the view of many that deep integration is necessary to make AI truly useful and to leverage its full benefits. For example, an AI assistant can only organise your schedule if it has full access to your diary and remembers past interactions.
This concept of deeply integrated AI assistants actually ties directly into the debate around Senate Bill 1047, which Governor Gavin Newsom recently vetoed. The bill sought to regulate advanced AI systems, ensuring safety protocols for powerful models. As ever-present AI systems become more common, the absence of legislation like SB 1047 leaves critical questions about how these systems will be governed. Newsom’s veto reflects ongoing concerns about stifling innovation, yet it also leaves unresolved issues around privacy, security, and the unchecked expansion of AI into daily life, which these emerging technologies are set to accelerate. It can be argued, therefore, that without comprehensive safeguards, the integration of AI into personal and professional spaces may pose significant risks, e.g. to data security and privacy, not to mention the risk of AI tools giving incorrect information or advice or displaying inbuilt bias towards the users they are supposed to be helping.
Six-Fingered Gloves
In a strange but related aside, a Finnish startup, Saidot, recently sent ominous six-fingered gloves to global tech leaders (including OpenAI’s Sam Altman) and EU politicians (and the UK Prime Minister) as a symbolic warning of AI dangers, particularly highlighting how image generators sometimes produce flawed outputs, like extra fingers. The gesture was aimed at raising awareness about the fast-evolving and unpredictable nature of AI, which could lead to unexpected consequences. Saidot’s CCO and co-founder, Veera Siivonen, said: “AI is developing so fast that nobody can fully anticipate its impacts and the emerging risks” and “That’s why we want to highlight both the steps that have been taken forward for safer AI, as well as some of the steps that should be taken.”
Saidot’s point aligns with the concerns surrounding the vetoed Senate Bill 1047, which sought to regulate AI technologies to prevent potential harm. As AI continues to develop rapidly, the failure to enact regulatory frameworks could leave many dangers inadequately managed.
What Does This Mean For Your Business?
The veto of Senate Bill 1047 by California Governor Newsom highlights the delicate balance that must be struck between promoting technological innovation and ensuring public safety. While the bill aimed to introduce necessary safeguards for advanced AI systems, its rejection shows the tension between regulation and the tech industry’s drive for unfettered progress. Newsom’s decision reflects the belief (particularly among the AI companies themselves) that overly rigid laws could stifle the rapid advancements in AI, which are viewed as essential for maintaining California’s competitive edge in the global tech landscape.
However, this move has also left a significant gap in AI governance. With AI systems becoming increasingly integrated into daily life (e.g. with the prospect of ‘ever-present’ AI as predicted by Microsoft), concerns about privacy, security, and potential misuse are mounting. The absence of comprehensive legislation leaves many of these issues unresolved, especially as the technology continues to evolve at an unprecedented pace. As argued by proponents of the bill, such as Senator Wiener, voluntary measures by AI companies may be insufficient and binding regulations are what’s really needed to protect society from potential harms, including cybersecurity risks and the creation of dangerous AI applications.
As AI continues to develop, the debate over how to effectively regulate it is far from over. The blocking of this bill may have slowed the momentum for immediate regulation, but it has also pushed the conversation forward. Looking ahead, policymakers, industry leaders, and experts will now need to collaborate on creating flexible yet effective frameworks that can both foster innovation and mitigate the risks associated with these powerful technologies.
For business users, the vetoing of Senate Bill 1047 means continued uncertainty around AI governance, leaving them reliant on voluntary safety measures from tech companies. While this may enable faster deployment of AI tools that enhance efficiency and innovation, it also raises risks for businesses. Without clear regulatory frameworks, businesses may face greater legal and ethical challenges, especially in areas like data security and AI accountability. For companies looking to integrate AI, the current absence of stringent safety measures could present both opportunities and risks as AI systems become more ingrained in business operations.