New DNA “Cassette Tapes” With Petabyte Potential

A research team in Shenzhen has built a working “cassette tape” that stores files as synthetic DNA on a polyester‑nylon tape, with a theoretical capacity that could dwarf today’s magnetic cartridges.

Who Built It, And What Is It?

Scientists led by Professor Xingyu Jiang at Southern University of Science and Technology (SUSTech), with collaborators including Shanghai Jiao Tong University in China, have designed a compact tape and an automated drive that writes, protects, indexes and retrieves DNA‑encoded files. Their peer‑reviewed paper in Science Advances describes a barcoded membrane tape with hundreds of thousands of addressable partitions, an on‑tape chemical process to encapsulate DNA for long life, and a drive that handles file addressing, recovery and redeposition.

In tests, the team deposited 156.6 kilobytes across four image fragments and successfully reconstructed the full image. The system achieved up to 1,570 partitions per second and created more than 545,000 addressable partitions per 1,000 metres of tape.

Why?

With global data creation expected to reach hundreds of zettabytes by the end of the decade, traditional storage technologies are starting to fall behind in terms of density, durability and efficiency. DNA, on the other hand, offers enormous potential, able to store vast amounts of information in a very small space, with the ability to remain stable for hundreds of years without electricity. This makes it particularly suited to long-term cold storage of archival data.

Not A New Idea

It should be noted here that DNA storage itself is not new. For example, Microsoft and the University of Washington demonstrated the first automated DNA write‑and‑read system in 2019. Startups such as Catalog have also explored ways to make the process faster and more affordable.

What’s So Different About This System?

What sets the SUSTech team’s work apart is the physical format and indexing system, i.e. a roll of membrane tape with barcodes acting as folders and file locations, plus a compact drive that can retrieve and rewrite specific partitions automatically. The idea is to turn DNA storage into something that functions more like a conventional tape library.

How The “Cassette” Works

The tape itself is made from a polyester‑nylon composite. Researchers print black hydrophobic bars and white hydrophilic spaces that form Code‑128 barcodes, which the device’s optical system reads to locate individual files.
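To make the indexing idea concrete, the short sketch below is a simplified, conceptual illustration only (hypothetical names and data, not the team’s actual software or file format). It treats each printed barcode as a key that maps to one addressable partition holding a DNA‑encoded fragment, so a file can be reassembled from the partitions listed against it.

```python
# Conceptual sketch only: models barcoded partitions as a simple lookup table,
# assuming one barcode identifies one addressable partition on the tape.
from dataclasses import dataclass, field

@dataclass
class Partition:
    barcode: str           # Code-128-style identifier printed on the tape
    fragment: bytes = b""  # DNA-encoded payload deposited on this partition

@dataclass
class DnaTape:
    partitions: dict = field(default_factory=dict)  # barcode -> Partition

    def write(self, barcode: str, fragment: bytes) -> None:
        # Deposit (or redeposit) a fragment on the addressed partition.
        self.partitions[barcode] = Partition(barcode, fragment)

    def read(self, barcodes: list[str]) -> bytes:
        # Retrieve and reassemble a file from its addressed partitions.
        return b"".join(self.partitions[b].fragment for b in barcodes)

# Hypothetical usage: a file split across four partitions, echoing the
# four-fragment image demonstration described above.
tape = DnaTape()
index = {"photo.jpg": ["P0001", "P0002", "P0003", "P0004"]}
for i, code in enumerate(index["photo.jpg"]):
    tape.write(code, f"fragment-{i}".encode())
print(tape.read(index["photo.jpg"]))
```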

To store data, synthetic DNA strands are deposited onto selected partitions and bound using DNA “handles” that have been chemically attached to the tape. A metal-organic protective coating is then applied using zinc ions and 2‑methylimidazole. This Zeolitic Imidazolate Framework (ZIF) protects the DNA but can be removed in seconds when the file is needed. The DNA is then released, amplified and sequenced.

According to the team, the system currently supports around 74.7 gigabytes of actual data per kilometre, with a theoretical capacity of over 360 petabytes per kilometre if losses are eliminated.

Capacity Is Huge, But Speed Isn’t There Yet

While the capacity potential is impressive, performance remains slow. For example, in one demonstration, it took two and a half hours to recover four image files. The researchers believe this could be reduced to 47 minutes with improved parallel processing, but that still lags far behind existing tape technologies.

For comparison, an LTO‑9 cartridge can store up to 45 terabytes of compressed data and transfer it at hundreds of megabytes per second. By contrast, current DNA tape tests managed only around 75 gigabytes per kilometre, with recovery times measured in hours. This underlines that the system is still at the research stage and not ready for large-scale deployment.

Density and Shelf Life Are Key Benefits

What makes the cassette system particularly promising is its density and long shelf life. DNA can retain data for centuries when protected, and the ZIF coating developed by the team allows that protection to be switched on and off quickly.

In accelerated ageing tests, the coated tape remained readable after six weeks at 70°C in humid conditions, whereas uncoated samples failed. The barcode indexing system also allows for targeted access and replacement of individual partitions without affecting the rest of the tape.

How It Compares To Other DNA Storage Approaches

Most current DNA storage approaches use particles, microfluidic chips or benchtop sequencers to handle file storage and retrieval. By contrast, the SUSTech system aims for scalability and automation. It converts a flat surface into a rollable medium, uses a mechanical head to dip partitions into reagents, and enables physical file management using barcodes.

However, the trade‑off here is speed. DNA reading and writing still depend on slow chemical and sequencing steps. Also, synthesis costs remain high, making the technology uncompetitive with tape for frequently accessed data.

Three Key Features

All things considered, three features of this system really stand out. These are:

1. Its high addressability. In other words, the system can store lots of separate files very close together on the tape (more than 545,000 partitions per kilometre), and it can quickly find and access any specific file (1,570 partitions handled per second); see the rough calculation after this list.

2. The ability to erase and redeposit data on a single partition, replacing over 99 per cent of previous content.

3. The rapid ZIF-based protection system, which allows for stable long-term storage and quick decapsulation when needed, i.e. the protective layer can be quickly removed to access the data.
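As a rough illustration of what those addressability figures imply, the back-of-envelope snippet below uses only the numbers quoted above (it is not a result from the paper) to estimate how long the drive would need to address every partition on a kilometre of tape. The point is that addressing is quick; it is the chemistry and sequencing steps that account for the hours-long recovery times mentioned earlier.

```python
# Back-of-envelope only, using the figures quoted above; not from the paper.
partitions_per_km = 545_000      # addressable partitions per 1,000 m of tape
partitions_per_second = 1_570    # partition-handling rate reported by the team

seconds_per_km = partitions_per_km / partitions_per_second
print(f"~{seconds_per_km:.0f} s (~{seconds_per_km / 60:.1f} min) "
      "to touch every partition on 1 km of tape")
# Roughly 347 seconds, i.e. about six minutes per kilometre of tape.
```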

Who Will Use It?

If developed further, the system could be valuable for deep-archive use, where data must be stored securely for long periods without frequent access. For example, sectors such as media, finance, research and government already manage large volumes of rarely accessed data and could benefit from lower storage footprints and energy usage.

However, UK organisations also face strict data governance requirements. For example, the Information Commissioner’s Office (ICO) states: “Holding personal data for too long can be as much a risk as not holding it long enough.” This means that any transition to ultra-dense media must still support data deletion, access control, and documented retention schedules.

Guidance

The ICO’s guidance is clear on this point: “You must regularly review the data you are holding, and delete anything you no longer need.” These rules apply regardless of the storage format.

For UK businesses, that means DNA storage must still meet all the requirements of GDPR. Organisations must be able to demonstrate that personal data is held only for as long as necessary, can be retrieved when required, and can be permanently erased when it is no longer needed.

Key Numbers (With Context)

The research team behind the new system estimates a theoretical storage capacity of 362 petabytes per kilometre, which would equal around 375 petabytes on a cartridge the same size as current LTO‑9 media. To put this in context, a petabyte is about 1 million gigabytes, which is roughly equivalent to 250,000 HD movies, or 500 billion pages of standard text.

However, in practice, the system currently delivers just 74.7 gigabytes per kilometre, with slow recovery speeds. The difference between theoretical potential and real-world performance remains significant.
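To put that difference into numbers, the short calculation below is a rough illustration only, assuming decimal units (1 PB = 1,000,000 GB) and comparing the two per-kilometre figures quoted above; it is not a figure taken from the paper.

```python
# Rough context for the quoted figures; illustration only.
theoretical_pb_per_km = 362   # theoretical capacity, petabytes per km
practical_gb_per_km = 74.7    # demonstrated capacity, gigabytes per km
gb_per_pb = 1_000_000         # decimal units: 1 PB = 1,000,000 GB

gap = theoretical_pb_per_km * gb_per_pb / practical_gb_per_km
print(f"Theory exceeds current practice by a factor of about {gap:,.0f}")
# Roughly 4.8 million times, which is why the petabyte figures are framed
# as potential rather than demonstrated performance.
```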

What Next?

Obvious next steps include improving the system’s write bandwidth (i.e. ways to deposit more DNA in parallel) and integrating DNA synthesis directly onto the tape. Other teams are also reportedly working on cheaper synthesis methods and faster access technologies.

Rather than replacing tape, the cassette model from Shenzhen may, therefore, serve as a longer-term complement, packaged in a familiar form, but offering much greater density over time.

Challenges

As with all emerging storage technologies, there are some key challenges. Speed remains the biggest issue, with write and read operations still taking minutes or hours. Costs are also high, and reliable large-scale synthesis remains some way off.

Safety and supply chain concerns will also need addressing, given the use of chemicals and lab‑grade processes. Crucially, there are some essential governance and compliance issues to take note of. As the ICO guidance makes clear, any future DNA-based system must allow for secure access, reliable deletion, and transparent oversight, regardless of how much data it can store.

What Does This Mean For Your Business?

Although this DNA cassette system is clearly not yet ready for commercial rollout, the core concept demonstrates a real step forward in how molecular storage could one day function at scale. The ability to automate writing, protect data on tape, and target individual partitions for retrieval or replacement marks a notable shift away from lab-bound DNA experiments towards integrated, physical storage devices.

For businesses in the UK and beyond, the long-term potential lies in highly durable, ultra-dense archives that demand little energy and minimal physical space. Sectors with growing compliance and retention requirements, such as financial services, life sciences and government records, will be watching closely. However, the cost of synthesis, the slow speed of read and write operations, and the reliance on specialised reagents and hardware mean that DNA media is still some way from practical deployment.

From a governance perspective, no format is exempt from data protection duties. The ICO’s guidance is unambiguous on that point. Any move towards alternative media, including DNA, must still support secure access, effective data minimisation, and provable erasure in line with UK GDPR. For organisations weighing up future options, the cassette format may prove useful, but only if it integrates cleanly with legal and operational frameworks already in place.

In short, for now, this is a promising development with strong technical foundations and clear use case potential in cold storage. What happens next will depend on how quickly the underlying processes can be refined and how well the technology can be aligned with real-world business needs.

Company Check : OpenAI Wins Microsoft’s Backing To Become A Public Benefit Corporation

OpenAI has secured Microsoft’s support to convert its for-profit arm into a Public Benefit Corporation, with the nonprofit parent retaining control and taking a stake worth more than $100 billion.

What Has Actually Happened?

OpenAI and Microsoft say they’ve signed a non-binding memorandum of understanding (MOU) for “the next phase” of their partnership. The agreement essentially lays the groundwork for OpenAI to recapitalise its for-profit division as a Delaware-based Public Benefit Corporation (PBC), with final terms still subject to approval by regulators in California and Delaware.

Finalising

In a joint statement, the two companies confirmed they are “actively working to finalise contractual terms in a definitive agreement”, while reaffirming a shared focus on “delivering the best AI tools for everyone, grounded in our shared commitment to safety.”

OpenAI board chair Bret Taylor explained the nonprofit would continue to control the organisation and, under the new structure, receive a direct equity stake in the PBC “that would exceed $100 billion.” If approved, this would make the OpenAI nonprofit one of the most well-resourced philanthropic entities in the world.

Why Move To A Public Benefit Corporation?

Public Benefit Corporations are a legal category of for-profit companies in Delaware that are required to pursue a stated public mission and balance it against shareholder returns. For OpenAI, the appeal may lie in being able to raise capital from traditional investors without abandoning the public-interest commitments baked into its original nonprofit charter.

OpenAI says the recapitalisation will allow it to raise the funds needed to advance its mission (developing artificial general intelligence, known as AGI, that benefits humanity) while also increasing the nonprofit’s financial capacity to support community-focused initiatives.

For example, OpenAI has already launched a $50 million grant fund aimed at boosting AI literacy, local innovation and economic opportunity, and says the new structure will unlock the ability to do much more.

How This Differs From Altman’s Earlier Model

OpenAI created a “capped-profit” structure in 2019, around the time Sam Altman took over as CEO. That model allowed for external investment but placed a ceiling on returns, with any surplus redirected to the nonprofit. It was a novel attempt to balance innovation and social purpose, but one that has since come under strain as the company’s ambitions (and valuations) have soared.

This new move effectively replaces the capped-profit structure with a standard equity-based PBC model. In other words, while the nonprofit retains control and receives a substantial stake, it no longer sets a formal cap on investor returns. This should allow increased flexibility while maintaining a clear mission through governance.

Another View

That all appears fine. However, a more cynical observer might see OpenAI’s move as a calculated step to unlock vast commercial gains while preserving a veneer of altruism. For example, by converting its for‑profit arm into a Public Benefit Corporation, OpenAI can raise unlimited investment, offer conventional equity, and potentially go public, free from the constraints of its earlier capped‑profit model. The nonprofit’s newly announced $100 billion stake, while impressive, could be viewed by some as a way to reframe a profit‑driven shift as a philanthropic victory.

Critics might also argue the change allows OpenAI to loosen its reliance on Microsoft without damaging the partnership, giving it room to grow across multiple cloud providers. At the same time, positioning the nonprofit as a powerful oversight body may help deflect regulatory scrutiny and safety concerns. In this light, the transition could be seen not so much as a governance breakthrough, but more as a reputational strategy, designed to balance investor demands, scale ambitions and public trust without truly relinquishing control.

Why Microsoft Is Supporting It

Microsoft’s strategic interest is pretty obvious. It has invested billions in OpenAI since 2019 and integrated its models across Azure, Microsoft 365 and Copilot. At the same time, OpenAI’s growing infrastructure needs and commercial scale are pushing it beyond what a single partner, or a limited-profit structure, can support.

Backing the transition to a PBC lets Microsoft preserve its commercial relationship with OpenAI while accommodating the startup’s need for expansion. Under existing agreements, Microsoft still gets preferred access to OpenAI’s technology and remains the primary cloud provider. But that relationship is no longer exclusive.

For example, earlier this year, OpenAI reportedly agreed to spend $300 billion with Oracle Cloud over five years from 2027. It also signed a deal with SoftBank for its Stargate data centre project, making it clear that future growth depends on a multi-cloud strategy.

Capital, Scale And Market Power

The most striking figure in the announcement is the $100 billion+ valuation attached to the nonprofit’s stake. While not independently verified, the figure implies a significant step-up in OpenAI’s market value and sends a clear message to future investors that the new entity will be structured for major fundraising, possibly including an IPO.

For businesses, the shift could mean faster product rollouts, greater scale, and more robust service levels. A clearer corporate structure may also make OpenAI a more straightforward partner for UK organisations, particularly in the public sector or regulated industries where governance and transparency matter.

What It Means For Microsoft

With this deal, Microsoft retains strong commercial and technical ties but avoids being dragged into future boardroom drama. The events of late 2023, when OpenAI’s nonprofit board briefly removed and then reinstated Altman, highlighted the governance tensions in the previous setup.

By endorsing the PBC model, therefore, Microsoft helps stabilise the structure without needing to formally take control. It still benefits from deep integration of OpenAI models into its ecosystem, including for UK enterprise clients using Azure AI.

For example, Microsoft has already embedded OpenAI models into key products like Teams, Word and Excel. The new structure ensures those integrations can continue at pace while OpenAI pursues its wider commercial ambitions.

Competitors

OpenAI’s rivals are unlikely to stay quiet about this deal. For example, Anthropic, backed by Amazon and Google, has operated as a Public Benefit Corporation since its founding and has positioned itself as the safety-first alternative. Meta continues to champion open models, and Elon Musk’s xAI has made governance part of its pitch.

With OpenAI now adopting a more standard commercial structure, scrutiny will likely turn to how it balances mission and profit in practice. Advocacy groups such as Encode and The Midas Project have already raised concerns that the transition risks diluting the company’s nonprofit ideals.

Critics argue that such a large financial shift, paired with increased commercial freedom, could incentivise risky behaviours unless safeguards are put in place.

Challenges, Risks And Legal Hurdles

While the MOU with Microsoft clears a major obstacle, the transition still depends on regulatory approval. OpenAI says it is working with the Attorneys General of California and Delaware to ensure compliance with corporate and charitable law. That process could take months and may come with conditions or oversight requirements.

OpenAI has also faced legal pressure from former partners. For example, Elon Musk’s lawsuit against the company argues that it has strayed from its original nonprofit mission, with Microsoft’s influence and recent capital plans cited as evidence.

Also, former employees and AI researchers have warned that governance alone may not be enough to ensure safety, particularly if OpenAI moves faster than regulators can respond. For example, OpenAI’s governance review committee, created in 2024 after the board crisis, has yet to publish its findings. How this ties into the new PBC structure remains unclear.

What Businesses Should Watch

For UK companies, several things are worth watching here. For example, the recapitalisation could accelerate development of enterprise tools built on OpenAI’s foundation models. Also, the nonprofit’s philanthropic arm may expand its UK grant funding, particularly in education and local innovation. Businesses will also want clarity on how OpenAI’s safety commitments are maintained and enforced under the new structure.

Most of all, OpenAI’s move suggests that the AI industry may be entering a new phase, i.e. one where growth, governance and public interest must be balanced at unprecedented scale. This transition may not be the final step, but it could help determine how AI leadership evolves and what role UK organisations are able to play.

What Does This Mean For Your Business?

OpenAI’s decision to push ahead with this structural change appears to be aimed at unlocking new sources of investment and long-term commercial growth while maintaining public trust in its mission. The Public Benefit Corporation model offers a legal route to do both at once, though whether that balance holds in practice will depend on how governance is applied and enforced. The nonprofit’s stake, reportedly worth more than $100 billion, is designed to show that public interest will still have a seat at the table, but that does not guarantee influence if financial pressures take priority.

For Microsoft, the arrangement allows it to protect its existing technical integrations and product roadmap without getting dragged into OpenAI’s internal politics. It also avoids the risk of losing access if OpenAI were to move away from Azure entirely. At the same time, it now must accept that OpenAI will work with other cloud providers, potentially reducing Microsoft’s control over where and how OpenAI’s most advanced models are deployed.

For UK businesses, the practical outcome could be positive in the short term. More capital and more scale should mean faster rollouts, better support, and a more stable supply of cutting-edge models across a growing number of platforms. For regulated industries and public bodies, a clearer governance framework may help address procurement concerns about transparency and safety. However, businesses will need to stay alert to how these commitments are upheld and what contractual guarantees actually make it into commercial terms.

Critics may argue that this is less about safeguarding humanity and more about enabling a lucrative move to public markets. That may or may not prove true, but for now the deal reflects the reality that AI development is already operating on a scale where traditional structures and funding models no longer apply. The coming months will reveal whether OpenAI can keep its mission and its commercial future aligned, and how closely regulators and investors are prepared to hold it to that promise.

Security Stop-Press: Red Sea Cable Cuts Disrupt Microsoft Cloud Traffic

Microsoft has warned of slower Azure cloud services after key undersea internet cables in the Red Sea were cut, forcing traffic to be rerouted and causing delays across parts of Asia and the Middle East.

The damage, near Jeddah in Saudi Arabia, affected the SMW4 and IMEWE cable systems, which are two major routes linking Europe and Asia. Microsoft said traffic outside the Middle East is unaffected, but users in India, Pakistan and the UAE are facing slower speeds, especially at peak times.

The cause remains unclear, though regional tensions have fuelled concerns about sabotage. For example, in February, Yemen’s Houthi rebels were accused of planning attacks on similar infrastructure, though they denied involvement.

Microsoft has now diverted traffic through alternate paths, but repairing the damage may take weeks and cost up to £1 million per incident. Limited repair crews and the strategic importance of the Red Sea make the region especially vulnerable.

To reduce risk, businesses should review their cloud provider’s resilience, diversify service routes where possible, and ensure key systems can tolerate temporary delays.

Sustainability-In-Tech : New Synthetic Graphite Could Boost EV Battery Lifespan by up to 30%

ExxonMobil has unveiled a new form of synthetic graphite designed to extend electric vehicle battery life by up to 30 per cent, in a move that could reshape the EV materials supply chain.

A Major Energy Player With a New Direction

ExxonMobil is best known as one of the world’s largest oil and gas companies, with operations spanning upstream exploration, refining, petrochemicals and energy logistics. However, in recent years, the company has increasingly turned its attention to low-carbon technologies, focusing on areas where it believes it holds a competitive advantage, such as carbon capture, hydrogen, and chemical-based solutions.

While it has often avoided wind and solar projects, citing a lack of in-house capability, ExxonMobil has consistently invested in R&D in the materials space. This latest development, presented by CEO Darren Woods at the University of Texas at Austin’s Energy Symposium, represents a significant step into the EV battery supply chain.

What Is This New Graphite, and Why Does It Matter?

The material is a newly engineered synthetic form of graphite, used in the anode of lithium-ion batteries, that the company claims can extend battery lifespan, improve charging speeds, and increase vehicle range.

“The carbon molecule structures we’ve developed show real promise for faster charging and longer-lasting batteries,” said Woods during the announcement. “This is a revolutionary step change in battery performance.”

Graphite is already a critical ingredient in EV batteries, accounting for more than 90 per cent of commercial anode material, with synthetic grades making up a growing share of supply. However, existing production methods are energy-intensive, supply chains are stretched, and natural graphite sourcing is geographically constrained, with over 60 per cent of global supply currently coming from China.

ExxonMobil says its new form of graphite is designed for consistency and high performance, and can be manufactured using carbon-rich feedstocks derived from existing refining processes. This means the company can use its current infrastructure to produce the material at scale, reducing reliance on mining operations and imported feedstocks.

From Oil Barrels to Battery Materials

The move into battery materials may seem like a departure from ExxonMobil’s traditional focus, but the company has a long-standing history in the battery space. For example, its researchers developed one of the first rechargeable lithium batteries in the 1970s, and the company went on to produce the plastic separator films used in early rechargeable cells.

Now, with the acquisition of Superior Graphite’s US production assets and technology, ExxonMobil is laying the groundwork for a large-scale synthetic graphite business. According to the company’s blog, the acquisition will allow it to build a “robust, American-based supply chain” for synthetic graphite.

“We’re expanding into the advanced synthetic graphite business, and we’re doing it with a name that’s been in the game for over a century,” said the company in a September statement.

Who Could Use This Graphite, and Why Now?

The synthetic graphite is being trialled by multiple unnamed EV manufacturers, although details remain under wraps at present. Industry analysts say it could be especially valuable for high-performance EVs, commercial electric fleets, and energy storage systems (BESS) that require longer cycle lives and more stable charging patterns.

By 2030, demand for battery-grade graphite is projected to exceed 4 million metric tonnes annually (Benchmark Mineral Intelligence). With growing concerns about China’s dominance in graphite processing, Western governments and manufacturers are actively seeking alternative, scalable sources.

For EV makers, better anode materials could reduce the cost per kilowatt-hour of batteries, improve durability, and reduce consumer anxiety around battery degradation.

For consumers, the promise of longer-lasting, faster-charging batteries could mean fewer replacements, longer warranties, and better range per charge, which are all critical factors in encouraging wider EV adoption.

The Implications for ExxonMobil and Its Competitors

Although ExxonMobil has stated it does not intend to become a battery maker, the strategic move into anode materials positions it as a key supplier to one of the fastest-growing industries in the world. The company has said it expects to start commercial production of the graphite by 2029.

This puts ExxonMobil in direct competition with a range of players including Chinese graphite suppliers, Korean battery component firms, and materials companies like SGL Carbon and Syrah Resources. While some rivals focus on natural graphite mined in Africa or South America, ExxonMobil’s emphasis on synthetic production could appeal to buyers looking for stable, traceable, and lower-emissions supply chains.

It could also provide the company with a new source of revenue as demand for petrol and diesel continues to decline in line with electrification targets across Europe, the UK, and North America.

“This isn’t a step in; it’s a full-scale launch with power and purpose,” the company said. “When our product enters the market, we expect it will deliver faster charging and longer life than existing graphite materials today.”

Sustainability Claims Under Scrutiny

ExxonMobil argues that synthetic graphite offers significant sustainability benefits compared to traditional mining. For example, its internal estimates suggest the process could be less energy-intensive, more land-efficient, and have higher throughput than natural alternatives.

However, the environmental impact of producing synthetic graphite at scale remains a subject of debate. Critics point to the use of fossil-based feedstocks, the carbon footprint of high-temperature furnaces, and the lack of independent life cycle analysis to support the company’s claims.

Some experts have welcomed the technical breakthrough but say the environmental claims still need to be independently verified. While synthetic graphite can offer improved purity and performance compared to natural sources, producing it typically involves energy-intensive processes and high-temperature furnaces. Without a full lifecycle assessment, it’s unclear whether ExxonMobil’s version offers a lower carbon footprint overall.

Some environmental groups have also expressed concern that the announcement could serve as a reputational tool, allowing the company to appear aligned with energy transition goals while continuing high levels of oil and gas production. ExxonMobil has faced ongoing criticism over its lobbying record and past delays in embracing renewable energy.

Barriers and Uncertainties Ahead

Despite the positive headlines, several hurdles remain. For example, the synthetic graphite market is highly competitive, and pricing pressure from natural sources remains a factor. Regulatory alignment, especially for battery materials used in vehicles sold in the EU and UK, may require third-party certification and data disclosure.

ExxonMobil also acknowledged risks around market timing and tax incentives. In a recent comment about its hydrogen and ammonia plans, Woods warned that changing government policy could create uncertainty for long-term investment.

“We can’t do it on charity,” he said, referring to the limited duration of US tax credits under recent legislation.

Even so, the company appears to be betting that its scale, technical experience, and control of the supply chain will allow it to succeed where others have struggled.

What Does This Mean For Your Organisation?

What happens next depends on how effectively ExxonMobil can scale up production and prove the performance gains it is promising. If the material lives up to expectations, it could give battery manufacturers and vehicle makers access to a more stable, domestic supply of high-performance anode material, especially in markets looking to reduce dependence on China. That includes the UK, where securing critical minerals and battery components has become a growing concern for both government and industry. A reliable source of synthetic graphite with lower volatility and consistent quality could support EV production, battery research, and even domestic energy storage projects.

For UK firms involved in automotive manufacturing, advanced materials, or clean energy systems, this may open up opportunities for new partnerships or supply arrangements, particularly if ExxonMobil’s product proves compatible with emerging battery chemistries. At the same time, UK businesses developing their own alternatives will likely face growing competition from larger, vertically integrated players able to produce materials at scale and integrate them into existing logistics and refining networks.

ExxonMobil’s move appears to signal that legacy energy companies are looking for viable routes into clean tech supply chains without abandoning their core expertise. Whether this is seen as genuine innovation or simply an extension of fossil-based operations will depend on the transparency of the data that follows. If the environmental claims can be substantiated and the product delivers on cost and performance, it may set a new standard for what synthetic graphite can do. If not, the gap between energy transition rhetoric and reality may widen even further.

Either way, the development adds some momentum to an increasingly strategic part of the EV supply chain. For governments, manufacturers, and consumers alike, a more competitive graphite market could bring welcome improvements in performance, pricing, and resilience. However, it will also bring new questions about sustainability, transparency, and where the true value in the battery industry really lies.

Video Update : Using ChatGPT Study Mode

ChatGPT provides an extremely powerful way to study, gain an interactive ‘mentor’ and improve how you learn in many other ways too. By using ‘Study Mode’, you can take your learning to the next level, and best of all, it’s easy to use.

[Note – To watch this video without glitches/interruptions, it may be best to download it first]

Tech Tip – Recover Accidentally Closed Tabs

We’ve all done it — you’re working away, close a browser tab by mistake, and instantly regret it.

Good news: most browsers let you reopen it in seconds. On Windows, just press Ctrl + Shift + T; on a Mac, use Cmd + Shift + T. Your last closed tab will reappear, and you can repeat the shortcut to bring back several tabs.

Many browsers also keep a full history, so you can find a site later if needed. This simple trick avoids frustration and keeps you moving.

Each week we bring you the latest tech news and tips that may relate to your business, rewritten in a techy-free style.
