Tech Tip – How To Quickly Check If Any Of Your Passwords Have Been Leaked
Google’s updated Password Manager in Chrome lets you check whether any of your passwords have been compromised or involved in a data breach. Here’s how to use it (a sketch of how such breach checks can work without exposing your password follows the steps):
– In Chrome, click on the three dots (top right).
– Select ‘Passwords’ (on iOS) or ‘Google Password Manager’ (in the desktop browser).
– Click on ‘Check Now’.
– Google will list which (if any) passwords have been compromised and need to be changed.
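For the technically curious, the short sketch below shows the general privacy-preserving technique that breach checkers of this kind can use. It is illustrated here with the public Have I Been Pwned ‘Pwned Passwords’ range API rather than Google’s own (private) implementation, which is reported to work on a similar hashed, anonymised basis:

```python
# A minimal sketch of a privacy-preserving password breach check using the
# public Have I Been Pwned range API (k-anonymity). This illustrates the
# general technique only; it is not Google's implementation.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the first five hex characters of the hash leave your machine;
    # the API returns every breached hash sharing that prefix.
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "breach-check-sketch"})
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)  # times this password appears in breaches
    return 0

if __name__ == "__main__":
    print(breach_count("password123"))  # a breached password returns > 0
```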
Tech Insight : What Are ‘Deepfake Testers’?
Here we look at what deepfake videos are, why they’re made, and how they might be quickly and easily detected using a variety of deepfake detection and testing tools.
What Are Deepfake Videos?
Deepfake videos are a kind of synthetic media created using deep learning and artificial intelligence techniques. Making a deepfake video involves manipulating or superimposing existing images, videos, or audio onto other people or objects to create highly realistic (but typically fake) content. The term “deepfake” comes from the combination of “deep learning” and “fake.”
Why Make Deepfake Videos?
People create deepfake videos for various reasons, driven by both benign and malicious intentions. Here are some of the main motivations behind the creation of deepfake videos:
– Entertainment and art. Deepfakes can be used as a form of artistic expression or for entertainment purposes. AI may be used, for example, to create humorous videos, mimic famous movie scenes with different actors, or explore creative possibilities.
– Special effects and visual media. In the film and visual effects industry, deepfake technology is often used to achieve realistic visual effects, such as de-aging actors or bringing deceased actors back to the screen (a contentious point at the moment, given the actors’ strike over AI fears). That said, some sportspeople, actors and celebrities have embraced the technology and are allowing their deepfake identities to be used by companies. Examples include Lay’s crisps using Lionel Messi’s likeness and Singapore celebrity Jamie Yeo agreeing a deal with financial technology firm Hugosave.
– Education and research. Deepfakes can be used for research and educational purposes, helping researchers, academics, and institutions study and understand the capabilities and limitations of AI technology.
– Memes and internet culture. In recent times, deepfakes have become part of internet culture and meme communities, where users create and share entertaining or humorous content featuring manipulated faces and voices.
– Face swapping and avatar creation. Some people use deepfakes to swap faces in videos, such as putting their face on a character in a movie or video game or creating avatars for online platforms.
– Satire and social commentary. Deepfake videos are also made to satirise public figures or politicians, creating humorous or critical content to comment on current events and societal issues.
– Privacy and anonymity. In some cases, people may use deepfakes to protect their privacy or identity by concealing their face and voice in videos.
– Spreading misinformation and disinformation. Unfortunately, deepfake technology has been misused to spread misinformation, fake news, and malicious content. Deepfakes can be used to create convincing videos of individuals saying or doing things they never did, including political figures, leading to potential harm, defamation, and the spread of falsehoods.
– Fraud and scams. This is a very worrying area as criminals can now use deepfakes for fraudulent activities, e.g. impersonating someone in video calls to deceive or extort others. For example, deepfake testing company Deepware says: “We expect destructive use of deepfakes, particularly as phishing attacks, to materialise very soon”.
What Are Deepfake Testers?
With AI deepfakes becoming more convincing and easier to produce thanks to rapidly advancing AI developments and many good AI video, image, and voice services widely available online (many for free), tools that can quickly detect deepfakes have become important. In short, deepfake testers are online tools that can be used to scan any suspicious video to discover if it’s synthetically manipulated. In the case of deepfakes made to spread misinformation and disinformation or for fraud and scams, these can be particularly valuable tools.
How Do They Work?
For the user, deepfake testers typically involve copying and pasting the URL of a suspected deepfake into the online deepfake testing tool and hitting the ‘scan’ button to get a quick opinion about whether it’s likely to be a deepfake video.
Behind the scenes, there are a number of technologies used by deepfake testers, such as the following (a simplified sketch of the first technique appears after the list):
– Photoplethysmography (PPG), for detecting changes in blood flow, because deepfake faces don’t give out these signals. This type of detection is more difficult if the deepfake video is pixelated.
– Eye movement analysis. This is because deepfake eyes tend to be divergent, i.e. they don’t look at a central point like real human eyes do.
– Lip sync analysis, which can help highlight a lack of synchronisation between the audio and the visuals, something which is a common feature of deepfakes.
– Facial landmark detection and tracking algorithms to assess whether the facial movements and expressions align realistically with the audio and overall context of the video.
– Testing for visual irregularities, e.g. unnatural facial movements, inconsistent lighting, or strange artifacts around the face.
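To make the first of these techniques more concrete, here is a deliberately simplified PPG-style sketch (a rough Python/OpenCV illustration under stated assumptions, not a production detector; the face detector, frequency band, and scoring below are illustrative placeholders, and real tools use far more robust signal extraction plus trained classifiers):

```python
# A much-simplified PPG-style check: track the average green-channel intensity
# over a detected face across frames, then measure how much of the signal's
# energy sits in a plausible heart-rate band. Real faces tend to show a pulse
# peak here; purely synthetic faces typically do not. Illustrative only.
import cv2
import numpy as np

def pulse_band_ratio(video_path: str) -> float:
    face_det = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    greens = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = face_det.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
        if len(faces):
            x, y, w, h = faces[0]
            # Blood flow subtly modulates skin colour, especially green.
            greens.append(frame[y:y + h, x:x + w, 1].mean())
    cap.release()
    if len(greens) < int(fps) * 3:  # need a few seconds of face frames
        return 0.0
    signal = np.asarray(greens) - np.mean(greens)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)  # 42-180 bpm
    return float(spectrum[band].sum() / (spectrum.sum() + 1e-9))
```

A higher ratio suggests a pulse-like signal is present; any real-world threshold would need careful calibration, which is exactly the kind of work the commercial testers below have done.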
Examples
Some examples of deepfake testers include:
Deepware Scanner
This is only for detecting AI-generated face manipulations and can be used via the website, an API key, or in an offline environment via an SDK. There is also an Android app, and a maximum limit of 10 minutes per video.
Intel’s FakeCatcher
With a reported 96 per cent accuracy rate, Intel’s deepfake detection platform FakeCatcher, introduced last year, was billed as “the world’s first real-time deepfake detector that returns results in milliseconds.” Using Intel hardware and software, it runs on a server and interfaces through a web-based platform.
Microsoft’s Video Authenticator Tool
Announced in 2020, this deepfake detecting tool uses advanced AI algorithms to detect signs of manipulation in media and provides users with a real-time confidence score. The tool was created using the public FaceForensics++ dataset and was tested on the DeepFake Detection Challenge Dataset, both of which are key datasets for training and testing deepfake detection technologies.
Sentinel
This AI-based protection platform is used by governments, defence agencies, and enterprises. Users upload their digital media through the website or API, whereupon Sentinel uses advanced AI algorithms to automatically analyse the media. Users are then given a detailed report of its findings.
Another example is an open platform allowing users to upload a video (.wav, maximum size 50MB), input their email address, and get an assessment of whether the video is fake.
DuckDuckGoose DeepDetector Software
This is fully automated deepfake detection software for videos and images, which detects deepfakes in real-time and uses explainable, AI-powered outputs to help users understand how each detection was made.
Another project aims to develop intelligent, human-in-the-loop content verification and disinformation analysis methods and tools, whereby social media and web content is analysed and contextualised within the broader online ecosystem. The project offers, for example, a chatbot to guide users through the verification process, an open-source browser plugin, and other open-source AI tools, as well as proprietary tools owned by the consortium partners.
What Does This Mean For Your Business?
Deepfake videos can be fun and satirical; however, there are real concerns that, with AI advancements, deepfake videos are being made to spread misinformation and disinformation. Furthermore, fraud and scam deepfakes can be incredibly convincing and, therefore, dangerous.
Political interference, such as spreading videos of world leaders and politicians saying things they didn’t say, plus using videos to impersonate someone to deceive or extort, are now very real problems. With it being so difficult to tell for sure just by watching a video whether it’s fake or not, these deepfake testing tools can have real value, both as a safety measure for businesses and for anyone who needs a fast way to check out their suspicions.
Deepfake testers, therefore, can contribute to cybercrime prevention and the countering of fake news. The threat of deepfakes is only going to grow, so the hope is that as deepfake videos become ever-more sophisticated, the detection tools can keep pace and continue to tell with certainty whether a video is fake.
Featured Article : UK Gov Pushing To Spy On WhatsApp (& Others)
A recent amendment to the Online Safety Bill, under which a compulsory report must be written for Ofcom by a “skilled person” before encrypted app companies can be forced to scan messages, has led to even more criticism of this rather controversial bill, which critics say bypasses security in apps and gives the government (and therefore any number of people) more access to sensitive and personal information.
What Amendment?
In the House of Lords debate, which was the final session of the Report Stage and the last chance for the Online Safety Bill to be amended before it becomes law, government minister Lord Parkinson amended the bill to require that a report be written for Ofcom by a “skilled person” (appointed by Ofcom) before powers can be used to force a provider / tech company (e.g. WhatsApp or Signal) to scan its messages. The stated purpose of scanning messages using the powers of the Online Safety Bill is (ostensibly) to uncover child abuse images.
The amendment states that “OFCOM may give a notice under section 111(1) to a provider only after obtaining a report from a skilled person appointed by OFCOM under section 94(3).”
Prior to the amendment, the report had been optional.
Why Is A Compulsory Report Stage So Important?
The amendment says that the report is needed before companies can be forced to scan messages “to assist OFCOM in deciding whether to give a notice … and to advise about the requirements that might be imposed by such a notice if it were to be given”. In other words, the report will assess the impact of scanning on freedom of expression or privacy, and explore whether other, less intrusive technologies could be used instead.
It is understood, therefore, that the report’s findings will be used to help decide whether to force a tech firm to scan messages. Under the detail of the amendment, a summary of the report’s findings must be shared with the tech firm concerned.
Reaction
Tech companies may be broadly in agreement with the aims of the bill. However, the operators of encrypted messaging apps (e.g. WhatsApp and Signal) have always opposed the detail of the bill that would force them to scan user messages before they are encrypted (client-side scanning). Operators say that this completely undermines the privacy and security of encrypted messaging, and they object to the idea of having to run government-mandated scanning services on their devices. They also argue that it could leave their apps more vulnerable to attack.
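For illustration of what is meant by client-side scanning, here is a purely conceptual sketch (the hash list and helper functions are hypothetical placeholders, not any provider’s or the government’s actual design):

```python
# Conceptual sketch of client-side scanning as critics describe it: content is
# checked against a mandated hash list on the sender's device, in plaintext,
# BEFORE end-to-end encryption happens. All names here are hypothetical.
import hashlib

KNOWN_BAD_HASHES: set[str] = set()  # hash database supplied by an authority

def send_attachment(attachment: bytes, encrypt_and_send, report_match) -> None:
    digest = hashlib.sha256(attachment).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        # The match is reported before encryption ever happens, which is why
        # opponents argue the message is no longer truly private end to end.
        report_match(digest)
        return
    encrypt_and_send(attachment)
```

The objection, in short, is that once this plaintext checkpoint exists on the device, its hash list or matching logic could be extended or compromised, regardless of how strong the encryption that follows it is.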
The latest amendment, therefore, has not changed this situation for the tech companies and has led to more criticism and more objections. Many objections have also been aired by campaign and rights groups such as Index on Censorship and The Open Rights Group, who have always opposed what they call the “spy clause” in the bill. For example:
– The Ofcom-appointed “skilled person” could simply be a consultant or political appointee, and having such people oversee decisions about free speech and privacy rights would not amount to effective oversight.
– Judicial oversight should be a bare minimum and a report written by just a “skilled person” wouldn’t be binding and would lack legal authority.
Other groups, however, such as the NSPCC, have broadly backed the bill in terms of finding ways to make tech firms mitigate the risks of child sexual abuse when designing their apps or adding features, e.g. end-to-end encryption.
Another Amendment
Another House of Lords amendment to the bill requires Ofcom to look at the possible impact of the use of technology on journalism and the protection of journalistic sources. Under the amendment, Ofcom would be able to force tech companies to use what’s been termed “accredited technology” to scan messages for child sexual abuse material.
This has also been met with similar criticisms over the idea of government-mandated scanning technology’s effects on privacy, freedom of speech, and potentially being used as a kind of monitoring and surveillance. WhatsApp, Signal, and Apple have all opposed the scanning idea, with WhatsApp and Signal reportedly indicating that they would not comply.
Breach Of International Law?
Clause 9(2) of the Online Safety Bill, which requires platforms to prevent users from “encountering” certain “illegal content”, has also been soundly criticised recently. The clause means that platforms hosting user-generated content will need to immediately remove any such content (which covers a broad range) or face considerable fines, blocked services, or even jail for executives. Quite apart from the technical and practical challenges of achieving this effectively at scale, criticisms of the clause include that it threatens free speech in the UK and lacks the detail needed for legislation.
Legal advice provided to The Open Rights Group suggests that the clause may even be a breach of international law, in that there could be “interference with freedom of expression that is unforeseeable”, and that it goes against the current legal order on platforms.
It’s also been reported that Wikipedia could withdraw from the UK over the rules in the bill.
Investigatory Powers Act Objections (The Snooper’s Charter)
Suggested new updates to the Investigatory Powers Act (IPA) 2016 (sometimes called the ‘Snooper’s Charter’) have also come under attack from tech firms, not least Apple. For example, the government wants messaging services, e.g. WhatsApp, to clear security features with the Home Office before releasing them to customers. The update to the IPA would mean that the UK’s Home Office could demand, with immediate effect, that security features are disabled, without telling the users/the public. Currently, a review process with independent oversight (with the option of appeal by the tech company) is needed before any such action could happen.
The Response
The response from tech companies has been swift and negative, with Apple threatening to remove FaceTime and iMessage from the UK if the planned update to the Act goes ahead.
Concerns about granting the government the power to secretly remove security features from messaging app services include:
– It could allow government surveillance of users’ devices by default.
– It could reduce security for users, seriously affect their privacy and freedom of speech, and could be exploited by adversaries, whether they are criminal or political.
– Building backdoors into encrypted apps essentially means there is no longer end-to-end encryption.
Apple
Apple’s specific response to the proposed updates/amendments (which will be subject to an eight-week consultation anyway) is that:
– It refuses to make changes to security features specifically for one country that would weaken a product for all users globally.
– Some of the changes would require issuing a software update, which users would have to be told about, thereby stopping changes from being made secretly.
– The proposed amendments threaten security and information privacy and would affect people outside the UK.
What Does This Mean For Your Business?
There’s broad agreement about the aims of the UK’s Online Safety Bill and IPA in terms of wanting to tackle child abuse, keep people safe, and even make tech companies take more responsibility and measures to improve safety. However, these are global tech companies for which UK users represent only a small part of their total user base, and ideas like building back doors into secure apps, running government-approved scanning of user content, and using reports written by consultants/political appointees to support scanning all go against ideas of privacy, one of the key features of apps like WhatsApp.
Allowing governments access into apps and granting them powers to turn off security ‘as and when’ raise issues and suspicions about free speech, government monitoring and surveillance, legal difficulties, and more. In short, even though the UK government wants to press ahead with the new laws and amendments, there is still a long way to go before there is any real agreement with the tech companies. In fact, it looks likely that they won’t comply, and some, like WhatsApp, have simply said they’ll pull out of the UK market, which could be very troublesome for UK businesses, charities, groups and individuals.
The tech companies also have a point in that it seems unreasonable to expect them to alter their services just for one country in a way that could negatively affect their users in other countries. As some critics have pointed out, if the UK wants to be a leading player on the global tech stage, alienating the big tech companies may not be the best way to go about it. It seems that a lot more talking and time will be needed to get anywhere near real-world workable laws and, at the moment, with the UK government being seen by many as straying into areas that are alarming rights groups, some tech companies are suggesting the government ditch their new laws and start again.
Expect continued strong resistance from tech companies going forward if the UK government doesn’t slow down or re-think many aspects of these new laws – watch this space.
Tech News : $900,000 For Netflix AI Product Manager
Leading streamer Netflix has added more fuel to the fire of the actors’ and writers’ union strike by advertising for an AI product manager with a salary of up to $900,000.
Could Have Paid For Actors
With US actors and writers who are members of the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) having been on strike for weeks, partly to protect their jobs from AI, the mega-salary job ad has been met with some criticism. For example, SAG-AFTRA has argued that 87 per cent of its membership earns less than $26,000 a year, and some union members have added that the eye-watering $300,000 to $900,000 for one AI job could instead have paid and supported 35 actors and their families (35 × $26,000 ≈ $910,000, roughly the top of the advertised range).
The Reasons For The Strike
Some of the main reasons for the dual strike of actors and writers include:
– Worries that background performers could be scanned and paid one day’s pay, while their scanned image is then owned by film companies who can use the person’s image or likeness (reproduced with AI) for unlimited projects in the future without the performer’s consent and without compensation.
– The need for wage increases.
– Streaming services like Prime, Netflix, and Disney not paying enough in “residuals” (the royalties earned from repeat broadcasts of films or TV shows).
– Concerns that AI will be used to write first drafts of scripts and screenwriters will only be hired at a lower rate to bring them up to scratch.
The Job
The $300,000 to $900,000 salary Machine Learning Platform product manager job at Netflix is based in its Los Gatos office or can be worked remotely on the West Coast. The advert specifies that the right candidate will be someone with “a technical background in engineering and/or machine learning”. Netflix says that “Machine Learning/Artificial Intelligence is powering innovation”, with the “Machine Learning Platform (MLP)” providing “the foundation for all of this innovation” and the new product management role being used to “increase the leverage of our Machine Learning Platform”.
What Does This Mean For Your Business?
With actors (and writers) out on strike essentially to stop AI from replacing them and to get more residuals from streaming services, Netflix posting a high-salary AI job advert is felt by many actors to be a double kick in the teeth.
The use of AI in film and TV is already here but, at a time when the union needs to negotiate its terms between actors and the studios, the studios are focusing on how they plan to expand their use of AI. Understandably, the fears of actors about AI have been brought to a head. For example, with writer, actor and comedian Rob Delaney describing the AI job advert as “ghoulish” and actor Brian Cox being widely quoted as saying “the worst aspect is the whole idea of AI and what AI can do to us”, it’s clear that there’s still some way to go before any kind of agreement can be reached between the studios and the unions and the major disruption to the industry can be stopped.
Tech News : $50,000 Apple Trainers … Made of Gold?
Renowned auction house and brokers of art, collectibles, jewellery, and real estate Sotheby’s is selling a super-rare pair of Apple-branded trainers for a guide price of $50,000 (circa £39K).
A One-Time Giveaway In The 90s
Sotheby’s says the Omega Sports Apple Computer Sneakers (Size 10.5 / 9.5 UK), boxed and with an alternative pair of red laces, were custom-made for Apple employees and were “a one-time giveaway at a National Sales Conference in the mid-’90s”. The trainers are white with the Apple logo on the side and an air-cushioned sole.
Explaining why they are so rare (hence the huge price tag), Sotheby’s says on its website: “Having never reached the general public, this particular pair of sneakers is one of the most obscure in existence and highly coveted on the resale market”.
Thousands Purchased Clothing From Apple In The 80s
What many people may not know is that more than 22,000 Apple customers purchased clothing and accessories from the brand in 1985 as part of a short-lived ‘Apple Collection’. For products outside Apple’s area of expertise, the company partnered with leading brands (e.g. Honda and Braun) to apply the Apple branding to a variety of white-label products.
Previous Pair
A previous pair of Apple-branded trainers, believed to have been prototypes and found in 2016 at a garage sale in Palo Alto, California, was put up for auction with a starting bid of $30,000.
The Rare Trainers Market
There is a booming collectables market for rare trainers, with cultural significance and scarcity being the factors that boost prices. Examples of trainers fetching particularly high prices include:
– In 2021, a Nike Air Yeezy 1 prototype, worn by rapper Kanye West on stage at the 2008 Grammy Awards, was sold for a staggering $1.8 million!
– Michael Jordan’s 1998 NBA Finals Air Jordan 13 trainers sold at auction for $2.2 million. Also, a 1985 pre-production sample pair of Nike’s first shoe for former NBA basketball star Michael Jordan fetched $560,000 at auction in 2020.
Why?
The market for rare trainers as investments and collectables is thought to have come about because, as generations from the 80s and 90s have become successful, trainers have become the cultural assets that they remember and value.
What Does This Mean For Your Business?
Apple was one of the earliest pioneers of the tech industry and, from very humble beginnings, has grown to become a $2.82 trillion company and one of the most recognisable brands, with its own traditionally fan-like user base. As such, any of its early products have become rare and valuable commodities with cultural significance. This value extends to its non-tech branded products, such as these trainers, and the fact that it’s unlikely that any other pairs have survived gives them a rarity boost in price, even though they were originally a free giveaway.
Scarcity and cultural significance are huge value-boosters in the rare trainer market which is part of an expanding range of investments for predominantly younger investors who value collectable products like trainers and sports cards which may only be a few decades old. In the digital world for example, newer investments have widened to include non-fungible tokens (NFTs) and, no doubt, other markets will develop in the near future for other products as wealthier people from advancing generations seek out collections and investments that have cultural significance and memories for them from relatively recent times.
Sustainability-in-Tech : Buildings Made Of Re-Usable Parts
Dutch architecture firm MVRDV and startup Madaster have taken sustainable design to new levels by creating a whole office building that’s made of 90 per cent re-usable parts.
Matrix One – Made of Re-Usable Components
The ‘Matrix One’ six-storey, 13,000-square-metre laboratory and office building acts as the main campus hub of the Matrix Innovation Center in Amsterdam and has been designed to be demountable (it can be easily dismantled). This is because the innovative building is made up of 120,000 reusable components.
The building has been constructed using simple connections like screws and bolts so that, when parts of the building are updated, components of the building can be detached and reused. For example, even the building’s floors have been made using prefabricated concrete slabs with no fixed connections, allowing them to be reused at the end of the building’s lifespan.
Other Sustainable Elements
Other sustainable design elements of the building include the large “social stairs” which people are encouraged to use rather than the lifts, solar energy generation from 1,000 square metres of solar panels on the roof, smart lighting and smart heating to reduce energy consumption, and ample bicycle parking.
Flexibility
Also, the fact that the offices in the building can easily be modified to become labs and vice versa, and labs can be easily upgraded with new systems to accommodate changing standards gives the building a flexibility that others don’t have.
MVRDV partner Frans de Witte says: “The building is state-of-the-art now, but it also acknowledges that the state-of-the-art is constantly changing. So, we made both the interior spaces and the technical installations that serve them as flexible as possible” and that “In the decades to come when the building is no longer cutting-edge, it will become a source to harvest materials from for other buildings.” For example, the building has been designed so that when it reaches the end of its useful life or gets renovated, its components can be made available for purchase on a second-hand marketplace, a kind of eBay for buildings.
Madaster’s Material Passport System
The Madaster platform, an online library of information on materials and products, provides a comprehensive material passport system, giving insight into the materials, products, and CO2 storage for over 120,000 individual components. As a result, over 90 per cent of the building’s materials can be reused later.
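As a rough illustration of the kind of information a material passport might record per component, here is a minimal sketch (the field names and example values are assumptions for illustration only, not Madaster’s actual schema):

```python
# Illustrative data model for one entry in a building's material passport.
# Field names and values are invented for illustration, not Madaster's schema.
from dataclasses import dataclass

@dataclass
class ComponentPassport:
    component_id: str     # unique ID, one of ~120,000 components in Matrix One
    material: str         # e.g. "prefabricated concrete slab"
    mass_kg: float        # how much material becomes available for reuse
    co2_stored_kg: float  # CO2 storage recorded per component
    connection: str       # e.g. "bolted": demountable, not glued or welded
    reusable: bool        # whether it can be harvested for other buildings

slab = ComponentPassport(
    component_id="MX1-FL-00042",
    material="prefabricated concrete slab",
    mass_kg=2400.0,
    co2_stored_kg=310.0,
    connection="dry-laid, no fixed connection",
    reusable=True,
)
```

Recording components at this level of detail is what makes the ‘eBay for buildings’ idea workable: a future buyer can see exactly what a part is, how it is fixed, and what reusing it saves.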
What Does This Mean For Your Business?
This is an example of a new way of looking at buildings, designing in sustainability and carbon reduction right from the bottom up. Not only does the idea of removable modular parts put together with simple connections allow for easy replacement and for parts to be re-used elsewhere, but it also gives flexibility (e.g. laboratories being swapped with offices) and recognises that state-of-the-art is constantly changing, allowing the building to be more easily kept up to date. The fact that the Matrix ONE building can also meet ambitious Amsterdam targets for energy use (and is certified BREEAM-Excellent) is another big bonus. It could be, as MVRDV partner Frans de Witte suggests, that this is an example of how buildings will work in the future, which would mean some substantial changes and new opportunities in the construction industry as well as in architecture.