Towards Trusted AI Week 40 – Deepfake fools both humans and smart assistants

Secure AI Weekly · Jelena Sh · October 11, 2021


Deepfakes are one of the biggest problems in current artificial intelligence.


AI-generated deepfake voices can fool both humans and smart assistants

New Scientist, October 8, 2021

Deepfakes have long been one of the major problems accompanying the arrival of intelligent systems in everyday life, given their remarkable ability to fake photos, videos and audio files.

The fact that many of these technologies are free makes them all the more accessible. The article covers research on deepfake technology by Emily Wenger, examining publicly available software that can not only imitate the voice of a specific person, but do it well enough to deceive both people and voice-activated smart devices. One of the goals of the work was to investigate the audio versions of these tools, which generate realistic English speech from a sample of a person's voice. The study led to a disappointing discovery: both people and machines can be deceived by synthetic speech, and existing defenses against synthesized speech do not work. The findings highlight the need to raise awareness and develop new protections against synthetic speech for both humans and machines.

AI Weekly: EU facial recognition ban highlights need for U.S. legislation

Venture Beat, October 8, 2021

Last week, a resolution calling for a ban on law enforcement use of facial recognition technology in public places was introduced by the European Parliament, the body responsible for adopting European Union (EU) legislation. The resolution aims to restrict the use of remote biometric identification unless a "serious" crime, such as kidnapping or terrorism, is involved.

This approach differs sharply from that of U.S. agencies, which continue to use facial recognition despite research demonstrating possible ethnic, racial and gender bias. The U.S. Government Accountability Office noted in a recent report that 10 agencies, including the Departments of Agriculture, Commerce, Defense and Homeland Security, plan to expand their use of facial recognition between 2020 and 2023 as they deploy as many as 17 different facial recognition systems.

Concurrently, independent tests of vendor systems by the Gender Shades Project and others have shown that facial recognition technologies are susceptible to a range of biases, compounded by misuse in the field. And, as the EU initiatives show, it is clear that more detailed guidelines will be needed to regulate facial recognition technologies used in the public sector.

Deepfake technology could soon allow anyone to create Hollywood-quality visual effects

CBS News, October 10, 2021

Experts say that in the next few years, we will all be able to create photorealistic videos and sound recordings using artificial intelligence software, and that next-generation "synthetic media" will be fully generated by computers.

Bill Whitaker decided to test a breathtaking deepfake transformation himself, one in which he becomes 30 years younger. The video was broadcast in the latest edition of 60 Minutes on CBS on October 10. In it, he demonstrated the magic of synthetic media, which enables the creation of photorealistic videos and sound recordings generated entirely by computers.

Nina Schick, a London-based researcher and political consultant, comments: “The incredible thing about deepfakes and synthetic media is the pace of acceleration when it comes to the technology. In five to seven years, we are basically looking at a trajectory where any single creator — so a YouTuber, a TikToker — will be able to create the same level of visual effects that is only accessible to the most well-resourced Hollywood studio today.”

Written by: Jelena Sh
