Although the human eye is often compared to a facial recognition system, the results they produce are far from identical. Looking at the same picture, AI and humans can identify different people, genders, ages, hair colors, and even races.
This happens because of biases and security vulnerabilities in AI known as adversarial examples. Cybercriminals can use such examples to attack facial recognition systems, autonomous cars, medical imaging, financial algorithms, or almost any other AI technology.
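The core idea behind such attacks can be sketched with a toy fast-gradient-sign (FGSM-style) perturbation against a made-up linear classifier. Everything below (the model, weights, and epsilon budget) is invented purely for illustration; it has nothing to do with the exhibition's actual technique, which has not been published.

```python
import numpy as np

# Toy sketch of an adversarial perturbation in the FGSM style.
# This is NOT the exhibition's method -- just the general idea,
# shown on an invented linear "identity" classifier.
rng = np.random.default_rng(0)

w = rng.normal(size=100)   # weights of the toy classifier
x = 0.5 * w                # a clean input the model firmly labels identity "A"

def predict(v):
    """Classify an input: positive score -> identity "A", otherwise "B"."""
    return "A" if w @ v > 0 else "B"

# FGSM steps against the sign of the gradient of the score with respect
# to the input; for a linear model that gradient is exactly w.
eps = 1.0                         # per-feature perturbation budget (assumed)
x_adv = x - eps * np.sign(w)      # small, bounded change to every feature

print(predict(x))                 # the clean input
print(predict(x_adv))             # the perturbed input gets a different label
print(np.max(np.abs(x_adv - x)))  # no feature moved by more than eps
```

Real attacks on deep face-recognition models follow the same logic, but they compute the gradient through the whole network and constrain the perturbation so that humans cannot perceive the change.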
We have every reason to believe that anyone can fool AI. To demonstrate this, we have created an unconventional virtual Art Exhibition. It includes 100 “Mona Lisa” paintings: to people they all look nearly identical to the original, yet AI recognizes them as 100 different celebrities. You can try it yourself.
The official world's first Art Gallery of exploits that can deceive AI is now open. Trick artificial intelligence!
Go to Art Exhibition

Oh my, this is so very clever: this is the sort of practical experiment we need to demonstrate the fundamental differences between human and artificial knowledge agents. If there were a #datagovernance Oscar, I’m nominating Adversa AI. Here’s why:
1. Leonardo is known for playing with his audience
2. The Mona Lisa is one of his greatest tricks
3. Humans know this but just see 100 copies (note this is now a 100% probability distribution)
4. We know that the human eye is much better than the artificial eye… but apparently no longer
…
So now we are vainly looking for details in a painting which made an art out of hiding details because our tricorder just told us this is a wall with 100 different faces.
Wow. This made my week…

Rohan Light, SPA at Capital & Coast District Health Board, Fellow at The RSA
The exhibition is predicated on the concept of an NFT sale. Security professionals who might dismiss NFTs as popular contemporary gimmickry should not be put off – the concept is used merely to attract a wider public audience to the insecurity of facial recognition. The purpose of the exhibition is altogether more serious than NFTs.
I think it’s the most creative security research campaign I’ve ever seen.
In this way, we want to draw public attention to the insecurity of artificial intelligence and the need to combine efforts to protect AI.
We plan to release detailed information on how we made this possible later. Subscribe to be the first to learn the technical details of this project.
We do not plan to earn anything from this sale. Any meaningful proceeds will fund public activities related to trusted AI initiatives that will be open to the community.