Towards Trusted AI Week 45 – Hand-knitting to avoid face detection, an adversarial policy attacks blind spots in Go AI, and the Copilot lawsuit

Secure AI Weekly + Trusted AI Blog | Jelena Sh | November 9, 2022


Knitting an anti-surveillance jumper

KDD&Co, November 2, 2022
Kate Davies Designs, Ottilia West

Want a new jumper? And what if we told you it might also let you avoid facial recognition software?

The author of the article, a software engineer who loves the natural combination of the analogue and the digital, decided to write about an unconventional jumper that is a real piece of art. She set out to check how hard it is to knit a garment based on HyperFace, a pattern designed to be wrongly interpreted as faces by face detection systems.

Spoiler: it was not much fun to knit, as the pattern was never meant to be hand-knitted. Read the details of the knitting experiment by following the link.
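For the curious, one quick way to check whether such a pattern confuses an off-the-shelf detector is to run a photo of the fabric through OpenCV's classic Haar-cascade face detector. The sketch below is only an illustration under that assumption; the file name jumper_pattern.jpg is a hypothetical placeholder, and any boxes reported over plain knitwear would be false detections of the kind HyperFace is designed to provoke.

```python
import cv2

# Load the frontal-face Haar cascade that ships with opencv-python.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Hypothetical input: a photo of the knitted pattern.
image = cv2.imread("jumper_pattern.jpg")
if image is None:
    raise FileNotFoundError("jumper_pattern.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Any rectangles returned here are regions the detector treats as faces.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detector reports {len(faces)} face-like regions in the pattern")

# Draw the detections for visual inspection.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("jumper_pattern_detections.jpg", image)
```

Modern deep-learning face detectors behave differently from Haar cascades, so a result from this sketch says nothing definitive about other systems; it simply shows how one might probe a single detector.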

 

New Go-playing trick defeats world-class Go AI—but loses to human amateurs

ArsTechnica, November 7, 2022
Benj Edwards

Everything new is well-forgotten old, and in the era of AI, machine learning (ML), and deep learning (DL), Go has again become an object of research. Go is an abstract strategy board game for two players, invented more than 2,500 years ago, in which the aim is to surround more territory than the opponent.

There was a time when the best human Go players could defeat the strongest Go-playing AI. Times have changed: DeepMind’s AlphaGo dealt with that, and top human professionals are now routinely defeated. Yet AI researchers have published a paper describing a noteworthy twist: an adversarial policy that exploits blind spots to defeat a world-class Go AI, even though the same trick loses to human amateurs.

“The research shows that AI systems that seem to perform at a human level are often doing so in a very alien way, and so can fail in ways that are surprising to humans… This result is entertaining in Go, but similar failures in safety-critical systems could be dangerous.”

Adam Gleave, a Ph.D. candidate at UC Berkeley and one of the paper’s co-authors

Read more by clicking on the title.

 

US programmer sues Microsoft and OpenAI for open-source piracy 

Siliconrepublic, November 7, 2022
Vish Gain

Microsoft, GitHub and OpenAI face a lawsuit over GitHub Copilot, an AI-based tool that helps developers write code more easily and quickly by suggesting code as they type. Copilot is powered by OpenAI’s Codex model and trained on public repositories.

Programmer and lawyer Matthew Butterick brought the lawsuit, alleging that GitHub, which is owned by Microsoft, trains Copilot on public source code and uses developers’ work without attribution, thereby violating their rights. GitHub, meanwhile, has millions of users.

“AI needs to be fair and ethical for everyone. If it’s not, then it can never achieve its vaunted aims of elevating humanity. It will just become another way for the privileged few to profit from the work of the many.”

Matthew Butterick

 

