Towards trusted AI Week 38 – releasing AI into the wild
Smart tech isn’t always as invincible as we want it to be, so think twice before blindly relying on it. Is AI ready to go out in the wild? ZDNet, ...
Secure AI Weekly, September 27, 2020
No matter how many ways AI can benefit our lives, there is always room to teach smart technologies further.
GCN, September 23, 2020
While a human brain can tell when objects are out of place given the context of a scene, machines are not there yet. It is important not only to teach computer vision systems to detect objects based on their appearance, but also to show those systems the conditions in which the objects are most commonly found. In a yearlong project, researchers at the University of California, Riverside are teaching smart systems which objects normally exist close to each other. The research, sponsored by a $1 million grant from the Defense Advanced Research Projects Agency, aims to understand how hackers use adversarial attacks against machine-vision systems. “That is the defense side of things,” Amit Roy-Chowdhury commented. When it comes to attacks, the researchers are able to “design attacks so that you don’t misplace the particular object that you are trying to attack. So, you would probably have to change other aspects of the scene, also.”
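To make the co-occurrence idea concrete, here is a minimal sketch of screening a detector's output against context statistics. This is not the UC Riverside method; the labels, probabilities, and threshold below are invented for illustration:

```python
# Minimal sketch of a context-consistency check for object detections.
# The labels, co-occurrence probabilities and threshold are invented for
# illustration; this is not the UC Riverside method.

# How often label pairs appear together in "normal" scenes, estimated
# offline from a trusted dataset (values here are made up).
COOCCURRENCE = {
    frozenset({"car", "road"}): 0.90,
    frozenset({"toaster", "kitchen"}): 0.80,
    frozenset({"car", "kitchen"}): 0.01,
    frozenset({"toaster", "road"}): 0.02,
}

def context_score(label, scene_labels):
    """Average co-occurrence of `label` with the other detected objects."""
    others = [l for l in scene_labels if l != label]
    if not others:
        return 1.0  # a lone object has no context to contradict
    scores = [COOCCURRENCE.get(frozenset({label, o}), 0.05) for o in others]
    return sum(scores) / len(scores)

def flag_out_of_context(detections, threshold=0.10):
    """Return detections whose context score is suspiciously low."""
    return [d for d in detections if context_score(d, detections) < threshold]

print(flag_out_of_context(["toaster", "road", "car"]))  # -> ['toaster']
```

Under a check like this, an attacker planting a single out-of-context object would also have to alter the rest of the scene to evade detection, which is the extra cost the quote above alludes to.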
Technology Review, September 24, 2020
Facebook's team is currently working on a new test of the AI technologies implemented by the social network. In the test, AI models are pitted against real people who try to trick them. Dubbed Dynabench, the test can be as difficult as participants want it to be.
“You end up with a system that is better at the test than humans are but not better at the overall task,” commented Douwe Kiela of Facebook AI Research. “It’s very deceiving, because it makes it look like we’re much further than we actually are.” According to Kiela, the current issue with NLP is that a language model is often good at mimicking language, but you can never say for sure how much the technology actually understands.
In the test, people interrogate the system: the ways of quizzing the models can vary widely, but the main focus for now is on language models, since language is the most common and comfortable way for people to communicate with a machine.
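As a hedged sketch of what such a human-vs-model loop might look like (the loop and examples are assumptions, not Facebook's Dynabench code), one can pit people against an off-the-shelf sentiment model from the Hugging Face transformers library and keep only the examples that fool it:

```python
# Hedged sketch of a Dynabench-style loop (an assumption, not Facebook's
# implementation): humans submit examples meant to fool the model, and
# the ones the model gets wrong are collected as the next, harder benchmark.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # generic stand-in model

collected = []  # model-fooling examples become the new test set

def submit(text, human_label):
    """A human in the loop proposes an example with its true label."""
    pred = classifier(text)[0]["label"]  # 'POSITIVE' or 'NEGATIVE'
    if pred != human_label:
        collected.append((text, human_label))  # model fooled: keep it
    return pred

# Sarcasm is a classic way to trip up sentiment models (illustrative):
submit("Oh great, another update that deletes my files.", "NEGATIVE")
print(collected)
```

The collected examples form a harder test set that tracks the overall task rather than a static benchmark, which is the gap Kiela describes.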
TechCrunch, September 22, 2020
Facebook has recently removed two networks of fake accounts: one of 155 Facebook accounts, 11 Pages, 9 groups and 7 Instagram accounts originating from China, and another of 57 accounts, 31 Pages and 20 Instagram accounts from the Philippines. Facebook officials explained that both networks broke the company’s rules against “coordinated inauthentic behavior on behalf of a foreign or government entity.” In this case, Facebook collaborated with Graphika, a social analytics company specializing in disinformation. Graphika specialists concluded that the fake accounts from China used faces created with an AI technique known as Generative Adversarial Networks, readily available online. “A year ago, this was a novelty,” said Graphika’s Ben Nimmo on Twitter. “Now it feels like every operation we analyse tries this at least once.”
The Chinese campaign included a “small volume” of activity aimed at the U.S. election; however, those actions did not gain much traction.
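One widely reported tell of GAN-generated portraits is that the eyes land at nearly the same pixel coordinates in every image, because generators such as StyleGAN are trained on aligned face photos. The sketch below (a rough heuristic with an invented threshold, not Graphika's methodology) uses OpenCV's stock Haar eye cascade to test a batch of profile photos for that tell:

```python
# Hedged sketch: GAN face generators trained on aligned datasets tend to
# place the eyes at near-identical pixel positions in every image, so a
# batch of profile photos whose eye midpoints barely move is suspicious.
# The threshold is made up; this is a weak heuristic, not Graphika's method.
import cv2
import numpy as np

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml"
)

def eye_midpoint(image_path):
    """Midpoint between the two detected eyes, or None if not exactly two."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    boxes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) != 2:
        return None
    centers = [(x + w / 2, y + h / 2) for (x, y, w, h) in boxes]
    return np.mean(centers, axis=0)

def batch_looks_generated(image_paths, max_std=5.0):
    """Flag a batch of portraits whose eye midpoints barely move."""
    points = [m for m in map(eye_midpoint, image_paths) if m is not None]
    if len(points) < 3:
        return False  # too few usable images to judge
    spread = np.std(np.array(points), axis=0)  # per-axis std in pixels
    return bool(np.all(spread < max_std))
```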
Adversa AI, Trustworthy AI Research & Advisory