The lack of transparency of neural models makes them vulnerable to various types of attacks we might not yet be aware of

Vered Shwartz / AI Researcher at Allen Institute for AI and University of Washington

Background

ML-based products need to be tested for correctness, robustness, fairness, security, and privacy before they are deployed. The pace of research in fields that use ML (NLP, computer vision) has accelerated, which is largely a positive thing. But I think it's wise not to rush to deploy every new technology. The lack of transparency of neural models makes them vulnerable to various types of attacks we might not yet be aware of.
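
To make the robustness point concrete, here is a minimal sketch of one such pre-deployment check: perturb each input slightly and measure how often the model's prediction survives. Everything in it is illustrative and not from the interview; `classify` is a hypothetical stand-in for a real model's predict function, and the character-swap perturbation is deliberately the simplest possible input noise.

```python
import random


def classify(text: str) -> str:
    """Toy classifier, standing in for a real model's prediction."""
    return "positive" if "good" in text.lower() else "negative"


def perturb(text: str, rng: random.Random) -> str:
    """Introduce a small typo by swapping two adjacent characters."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    chars = list(text)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def robustness_rate(texts, n_trials: int = 20, seed: int = 0) -> float:
    """Fraction of inputs whose prediction is stable under small perturbations."""
    rng = random.Random(seed)
    stable = 0
    for text in texts:
        original = classify(text)
        if all(classify(perturb(text, rng)) == original for _ in range(n_trials)):
            stable += 1
    return stable / len(texts)


if __name__ == "__main__":
    examples = ["This product is good", "Terrible experience", "Good value overall"]
    print(f"Stable under perturbation: {robustness_rate(examples):.0%}")
```

A real test suite would swap in the production model for `classify` and use task-appropriate perturbations (synonym substitution, paraphrasing, adversarially chosen edits), but the structure of the check stays the same: deploy only when the stability rate meets a predefined threshold.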