SAFETY & ROBUSTNESS

How triggerless backdoors could dupe AI models without manipulating their input data

In the past few years, researchers have shown growing interest in the security of artificial intelligence systems. There is particular interest in how malicious actors can attack and compromise machine learning algorithms, the subset of AI increasingly used across domains. Among the security issues being studied are backdoor attacks, in which a bad actor hides malicious behavior in a machine learning model during the training phase and activates it once the AI enters production. Until now, backdoor attacks have faced certain practical difficulties because they largely relied on visible triggers. But new research by AI scientists at the…

This story continues at The Next Web
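The teaser stops before the paper's details, but the visible-trigger mechanism it contrasts against can be made concrete. Below is a minimal, hypothetical Python sketch of a classic trigger-based backdoor installed through training-data poisoning; the patch trigger, the poison_dataset helper, TARGET_CLASS, and the stand-in data are all illustrative assumptions, not code from the research the article covers:

    import numpy as np

    TRIGGER_SIZE = 3    # side length of the white patch used as the trigger
    TARGET_CLASS = 7    # label the attacker wants triggered inputs mapped to
    POISON_RATE = 0.05  # fraction of the training set the attacker poisons

    def stamp_trigger(image):
        """Return a copy of the image with a small white patch in one
        corner (assumes pixel values in [0, 1])."""
        out = image.copy()
        out[-TRIGGER_SIZE:, -TRIGGER_SIZE:] = 1.0
        return out

    def poison_dataset(images, labels, rng):
        """Stamp the trigger on a random subset of training images and
        flip their labels to TARGET_CLASS. A model trained on the result
        behaves normally on clean inputs but learns to predict
        TARGET_CLASS whenever the visible patch appears."""
        images, labels = images.copy(), labels.copy()
        n_poison = int(POISON_RATE * len(images))
        idx = rng.choice(len(images), size=n_poison, replace=False)
        for i in idx:
            images[i] = stamp_trigger(images[i])
            labels[i] = TARGET_CLASS
        return images, labels

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Stand-in data: 1,000 fake 28x28 grayscale images, 10 classes.
        X = rng.random((1000, 28, 28))
        y = rng.integers(0, 10, size=1000)
        Xp, yp = poison_dataset(X, y, rng)
        print(f"{int((yp != y).sum())} training labels flipped to class {TARGET_CLASS}")

Training a classifier on the poisoned set would leave it accurate on clean inputs while misclassifying any input carrying the patch; the triggerless attack the headline describes dispenses with this kind of input manipulation entirely.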







