CLTC Research Exchange, Day 3: Long-Term Security Implications of AI/ML Systems
January 15, 2021, by cltc2015
Read more on CLTC
How triggerless backdoors could dupe AI models without manipulating their input data
December 21, 2020, by Ben Dickson
Read more on The Next Web
The Data Problem Stalling AI
December 8, 2020, by Gregory Vial (HEC Montréal), Jinglu Jiang (Binghamton University), Tanya Giannelia (HEC Montréal), and Ann-Frances Cameron (HEC Montréal)
Read more on MIT Sloan Management Review
When AI Systems Fail: Introducing the AI Incident Database
November 18, 2020, by Sean McGregor
This Company Uses AI to Outwit Malicious AI
December 2, 2020, by Will Knight
Read more on WIRED
A neural network learns when it should not be trusted
November 20, 2020, by Daniel Ackerman, MIT News Office
The way we train AI is fundamentally flawed
November 19, 2020, by Will Heaven
Read more on MIT Technology Review
AI can protect all energy firms from cyberattack. Here’s how
November 17, 2020, by Leo Simonovich
Read more on World Economic Forum
A Scoville Heat Scale For Measuring The Progress Of Emerging Technologies In 2021
November 16, 2020, by Chuck Brooks
Read more on Forbes
Commentary: ‘You may be hacked’ and other things doctors should tell you
November 16, 2020, by Channel NewsAsia
Read more on Channel NewsAsia