Machine Learning Seminar of Dr. Ambra Demontis – “Adversarial Machine Learning: Attacking and Defending Machine Learning Systems”

October 19, 2020
7:00 pm - 8:00 pm

Virtual: Webinar, https://auckland.zoom.us/j/98241381875

Short Bio

Ambra Demontis is an Assistant Professor at the University of Cagliari, Italy. She received her M.Sc. degree (Hons.) in Computer Science and her Ph.D. in Electronic Engineering and Computer Science from the University of Cagliari, Italy, in 2014 and 2018, respectively. In 2016, she was a Visiting Student in the Machine Learning and Optimisation group at The University of Manchester, led by Prof. Gavin Brown. In October 2019, she was a Visiting Professor at the Northwestern Polytechnical University (NPU), Xi'an, where she gave lectures on Adversarial Machine Learning. She has been a member of the PRALab group since 2014. Her research interests include secure machine learning, kernel methods, biometrics, and computer security. She serves on the program committee of several conferences and workshops, such as IJCAI and DLS, and as a reviewer for several journals, such as TNNLS and Pattern Recognition.

Abstract

With the increasing availability of data and computing power, data-driven AI and machine-learning algorithms have achieved unprecedented success in many different applications. Recent deep-learning algorithms used for perception tasks like image recognition have even surpassed human performance on some specific datasets, such as the famous ImageNet. Despite their accuracy, skilled attackers can easily mislead these algorithms. In this talk, Ambra will focus on two of the best-known attacks that can be perpetrated against machine learning systems: evasion and poisoning. Evasion attacks allow the attacker to have a specific sample misclassified by modifying that sample; for example, the attacker alters his/her malicious program so that a machine-learning-based antivirus misclassifies it as legitimate. Poisoning attacks, instead, allow the attacker to have one or more samples misclassified without modifying those samples, by tampering with the data the system is trained on.

Ambra will start the talk by briefly introducing these attacks and explaining how they can be performed when the attacker has full knowledge of the system he/she would like to attack (the target). In practice, attackers often lack such knowledge; for example, cybersecurity companies usually avoid disclosing details about their antivirus products. Interestingly, attackers can often compute effective attacks even without it. Ambra will explain how such attacks are performed and discuss related findings, including challenges, open problems, and defenses against these attacks.

Finally, she will present SecML, a library developed by Pluribus One and PRALab that allows one to quickly evaluate the security of a machine learning system against the attacks above. The talk will consider different application examples, including object recognition in images and cybersecurity-related tasks such as malware detection.
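To make the evasion setting concrete, below is a minimal, self-contained Python sketch, written for this announcement rather than taken from the talk, and not using the SecML API. It applies an FGSM-style perturbation (Goodfellow et al., 2015) to a toy linear "malware detector"; the model, its weights, and the step size epsilon are all illustrative assumptions.

    import numpy as np

    # Toy stand-in for a machine-learning-based detector: a fixed linear
    # model f(x) = sigmoid(w.x + b), where a score above 0.5 means
    # "malicious". All weights and numbers here are illustrative.
    rng = np.random.default_rng(0)
    w = rng.normal(size=20)
    b = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def malicious_score(x):
        return sigmoid(w @ x + b)

    # A sample the detector correctly flags as malicious
    # (it lies on the positive side of the boundary: w.x > 0).
    x = 0.3 * w

    # FGSM-style evasion: take one step against the gradient of the
    # score with respect to the input. For this linear model, the
    # gradient of (w.x + b) with respect to x is simply w.
    epsilon = 0.8
    x_adv = x - epsilon * np.sign(w)

    print(f"score before attack: {malicious_score(x):.3f}")     # > 0.5
    print(f"score after attack:  {malicious_score(x_adv):.3f}")  # < 0.5 -> evaded

The sketch only illustrates the core idea of following the model's gradient to cross the decision boundary; SecML, discussed in the talk, provides ready-made implementations of evasion and poisoning attacks against real classifiers.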
 
To join, follow this link: https://auckland.zoom.us/j/98241381875