Aniruddha Saha: Backdoor Attacks in Computer Vision: Towards Adversarially Robust Machine Learning Models 

May 18, 2022

When:
May 20, 2022 @ 11:00 am – 12:00 pm

Aniruddha Saha

Ph.D. Candidate, Computer Science

University of Maryland, Baltimore County


Title: Backdoor Attacks in Computer Vision: Towards Adversarially Robust Machine Learning Models

Abstract: An adversary is an actor with malicious intent whose goal is to disrupt the normal functioning of a machine learning pipeline. Research has shown that an adversary can tamper with a model's training by injecting misrepresentative data (poisons) into the training set. Moreover, if given control over the training process as a third party, they can deliver a model that deviates from normal behavior. These are called backdoor attacks. The manipulation is done in such a way that the victim's model malfunctions only when a trigger is pasted onto a test input. For instance, a backdoored model in a self-driving car might work accurately for days before it fails to detect a pedestrian when the adversary decides to exploit the backdoor. Vulnerability to backdoor attacks is dangerous when deep learning models are deployed in safety-critical applications.
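To make the threat model concrete, here is a minimal Python sketch of pasting a trigger patch onto an image. The patch pattern, size, and placement are illustrative assumptions, not the specifics of any attack presented in the talk.

import numpy as np

def paste_trigger(image, trigger, x=0, y=0):
    """Return a copy of `image` (H x W x C) with `trigger` pasted at (y, x)."""
    patched = image.copy()
    h, w = trigger.shape[:2]
    patched[y:y + h, x:x + w] = trigger
    return patched

# Example: an 8x8 checkerboard patch in the top-left corner of a dummy image.
trigger = np.indices((8, 8)).sum(axis=0) % 2 * 255
trigger = np.stack([trigger] * 3, axis=-1).astype(np.uint8)
clean = np.zeros((32, 32, 3), dtype=np.uint8)  # stand-in for a real training image
poisoned = paste_trigger(clean, trigger)       # same image, now carrying the trigger

The same operation serves the adversary twice: pasting the trigger on training data (for simple poisoning attacks) and pasting it on a test input to activate the backdoor at inference time.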

In this talk, I show ways in which state-of-the-art deep learning methods for computer vision are vulnerable to backdoor attacks. I first introduce a stealthy, feature-collision based hidden trigger backdoor attack on image classification, which lets an adversary keep the trigger hidden in the poisoned images to evade detection. I then show that state-of-the-art self-supervised methods for learning visual representations, which rely on the similarity of augmented views, are also vulnerable to backdoor attacks. Backdoor attacks are more practical in self-supervised learning because it is not common practice to inspect large-scale unlabeled data before training. The existence of these vulnerabilities calls for the development of defenses. Finally, I present a method for detecting backdoored models that is fast and universally effective across triggers and model architectures.
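As a rough illustration of the feature-collision idea, the sketch below optimizes a poison image to stay visually close to a source image in pixel space while matching the features of a trigger-patched target image. The names model (a feature extractor), source_img, patched_target, and the perturbation budget eps are assumed placeholders, not the exact procedure from the talk.

import torch

def craft_poison(model, source_img, patched_target, eps=8/255, steps=100, lr=0.01):
    # Features the poison should collide with: a target image with the trigger pasted on.
    with torch.no_grad():
        target_feat = model(patched_target)
    poison = source_img.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Feature-collision loss: match the patched target in feature space.
        loss = (model(poison) - target_feat).pow(2).sum()
        loss.backward()
        opt.step()
        # Project back near the source image so the poison stays stealthy in pixel space.
        with torch.no_grad():
            poison.clamp_(min=source_img - eps, max=source_img + eps)
            poison.clamp_(0.0, 1.0)
    return poison.detach()

Because the finished poison contains no visible trigger, inspecting the poisoned training images reveals nothing unusual; the trigger only appears when the adversary uses it at test time.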

I believe research in this area is essential for building more robust, safe, and trustworthy deep learning methods.

Bio: Aniruddha Saha is a Ph.D. Candidate in Computer Science at the University of Maryland, Baltimore County, advised by Dr. Hamed Pirsiavash. His current research is in computer vision and adversarial machine learning. His interests include trustworthy machine learning, self-supervised learning, learning from limited data, medical imaging, computational photography, and computer vision in sports.


Join Zoom Meeting
https://wse.zoom.us/j/97253571804

Meeting ID: 972 5357 1804
One tap mobile
+13017158592,,97253571804# US (Washington DC)
+13126266799,,97253571804# US (Chicago)

Dial by your location
+1 301 715 8592 US (Washington DC)
+1 312 626 6799 US (Chicago)
+1 646 558 8656 US (New York)
+1 253 215 8782 US (Tacoma)
+1 346 248 7799 US (Houston)
+1 669 900 6833 US (San Jose)
Meeting ID: 972 5357 1804
Find your local number: https://wse.zoom.us/u/aPGHr7JfS

Join by SIP
[email protected]

Join by H.323
162.255.37.11 (US West)
162.255.36.11 (US East)
115.114.131.7 (India Mumbai)
115.114.115.7 (India Hyderabad)
213.19.144.110 (Amsterdam Netherlands)
213.244.140.110 (Germany)
103.122.166.55 (Australia Sydney)
103.122.167.55 (Australia Melbourne)
149.137.40.110 (Singapore)
64.211.144.160 (Brazil)
149.137.68.253 (Mexico)
69.174.57.160 (Canada Toronto)
65.39.152.160 (Canada Vancouver)
207.226.132.110 (Japan Tokyo)
149.137.24.110 (Japan Osaka)
Meeting ID: 972 5357 1804

