16th ACM Workshop on
Artificial Intelligence and Security
November 30, 2023 — Copenhagen
co-located with the 30th ACM Conference on Computer and Communications Security
Photo: Pixabay

Keynotes

Title: When decentralization, security, and privacy are not friends

Carmela Troncoso, Associate Professor @ EPFL

Carmela Troncoso is an Associate Professor at EPFL (Switzerland) where she heads the SPRING Lab. Her work focuses on analyzing, building, and deploying secure and privacy-preserving systems. Troncoso holds a Ph.D. in engineering from KULeuven. Her work on privacy engineering has received multiple awards, and she has been named 40 under 40 in technology by Fortune in 2020.

Decentralization is often seen as a main tool to achieve security and privacy. It has worked in a number of systems, for which decentralization help protect identities and data of users. Thus, it is not a surprise that a new trend of machine learning algorithms opt for decentralization to increase data privacy. In this talk, we will analyze decentralized machine learning proposals and show how they not only don’t improve privacy or robustness, but also increase the surface of attack resulting in less protection than federated alternatives.

Title: Emerging challenges in securing frontier AI systems

Mikel Rodriguez, AI Red Teaming @ Google Deepmind

Dr. Mikel Rodriguez has spent over two decades working in the public and private sector securing the application of Artificial Intelligence in high-stakes consequential environments. At Google DeepMind, Mikel defines and leads the cross-functional AI Red and Blue “ReBl” team to ensure that foundational models are battle-tested with the rigor and scrutiny of real-world adversaries, and help drive research and tooling that will make this red-blue mindset scalable in preparation for AGI. In his role as the Managing Director at MITRE Labs, Mikel built and led the AI Red Team that focuses on deployed AI systems that can be susceptible to bias in their data, attacks involving evasion, data poisoning, model replication; and the exploitation of software flaws to deceive, manipulate, compromise, and render them ineffective. Mikel’s team worked on developing methods to mitigate bias and defend against emerging ML attacks, securing the AI supply chain, and generally ensuring the trustworthiness of AI systems so they perform as intended in mission-critical environments. While at MITRE, his team in collaboration with many industry partners, published ATLAS (Adversarial Threat Landscape for AI Systems) - a knowledge base of adversary tactics, techniques, and case studies for machine learning (ML) systems based on real-world observations, demonstrations from ML red teams and security groups, and the state of the possible from academic research. Mikel firmly believes that AI’s potential will only be realized through collaborations that help produce reliable, resilient, fair, interpretable, privacy preserving, and secure technologies. Mikel received his Ph.D. in 2010 while working at University of Central Florida’s computer vision lab with professor Mubarak Shah. He then moved to Paris where he worked as a post-doctoral research fellow at INRIA.

As advanced AI assistants have become more general purpose, sophisticated and capable, they create new opportunities in a variety of fields, such as education, science and healthcare. Yet the rapid speed of progress has made it difficult to adequately prepare for, or even understand, security and privacy vulnerabilities that may emerge as a result of these new capabilities. Several foreseeable developments in advanced AI assistants including tool use, multimodality, planning and deeper reasoning, and memory have the potential to significantly expand the security and misuse risk profile of these systems. In this talk we will explore a number of best practices and future research directions that can help us better prepare society for managing these risks.

Title: Trustworthy AI and A Cybersecurity Perspective on Large Language Models

Mario Fritz, Faculty @ CISPA Helmholtz Center for Information Security

Prof. Dr. Mario Fritz is a faculty at the CISPA Helmholtz Center for Information Security, an honorary professor at Saarland University, and a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS). Until 2018, he led a research group at the Max Planck Institute for Computer Science. Previously, he was a PostDoc at the International Computer Science Institute (ICSI) and UC Berkeley after receiving his PhD from TU Darmstadt and studying computer science at FAU Erlangen-Nuremberg. His research focuses on trustworthy artificial intelligence, especially at the intersection of information security and machine learning. He is Associate Editor of the journal "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and has published over 100 articles in top conferences and journals. Currently, he is coordinating the Network of Excellence in AI "ELSA -- European Lighthouse on Secure and Safe AI" which is an ELLIS (https://ellis.eu/) initiative that is funded by the EU and connects universities, research institutes, and industry partners across Europe (elsa-ai.eu).

As AI technology is getting increasingly mature, we see a broad deployment of AI in many application domains. However, this increases the demands on properties related to trustworthiness like robustness, privacy, transparency, accountability as well as explainability. Besides the trustworthiness of AI, misinformation and deepfakes are becoming key concerns in terms of the negative effects that AI can have on society. I'll discuss the larger ecosystem around misinformation and different approaches to mitigate these pressing issues in the future. Finally, Large Language Models (LLMs) like GPT4 have demonstrated how AI deployment is reaching millions of users, which in turn puts a magnifying glass on some of the issues mentioned before. I'll demonstrate cybersecurity concerns and threats that emerge from the recent trend of application-integrated LLMs and AI assistants as well as sketch how future development will initiate new research challenges in this domain.

Programme

The following times are on CET (UTC +1).

09:00–9:15 Opening and Welcome
9:15–10:00 Keynote 1
When decentralization, security, and privacy are not friends
Carmela Troncoso, Associate Professor @ EPFL
10:00–10:20 Coffee break
10:20-11:00 Spotlights
When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence
Authors: Benoit Coqueret (Univ. Rennes, Inria), Mathieu Carbone (Thales ITSEF), Olivier Sentieys (Univ. Rennes, Inria), Gabriel Zaid (Thales ITSEF)
Lookin' Out My Backdoor! Investigating Backdooring Attacks Against DL-driven Malware Detectors
Authors: Mario D'Onghia (Politecnico di Milano), Federico Di Cesare (Politecnico di Milano), Luigi Gallo (Cyber Security Lab, Telecom Italia), Michele Carminati (Politecnico di Milano), Mario Polino (Politecnico di Milano), Stefano Zanero (Politecnico di Milano)
Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
Authors: Sahar Abdelnabi (CISPA Helmholtz Center for Information Security), Kai Greshake (Saarland University, sequire technology GmbH), Shailesh Mishra (Saarland University), Christoph Endres (sequire technology GmbH), Thorsten Holz (CISPA Helmholtz Center for Information Security), Mario Fritz (CISPA Helmholtz Center for Information Security)
Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning
Authors: Chris Hicks (The Alan Turing Institute), Vasilios Mavroudis (The Alan Turing Institute), Myles Foley (Imperial College London), Thomas Davies (The Alan Turing Institute), Kate Highnam (Imperial College London), Tim Watson (The Alan Turing Institute)
11:00–12:00 Poster session 1
12:00–13:30 Lunch
13:30–14:15 Keynote 2
Emerging challenges in securing frontier AI systems
Mikel Rodriguez, AI Red Teaming @ Google Deepmind
14:15–14:45 Break
14:45–15:30 Keynote 3
Trustworthy AI and A Cybersecurity Perspective on Large Language Models
Mario Fritz, Faculty @ CISPA Helmholtz Center for Information Security
15:30–16:30 Poster session 2
16:30–16:45 Closing remarks

Accepted Papers

You can find the accepted papers in the proceedings.

Privacy-Preserving Machine Learning (Poster session 1)
Differentially Private Logistic Regression with Sparse Solutions
Authors: Amol Khanna (Booz Allen Hamilton), Fred Lu (Booz Allen Hamilton; University of Maryland, Baltimore County), Edward Raff (Booz Allen Hamilton; University of Maryland, Baltimore County), Brian Testa (Air Force Research Laboratory)
Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models
Authors: Florian A. Hölzl (Artifical Intelligence in Medicine, Technical University of Munich), Daniel Rueckert (Artifical Intelligence in Medicine, Technical University of Munich), Georgios Kaissis (Artifical Intelligence in Medicine, Technical University of Munich)
Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile
Authors: Tyler LeBlond (Booz Allen Hamilton), Joseph Munoz (Booz Allen Hamilton), Fred Lu (Booz Allen Hamilton), Maya Fuchs (Booz Allen Hamilton), Elliot Zaresky-Williams (Booz Allen Hamilton), Edward Raff (Booz Allen Hamilton), Brian Testa (Air Force Research Laboratory)
Information Leakage from Data Updates in Machine Learning Models
Authors: Tian Hui (The University of Melbourne), Farhad Farokhi (University of Melbourne), Olga Ohrimenko (The University of Melbourne)
Membership Inference Attacks Against Semantic Segmentation Models
Authors: Tomas Chobola (Helmholtz AI), Dmitrii Usynin (Department of Computing, Imperial College London; Artificial Intelligence in Medicine and Healthcare, TUM), Georgios Kaissis (Artificial Intelligence in Medicine and Healthcare, TUM; Institute for Machine Learning in Biomedical Imaging, Helmholtz Zentrum München; Department of Computing, Imperial College London)
Utility-preserving Federated Learning
Authors: Reza Nasirigerdeh (Technical University of Munich), Daniel Rueckert (Technical University of Munich), Georgios Kaissis (Technical University of Munich)
Machine Learning for Cybersecurity (Poster session 1)
Certified Robustness of Static Deep Learning-based Malware Detectors against Patch and Append Attacks
Authors: Daniel Gibert (CeADAR, University College Dublin), Giulio Zizzo (IBM Research Europe), Quan Le (CeADAR, University College Dublin)
AVScan2Vec: Feature Learning on Antivirus Scan Data for Production-Scale Malware Corpora
Authors: Robert J. Joyce (Booz Allen Hamilton, University of Maryland Baltimore County), Tirth Patel (University of Maryland Baltimore County), Charles Nicholas (University of Maryland Baltimore County), Edward Raff (Booz Allen Hamilton, University of Maryland Baltimore County)
Drift Forensics of Malware Classifiers
Authors: Theo Chow (King's College London), Zeliang Kan (King's College London), Lorenz Linhardt (Technische Universität Berlin), Lorenzo Cavallaro (University College London), Daniel Arp (Technische Universität Berlin), Fabio Pierazzi (King's College London)
Lookin' Out My Backdoor! Investigating Backdooring Attacks Against DL-driven Malware Detectors
Authors: Mario D'Onghia (Politecnico di Milano), Federico Di Cesare (Politecnico di Milano), Luigi Gallo (Cyber Security Lab, Telecom Italia), Michele Carminati (Politecnico di Milano), Mario Polino (Politecnico di Milano), Stefano Zanero (Politecnico di Milano)
Reward Shaping for Happier Autonomous Cyber Security Agents
Authors: Elizabeth Bates (The Alan Turing Institute), Vasilios Mavroudis (The Alan Turing Institute), Chris Hicks (The Alan Turing Institute)
Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors
Authors: Biagio Montaruli (SAP Security Research, EURECOM), Luca Demetrio (Università degli Studi di Genova), Maura Pintor (University of Cagliari), Battista Biggio (University of Cagliari), Luca Compagna (SAP Security Research), Davide Balzarotti (EURECOM)
Machine Learning Security (Poster session 2)
Certifiers Make Neural Networks Vulnerable to Availability Attacks
Authors: Tobias Lorenz (CISPA Helmholtz Center for Information Security), Marta Kwiatkowska (University of Oxford), Mario Fritz (CISPA Helmholtz Center for Information Security)
Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
Authors: Sahar Abdelnabi (CISPA Helmholtz Center for Information Security), Kai Greshake (Saarland University, sequire technology GmbH), Shailesh Mishra (Saarland University), Christoph Endres (sequire technology GmbH), Thorsten Holz (CISPA Helmholtz Center for Information Security), Mario Fritz (CISPA Helmholtz Center for Information Security)
Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning
Authors: Chris Hicks (The Alan Turing Institute), Vasilios Mavroudis (The Alan Turing Institute), Myles Foley (Imperial College London), Thomas Davies (The Alan Turing Institute), Kate Highnam (Imperial College London), Tim Watson (The Alan Turing Institute)
The Adversarial Implications of Variable-Time Inference
Authors: Dudi Biton (Ben Gurion University of the Negev), Aditi Misra (University of Toronto), Efrat Levy (Ben Gurion University of the Negev), Jaidip Kotak (Ben Gurion University of the Negev), Ron Bitton (Ben Gurion University of the Negev), Roei Schuster (Wild Moose), Nicolas Papernot (University of Toronto and Vector Institute), Yuval Elovici (Ben Gurion University of the Negev), Ben Nassi (Cornell Tech)
Dictionary Attack on IMU-based Gait Authentication
Authors: Rajesh Kumar (Bucknell University), Can Isik (Syracuse University), CHILUKURI MOHAN (Syracuse University)
When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence
Authors: Benoit Coqueret (Univ. Rennes, Inria), Mathieu Carbone (Thales ITSEF), Olivier Sentieys (Univ. Rennes, Inria), Gabriel Zaid (Thales ITSEF)
Task-Agnostic Safety for Reinforcement Learning
Authors: Md Asifur Rahman (Wake Forest University), Sarra Alqahtani (Wake Forest University)
Broken Promises: Measuring Confounding Effects in Learning-based Vulnerability Discovery
Authors: Erik Imgrund (SAP Security Research), Tom Ganz (SAP Security Research), Martin Härterich (SAP Security Research), Niklas Risse (Max-Planck-Institute for Security and Privacy), Lukas Pirch (Technische Universität Berlin), Konrad Rieck (Technische Universität Berlin)
Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition
Authors: Luke E. Richards (University of Maryland, Baltimore County), Edward Raff (University of Maryland, Baltimore County; Booz Allen Hamilton), Cynthia Matuszek (University of Maryland, Baltimore County)

Best Paper Award

As in the previous editions of this workshop, we would honor outstanding contributions. To this end, we awarded the best paper, selected by the reviewers among all the submitted papers.

The 2023 AISec Best Paper Award was given to:
Sahar Abdelnabi (CISPA Helmholtz Center for Information Security), Kai Greshake (Saarland University, sequire technology GmbH), Shailesh Mishra (Saarland University), Christoph Endres (sequire technology GmbH), Thorsten Holz (CISPA Helmholtz Center for Information Security), Mario Fritz (CISPA Helmholtz Center for Information Security) for the paper Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection.

Committee

Workshop Chairs

Steering Committee

Program Committee

  • Alessandro Brighente (University of Padova)
  • Ambra Demontis (University of Cagliari)
  • Andy Applebaum (Apple)
  • Angelo Sotgiu (CINI Consortium / University of Cagliari)
  • Ankit Gangwal (IIIT Hyderabad)
  • Antonio Emanuele Cinà (University of Genoa)
  • Arjun Nitin Bhagoji (University of Chicago)
  • Azqa Nadeem (TU Delft)
  • Battista Biggio (University of Cagliari)
  • Benjamin M. Ampel (University of Arizona)
  • Bobby Filar (Sublime Security)
  • Boyang Zhang (CISPA Helmholtz Center for Information Security)
  • Brad Miller (Twitter)
  • Chawin Sitawarin (UC Berkeley)
  • Christian Wressnegger (Karlsruhe Institute of Technology (KIT))
  • Clarence Chio (UC Berkeley)
  • Clinton Cao (Delft University of Technology)
  • Daniele Angioni (Università degli Studi di Cagliari)
  • Daniël Vos (Delft University of Technology )
  • Davide Maiorca (University of Cagliari, Italy)
  • Dmitrijs Trizna (University of Genova, Microsoft, Sapienza University of Rome)
  • Dongdong She (Columbia University/HKUST)
  • Edoardo Debenedetti (ETH Zurich)
  • Erwin Quiring (ICSI Berkeley, Ruhr University Bochum)
  • Fabio De Gaspari (Sapienza University of Rome)
  • Giacomo Quadrio (University of Padova)
  • Giorgio Piras (University of Cagliari)
  • Giorgio Severi (Northeastern University)
  • Giovanni Apruzzese (University of Liechtenstein)
  • Giulio Rigoni (University of Padua)
  • Hari Venugopalan (UC Davis)
  • Ilia Shumailov (University of Oxford)
  • Javier Carnerero Cano (Imperial College London)
  • Kathrin Grosse (EPFL)
  • Kexin Pei (Columbia University)
  • Lorenzo Cavallaro (University College London)
  • Luca Demetrio (Università degli Studi di Genova)
  • Luis Muñoz-González (Imperial College London)
  • Maria Rigaki (Czech Technical University)
  • Matthew Jagielski (Google)
  • Mauro Conti (University of Padua, TU Delft)
  • Pratyusa Manadhata (Meta)
  • Raouf Kerkouche (CISPA Helmholtz Center for Information Security)
  • Sagar Samtani (Indiana University)
  • Sahar Abdelnabi (CISPA Helmholtz Center for Information Security)
  • Sam Bretheim (Craigslist)
  • Sanghyun Hong (Oregon State University)
  • Scott Coull (Google)
  • Shiqi Wang (Amazon)
  • Shrikant Tangade (University of Padova, CHRIST University)
  • Thijs van Ede (University of Twente)
  • Tobias Lorenz (CISPA Helmholtz Center for Information Security)
  • Tom Ganz (SAP SE)
  • Vera Rimmer (DistriNet, KU Leuven)
  • Vikash Sehwag (Princeton University)
  • Vinod Puthuvath (Marie Curie Fellow, Cochin University)
  • Yang Zhang (CISPA Helmholtz Center for Information Security)
  • Yash Vekaria (University of California, Davis)
  • Zied Ben Houidi (Huawei Technologies Co. Ltd.)
  • Ziqi Yang (Zhejiang University)