Registration: 08:00 – 17:00 each day at Level 1, Hilton Surfers Paradise. Registration opens at 07:30 on Day 1.

Keynote & Session A: The Grand Ballroom, Level 1, Hilton Surfers Paradise
Session B: The Promenade Room, Level 1, Hilton Surfers Paradise

Day 1

| Time | Session |
| --- | --- |
| 08:30 - 09:00 | Opening Remarks |
| 09:00 - 10:00 | Keynote I: Battista Biggio, “Wild Patterns: Twenty Years of Attacks and Defenses in Machine Learning Security”. Abstract: Over the past two decades, machine learning security has evolved through a continuous arms race between attacks and defenses. From early evasion and poisoning attacks on spam and malware detectors to the rise of adversarial examples, researchers have repeatedly exposed the fragility of modern AI models. Despite notable progress, no bulletproof defense has emerged, and many countermeasures have proven ineffective against more sophisticated attacks. In this talk, I will provide a historical perspective on adversarial machine learning—from the initial, simplistic perturbation models to more complex attacks in security-related tasks, up to today’s challenges involving large language and foundation models. I will critically examine the key factors still hindering progress, including the lack of a systematic and scalable framework for evaluating models under adversarial and out-of-distribution conditions, and the need for better debugging tools to uncover common evaluation flaws, dataset biases, and spurious correlations. I will present recent results from our laboratory addressing some of these limitations and discuss how integrating AI as a component within complex, well-engineered systems may foster the development of more resilient and trustworthy intelligent technologies. Bio: Battista Biggio (MSc 2006, PhD 2010) is a Full Professor of Computer Engineering at the University of Cagliari, Italy, and research co-director of AI Security at the sAIfer lab (www.saiferlab.ai). He has been attacking machine-learning (ML) models since well before adversarial examples were even discovered, in the context of cybersecurity-related applications like spam filtering, malware detection, web security, and biometric recognition (PRJ 2018). His team was the first to formalize attacks on ML models as optimization problems and to demonstrate gradient-based evasion (ECML-PKDD 2013) and poisoning (ICML 2012) attacks on machine-learning algorithms, playing a leading role in the establishment and advancement of this research field. His seminal paper on “Poisoning Attacks against Support Vector Machines” won the 2022 ICML Test of Time Award. His work on “Wild Patterns” won the 2021 Best Paper Award and Pattern Recognition Medal from Elsevier Pattern Recognition. Prof. Biggio has managed several industrial, national, and EU-funded projects, and regularly serves as Area Chair for top-tier conferences in machine learning and computer security, such as NeurIPS and the IEEE Symposium on Security and Privacy. He is an Associate Editor-in-Chief of Pattern Recognition and chaired IAPR TC1 (2016-2020). He is a Fellow of IEEE and AAIA, a Senior Member of ACM, and a member of IAPR and ELLIS. |
| 10:00 - 10:30 | Tea Break | 
| 10:30 - 12:00 | |
| 12:00 - 13:30 | Lunch Break | 
| 13:30 - 15:00 | |
| 15:30 - 16:00 | Tea Break | 
| 16:00 - 16:55 | |
| 18:00 | Poster Session and Reception at Sky Point | 

Day 2

| Time | Session |
| --- | --- |
| 09:00 - 10:00 | Keynote II: Toby Murray, “Attacking and Verifying Certified Robustness for Neural Networks”. Abstract: The fastest way to get your software hacked is to claim it is unbreakable. Yet humanity has produced a few rare systems demonstrably resistant to attack, such as the formally verified seL4 microkernel. Can we hope for similar assurance for today’s most exciting software — machine-learning models? In this keynote, I will argue why ML models resist correctness verification. Much research has instead focused on verifying non-functional properties, like robustness. Unfortunately, these approaches face seemingly inherent scalability challenges. I will present our alternative: verified certified robustness, in which we built a formally verified robustness certifier for neural networks. I will show why verified certification is important by exposing implementation flaws in existing, unverified certifiers. Finally, I will conclude with our recent discovery of subtle floating-point exploits against our own verified certifier. In doing so I hope to underscore not just the promise but also the open challenges of verified certified robustness: challenges I invite the community to address with us. Bio: Toby Murray first got hooked on computer security at high school, when he was suspended for hacking the school’s computers. That early curiosity turned into a career after a stint as a graduate with the Department of Defence, and was super-charged during his D.Phil. at Oxford. Today he is a Professor in the School of Computing and Information Systems at the University of Melbourne, where he has led the School’s cybersecurity research, teaching, and engagement activities. He also serves as Director of the Defence Science Institute. His work on rigorously secure systems has been recognised with numerous awards, including the Eureka Prize for Outstanding Science in Safeguarding Australia and the ACM Software System Award. |
| 10:00 - 10:30 | Tea Break | 
| 10:30 - 12:00 | |
| 12:00 - 13:30 | Lunch Break | 
| 13:30 - 17:00 | Social Event at Sea World (bus pick-up at the hotel at 13:30) |
| 17:15 | Conference Banquet & Award Ceremony at Sea World Resort Conference Centre | 
| 20:00 | Return to hotel (bus pick-up at the banquet venue) |

Day 3

| Time | Session |
| --- | --- |
| 10:00 - 10:30 | Morning Tea |
| 10:30 - 12:00 | |
| 12:00 - 13:30 | Lunch Break | 
| 13:30 - 15:00 | |
| 15:00 - 15:30 | Tea Break | 
| 15:30 - 16:25 | |
| 16:25 - 16:30 | Closing Remarks |