Your Cart
Loading


Attacking AI (Live August 18th and 20th)

Times:

  • August 18 - 10am-5pm MST
  • August 20 - 10am-5pm MST


(Note: the course will be recorded and distributed to students after completion.)


Over the past few years, I've immersed myself in the intersection of offensive security and artificial intelligence, turning curiosity into actionable insights and practical methodologies. This journey has evolved into talks, research papers, and now—this course. Attacking AI isn't theoretical fluff; it's built on extensive hands-on experience, real-world consulting engagements, and deep technical research into attacking and defending AI systems. Throughout this training, you'll uncover precisely how AI systems can be compromised, how to assess their vulnerabilities methodically, and how to strengthen defenses against emerging threats. If you're an offensive security professional, defender, or a technical leader looking to master the cutting edge of AI cybersecurity, this course is your next step.



Course Details


  • Format: Hybrid of lectures, interactive discussions, and hands-on labs
  • Prerequisites: Intermediate cybersecurity knowledge and a basic understanding of AI concepts
  • Class Collaboration: Participants will have access to private Discord channels shared with the "Red Blue Purple AI" course for discussion and resource sharing.


Note: This syllabus is subject to updates, as AI security is a rapidly evolving field.


Purchase!

$2,000

Syllabus

Module 1: The AI Gold Rush

  • Understanding rapid AI adoption and its cybersecurity implications
  • Exploring key industries integrating AI (finance, healthcare, gaming, automotive) and their unique risks
  • Analysis of traditional security vulnerabilities prevalent in AI-driven applications (e.g., input validation, authentication, authorization)

Module 2: Common AI Architectures and Ecosystem Risks

  • Deep dive into the AI development pipeline: model selection criteria, training procedures, deployment strategies
  • Infrastructure components overview: cloud platforms (AWS, Azure, GCP), AI-specific APIs, agentic architectures (autonomous AI agents)
  • Role and security challenges of Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, and autonomous AI agents
  • Risks associated with third-party AI tools, open-source models, datasets, and managing AI supply-chain security

Module 3: Understanding AI Threat Modeling

  • Introduction to common AI security threat models (data poisoning, adversarial attacks, inference attacks)
  • Specific methodologies for threat modeling in LLM-based and image-based AI systems
  • Practical group exercise: Threat modeling a real-world enterprise AI deployment scenario

Module 4: Introduction to Prompt Injection

  • Defining prompt injection and differentiating between fuzzing (gradient-based) and logical injection
  • Technical overview of LLM prompt processing, attention mechanisms, and model limitations
  • Basic prompt manipulation techniques (injection of malicious commands, logical constraints bypass)
  • Case studies illustrating real-world prompt injection vulnerabilities

Hands-on Lab: Crafting Prompt Injection Attacks

  • Step-by-step guidance for crafting effective prompt injection scenarios
  • Exercises targeting various public and custom LLM endpoints
  • Discussion on practical defense mechanisms against prompt injection

Module 5: LLM Jailbreaking for Security Professionals

  • Comprehensive overview of LLM jailbreak methods
  • Detailed review of notable jailbreak cases (ChatGPT, Claude, Gemini)
  • Practical considerations and implications of jailbreak attacks

Module 6: Privacy and Ethical Considerations

  • Ethical hacking boundaries specific to AI
  • Privacy implications and compliance considerations (e.g., GDPR)
  • Responsible disclosure practices for AI vulnerabilities

Module 7: AI Red Teaming Methodologies

  • Examination of current industry approaches and best practices in AI red teaming
  • Identification of key organizations and leaders driving AI security testing advancements
  • Case studies showcasing red teaming scenarios in diverse AI ecosystems

Module 8: Attacking AI-Integrated Applications

  • Deep dive into vulnerabilities and risks associated with AI-powered APIs
  • Detailed exploration of API security considerations specific to LLM-integrated systems
  • Real-world scenarios illustrating successful exploits of AI-integrated applications

Module 9: MITRE ATLAS & OWASP AI Top Ten

  • Detailed walkthrough of MITRE’s ATLAS framework tailored for AI adversarial attacks
  • Breakdown of OWASP's Top 10 security vulnerabilities specific to LLM-based applications

Module 10: Emerging Attack Techniques and Research

  • Overview of cutting-edge academic and industry research in AI security
  • Techniques for developing and innovating AI testing methodologies
  • Resources for continuous learning: key academic papers, blogs, and repositories

Module 11: Defensive Countermeasures and AI Hardening

  • Comprehensive strategies for strengthening AI systems against attacks
  • Techniques such as adversarial training, input sanitization, anomaly detection
  • Defensive tooling specifically designed for AI environments
  • Incident response and forensics tailored for AI-specific incidents

Module 12: The Arcanum LLM Assessment Methodology

  • Introduction and step-by-step guide to Arcanum's structured methodology for assessing AI security
  • Detailed checklist, best practices, and actionable guidelines for AI penetration testers

Closing Discussion & Practical Review

  • Final Q&A session addressing complex scenarios from course material