AI Security Laboratory: Hands-On + Full-Stack (Lifetime Lab Access)
Overview
AI has introduced an entirely new layer of security risk — one that needs to be understood from both attacker and builder perspectives. This training is a hands-on, full-stack guide to that landscape, showing how modern AI systems are attacked, built, and used in real-world security.
You will work through the offensive side of AI security with prompt injection, jailbreaking, and fuzzing of LLM applications. You will also apply AI in security engineering and daily operations — turn AI building blocks into practical workflows and use AI security tools for everyday tasks.
Along the way, you will move into more advanced capabilities — building agentic AI for real-time security operations, applying AI to vulnerability research and PoC development, and exploring how smarter AI can assess other AI.
The training includes hands-on exercises, reusable Python scripts, and lifetime lab access — so you can continue practicing and applying what you learned long after the class ends.
Key Learning Objectives
– prompt injection: direct and indirect
– LLM jailbreaking
– fuzzing LLM applications
– AI-powered shell
– advanced prompting
– local LLMs / private AI
– AI programming
– AI attack detection
– OpenAI models and API
– embeddings
– quantization
– LLM Guard
– building agentic AI
– creating your own prediction model
– CVE research / PoC development with AI
– smarter AI assessing other AI
– specialized AI security tools
– and more …
Topics Covered
1) AI/LLM attack vectors, including various forms of prompt injection and LLM jailbreaking techniques
2) AI programming for security practitioners — working with local LLMs / private AI and cloud models (OpenAI / API), and building AI workflows for security use cases (e.g. fully private AI setups)
3) AI attack detection, including the use of local LLMs, anonymizing data before sending it to cloud LLM providers, and applying open-source defenses such as LLM Guard
4) Fuzzing LLM applications, which differs from traditional fuzzing due to the non-deterministic nature of modern LLMs
5) Using AI in security practitioners‘ daily operations — including AI-powered shell, advanced prompting, and AI security tools
6) Ready-to-use Python scripts, providing hands-on experience and reusable AI building blocks for daily security tasks
7) Smarter AI assessing other AI, along with interesting AI techniques and projects for security practitioners
8) CVE research and PoC development with AI
9) Building agentic AI for real-time security operations
10) and more …
What Students Will Receive
Students will be given a VMware image with a specially prepared lab environment to work on many topics and exercises in this training. When the training is over, students can take the lab environment home (after signing a non-disclosure agreement) to continue practicing at their own pace.
What Students Say About My Trainings
Recommendations are available on my LinkedIn profile (https://www.linkedin.com/in/dawid-czagan-85ba3666/) – training participants from companies such as Oracle, Adobe, ESET, ING, Red Hat, Trend Micro, Philips, the government sector, and others.
What Students Should Know
Students should have a general understanding of application security and some experience with web technologies and APIs. Basic familiarity with programming or scripting, security testing practices, and working with the command line is recommended.
What Students Should Bring
Students will need a laptop with 64-bit operating system, at least 16 GB RAM, 120 GB free hard drive space, administrative access, ability to turn off AV/firewall and VMware Player/Fusion installed (64-bit version). Prior to the training, make sure there are no problems with running x86_64 VMs. You will need an OpenAI API key (required). A Lakera API key is optional.
As AI is an evolving field, additional requirements may be shared ahead of the training if needed.

