c0c0n 2026

c0c0n is a 19 years old platform that is aimed at providing opportunities to showcase, educate, understand and spread awareness on Information Security, data protection, and privacy...

Venue & Date

c0c0n 3-Day Professional Training

Practical AI Security: Attacking and Defending LLMs, Agents and MCP

Objective

As organizations rapidly integrate Large Language Models (LLMs) into production systems, AI is no longer confined to passive text generation. Modern deployments increasingly involve autonomous AI agents that reason, make decisions, and invoke tools and APIs via standardized mechanisms such as the Model Context Protocol (MCP). While this evolution unlocks powerful new capabilities, it also introduces security risks that fundamentally differ from traditional application and cloud security models.

This comprehensive, hands-on three-day training provides a deep, practical exploration of attacking and defending AI-driven systems across their full lifecycle—from standalone LLM applications, to autonomous agents, to production-scale tool ecosystems. The course is designed for security professionals who must understand not only how AI systems work, but how they fail under adversarial conditions.

Day 1: Securing the Cognitive Core of LLM Applications

The training begins by reframing LLMs as probabilistic decision engines, not deterministic software components. Participants learn why traditional security assumptions break down when applied to prompt-based systems and how trust boundaries shift from code to prompts, context, and model outputs.

Through a series of hands-on labs, participants attack and defend LLM applications using real-world techniques aligned with the OWASP Top 10 for LLM Applications. Topics include prompt injection (direct and indirect), prompt extraction and reflection attacks, jailbreaking techniques, sensitive data leakage, and Retrieval-Augmented

Generation (RAG) poisoning. Participants build vulnerable systems, exploit them, and then implement layered defenses such as prompt hardening, output validation, semantic checks, and access controls. By the end of Day 1, attendees develop a critical mindset: every input, context source, and output from an LLM must be treated as untrusted.

Day 2: Attacking and Defending AI Agents

Day 2 shifts focus from LLMs as responders to AI agents as autonomous actors. Participants explore how modern agent frameworks enable LLMs to plan, reason, and invoke tools with real-world side effects. The course examines how autonomy introduces new failure modes, including excessive agency, unsafe tool invocation, decision manipulation, and workflow abuse.

Participants build AI agents using common agent patterns and then adopt an attacker mindset to exploit them. Through red-team style labs, they perform prompt injection attacks against agents, abuse overly permissive tools, extract secrets, and trigger unintended actions. The training then pivots to defense, covering secure agent design principles such as least privilege for tools, sandboxing and isolation, approval gates for high-risk actions, robust prompt controls, monitoring, and audit logging.

By the end of Day 2, participants understand how insecure agent design can turn AI systems into operational liabilities—and how to apply defense-in-depth strategies to prevent agents from going rogue in production environments.

Day 3: MCP, Tool Supply Chains, and Production-Grade AI Defense

The final day expands the scope to the broader AI tool ecosystem, with a deep dive into the Model Context Protocol (MCP). MCP is treated as a new AI supply chain, analogous to package managers such as npm or pip, with its own unique attack vectors and trust challenges.

Participants learn how MCP-based systems can be compromised through tool shadowing, impersonation, output poisoning, dependency vulnerabilities, and malicious updates. Hands-on labs demonstrate how attackers can bypass previously implemented LLM and agent defenses by exploiting insecure tool integrations. The course then focuses on building production-grade defenses, including zero-trust tool invocation, capability allowlisting, namespacing, mTLS-based authentication, output validation, and cryptographic verification of tools.

The day concludes with architectural design exercises, where participants design secure MCP gateways and observability pipelines, enabling detection, investigation, and response to AI-specific incidents in real-world environments.

Outcome

By the end of the three days, participants will have built, attacked, and defended complete LLM + Agent + MCP systems. They will leave with a practical, end-to-end understanding of modern AI security, reference architectures for secure deployment, and a mindset shift from “how do we make AI work?” to “how can this system be abused—and how do we stop it?”

Course Content

Day 1 – Securing the Cognitive Core: LLM Application Security and RAG Pipelines
The Shift from Traditional Apps to AI-Driven Systems 9:00 – 9:30
  • Reframing security for probabilistic, context-driven systems.
    • Why traditional AppSec models fail for LLMs
    • Deterministic code vs probabilistic reasoning
    • LLMs as decision engines, not chatbots
    • How injection evolves from syntax to semantic manipulation

Hands-On Exploration Participants interact with a basic LLM to observe how prompt variations affect reasoning and outputs.

Core Architecture of LLM Applications and Introduction to RAG 9:30 –11:00

Understanding prompt pipelines, context assembly, and retrieval-augmented generation.

  • System prompts, user prompts, and context merging
  • Where untrusted input enters the pipeline
  • Retrieval-Augmented Generation (RAG) as external memory
  • Why retrieved content is often treated as trusted input
  • Tool invocation vs text generation

Hands-On Lab – Building a Secure Prompt Pipeline Participants build a simple LLM application that injects tool output into prompts and trace untrusted input through the pipeline. Estimated lab time: 30 minutes

Break : 11:00 – 11:15
Prompt Injection and Jailbreaking Deep Dive11:15 – 12:45

Attacking and defending against prompt-level control hijacking.

  • SystDirect and indirect prompt injection
  • Prompt extraction and reflection attacks
  • Jailbreaking techniques: role-play, authority bypass, hypothetical scenarios
  • Why “prompt engineering” is not a security control

Hands-On Lab – Exploiting Prompt Injection Participants attack a vulnerable chatbot to extract system prompts, bypass safeguards, and trigger unsafe outputs. Lab time: 45 minutes

Defense Strategies

  • Prompt hardening and delimiters
  • Output filtering and semantic validation
  • Sandboxing and containment

Hands-On Lab – Defending Against Prompt Injection Participants harden the chatbot and re-run attacks to validate defenses. Lab time: 30 minutes

Lunch Break : 12:45 – 1:45
Sensitive Data Disclosure and RAG Supply Chain Attacks 1:45 – 3:15

Protecting secrets in prompts, outputs, and retrieval pipelines.

  • Sensitive data in LLM context
  • Leakage via outputs and tool responses
  • Data poisoning and malicious embeddings
  • Compromised knowledge bases

Hands-On Lab – Extracting Secrets from LLM Responses Participants exploit an LLM system with access to sensitive data to force disclosure. Lab time: 45 minutes

Defense Strategies

  • Input sanitization
  • Output filtering and PII detection
  • Access control and data classification

Hands-On Lab – Implementing Output Filtering Participants deploy controls to prevent data leakage. Lab time: 30 minutes

Break : 3:15 – 3:30
The Gauntlet: OWASP LLM Top 10 Rapid-Fire Challenges 3:30 – 5:00

Live attack-and-defense walkthroughs.

  • Data and model poisoning
  • Improper output handling
  • Excessive agency (preview for Day 2)
  • System prompt leakage
  • Vector and embedding weaknesses
  • Misinformation and hallucination risks

Hands-On Lab – Extracting Secrets from LLM Responses Participants exploit an LLM system with access to sensitive data to force disclosure. Lab time: 45 minutes

Case Study & Day 1 Debrief 5:00 – 5:30
  • Real-world LLM compromise analysis
  • Key takeaway: the cognitive core is inherently untrusted
Day 2 – Securing AI Agents: Autonomy, Tool Abuse, and Defense 9:00 AM – 5:30 PM
Recap and Phase Transition 9:00 – 9:15
  • From LLM responders to autonomous actors.
Introduction to AI Agents and Agent Frameworks 9:15 – 10:45

Understanding agent loops and decision-making.

  • What makes an AI system an agent
  • Sense–Plan–Act loops
  • Overview of agent frameworks
  • Prompting vs tool invocation

Hands-On Lab – Building a Simple AI Agent Participants build an agent that selects and invokes tools. Lab time: 30 minutes

Break : 10:45 – 11:00
Agentic RAG and Workflow Design 11:00 – 12:30
  • Iterative retrieval and multi-step reasoning
  • Risks of agentic RAG
  • Sensitive data exposure via chained actions

Hands-On Lab – Secure Agentic RAG Participants build and harden an agentic RAG workflow. Lab time: 45 minutes

Lunch Break : 12:30 – 1:30
Threat Modeling AI Agents1:30 – 3:00

Adapting threat modeling for autonomous systems.

  • Assets, trust boundaries, and abuse cases
  • Excessive agency as a design failure
  • Story-driven threat modeling

Hands-On Lab – AI Agent Threat Modeling Participants threat model an AI agent workflow using assisted techniques.Lab time: 45 minutes

Break : 3:00 – 3:15
Attacking AI Agents: Red Team Perspective 3:15 – 4:45
  • Prompt injection in agents
  • Tool misuse and API abuse
  • Decision manipulation and infinite loops
  • Reflection attacks

Hands-On Lab – Red Team an AI Agent Participants exploit a vulnerable agent to extract secrets and misuse tools.Lab time: 60 minutes

Day 2 Debrief 4:45 – 5:30
  • Why autonomy turns bugs into incidents
  • Preparing for ecosystem-level attacks
Day 3 – MCP, Tool Supply Chains, and Production-Grade AI Defense 9:00 AM – 5:00 PM
Understanding the Model Context Protocol (MCP) 9:00 – 10:30
  • MCP architecture: Host, Client, Server
  • MCP as an AI supply chain
  • New trust boundaries
Break - 10:30 – 10:45
Breaking MCP: Attack Techniques 10:45 – 12:30
  • Tool shadowing and impersonation
  • Output poisoning
  • Dependency and update-based attacks

Hands-On Lab – Exploiting MCP Tools Participants exploit insecure MCP integrations. Lab time: 60 minutes

Lunch Break - 12:30 – 1:30
Defending MCP Deployments 1:30 – 3:00
  • Zero-trust tool invocation
  • Tool allowlisting and capability scoping
  • Namespacing and isolation
  • mTLS and authentication
  • Cryptographic verification of tools

Hands-On Lab – Hardening MCP Systems Participants deploy defenses and re-test attacks. Lab time: 75 minutes

Break - 3:00 – 3:15
Production Architectures and Observability 3:15 – 4:45
  • MCP gateway (“AI firewall”) pattern
  • Logging and anomaly detection
  • Incident response for AI systems

Hands-On Lab – Designing a Secure MCP Gateway Participants design a production-grade MCP gateway. Lab time: 60 minutes

Final Wrap-Up and Takeaways 4:45 – 5:00
  • End-to-end attack chains across LLMs, agents, and MCP
  • Secure reference architectures
  • From experimentation to production defense
Who should attend:

This workshop is designed for security and engineering professionals who are building, deploying, or securing AI-driven systems in production environments, including:

  • Application Security Engineers and Analysts
  • DevSecOps and Security Automation Engineers
  • Security Architects and Security Engineers
  • Cloud Security Engineers responsible for LLM platforms and AI services
  • Platform and Infrastructure Engineers managing AI tooling and MCP deployments
  • Red Team and Offensive Security professionals exploring AI-specific attack surfaces
  • Product Security and Engineering leaders responsible for securing AI-powered features

This training is best suited for practitioners who want hands-on, offensive and defensive experience securing LLMs, AI agents, and AI tool ecosystems, rather than high-level or theoretical AI discussions.

What to expect
  • A deep, hands-on exploration of real-world security vulnerabilities in LLM-powered applications, AI agents, and MCP-based tool ecosystems
  • Offensive and defensive labs covering prompt injection, jailbreaking, RAG poisoning, excessive agency, tool misuse, and AI supply-chain attacks
  • Step-by-step guidance on building, exploiting, and hardening LLM applications and autonomous AI agents
  • Practical exposure to production-grade defensive patterns, including prompt hardening, output validation, agent sandboxing, capability scoping, and zero-trust tool invocation
  • Hands-on experience securing Model Context Protocol (MCP) deployments and defending against tool shadowing, impersonation, and malicious updates
  • Cloud-hosted lab environments with realistic scenarios, enabling participants to focus on learning rather than setup
  • Actionable takeaways, reference architectures, and design patterns that can be directly applied to real-world AI deployments
What not to expect
  • A theoretical or academic introduction to artificial intelligence or machine learning
  • Data science–focused model training, tuning, or statistical analysis of ML algorithms
  • Generic prompt-engineering tips presented as security solutions
  • Vendor-specific product demos or sales-driven tooling walkthroughs
  • Fully automated “push-button” security solutions without understanding underlying risks
  • Beginner-level security content or high-level AI overviews without hands-on depth
What Students Should Bring:
  • A laptop with a modern browser
  • Ensure that you don’t have enterprise firewalls restricting access.
What Students Will Be Provided With:
  • Lab instructions and setup guides
  • Slides and speaker notes
  • Hands-on lab environments
  • Code samples and reference architectures
  • 1 Month access to the training portal where they can redo the labs and learn at their own pace.

Trainer

Vishnu Prasad K

Principal DevSecOps Solutions Engineer

we45