c0c0n 2026

c0c0n is a 19 years old platform that is aimed at providing opportunities to showcase, educate, understand and spread awareness on Information Security, data protection, and privacy...

Venue & Date

c0c0n 3-Day Professional Training

AI Security Engineering Masterclass

Objective

We take a Build, Break, Defend approach to learn about AI Security. We start from understanding how full stack AI Apps and agents are built; dive into concepts like MCPs, A2A, Agentic architecture, RAG, Vector DBs and build a full stack security agent.

Once you understand how these systems work, we shift to the Break phase - uncovering real-world security vulnerabilities through hands-on labs using custom-built agents and AI apps we built for the course.

Finally, we move to Defend, where you’ll fix the vulnerabilities in code or architecture.

Course Description

Before you can secure or break AI apps, you need to understand how it’s built. Below is our Build, Break, Defend approach.

Build - Full Stack Security Agent

Our AI Security Training is a hands-on training focused on learning AI security from first principles and with an engineering mindset. We heavily focus on building a fundamental understanding of how real-world GenAI applications are built, based on our experience working with AI-native engineering teams.

We will use hands-on labs to interact with LLM APIs, then going deep into embeddings, VectorDBs, RAG, Agentic systems, MCPs, Langsmith etc and essential tooling around them—all with real-world examples and labs.

Once we understand do labs around these concepts, we will actually go ahead and build our own threat model agent.

Break - Offensive technique, tooling and threat model AI Stack

We then dive into the offensive component with real-world apps in our labs. Some examples of the labs we cover in the course include:

  • Diving into classic Prompt Injection and Indirect Prompt Injection attacks using our Email Assistant bot we've built
  • Sensitive Information Disclosure and Authorization issues
  • MCP attacks — We will build MCP Servers (Local/Remote), SSE vs stdio, and then go into MCP attacks using custom MCP servers we built
  • Attacks in Agentic architecture
  • Model Backdoors — Real-world backdoor example from Hugging Face; learn how adversaries embed hidden behavior into AI models
  • Threat Model AI Application workflows and how to think about the application layer when combined with LLMs
Defend - Practical tools and techniques

We will then go over practical techniques—covering both tools and architecture-level thinking on how to secure AI applications:

  • Practical defense techniques using our labs
    • inline LLM guardrails
    • MCP Gateways for observability and detection
  • Go over each attack we demonstrated and fix them at App layer or make architecture changes
  • Agentic Security Architecture
  • We will look at AI Security Tooling and how you can implement them in SDLC - this includes during code generation and implementing tooling to detect bugs at scale

Course Content

Part I – Build
Chapter 1: Intro to course, labs and GenAI
  • Gen AI Intro
    • From Traditional AI/ML Models to Generative AI - How is GenAI different from ML and what has changed?
      • What is Generative AI?
      • What are LLMs? How are Models trained with public data, Neural Networks etc.
      • Open Source vs Closed Source LLMs
    • Real world vertical AI apps - case study of Harvey in legal, Xbow in Offensive Security - This is to show how vertical AI apps are built to solve a problem in a specific industry
    • We will then go ahead and set up our labs for rest of the course
      • How to Use This Course and Set Up Your Lab
      • The Hello World of AI: Build Your First LLM App
  • Labs:
    • Lab 1.1: Setting up your lab environment
    • Lab 1.2: Write a simple full stack AI application
Chapter 2: Leveraging Internal Data in LLMs and creating your own security chatbot
  • We then talk about the problem with LLMs - that they are trained on public data and how we can leverage RAG architecture to provide internal context. We start with Embeddings -> VectorDB -> Retrieval -> Finally create an internal Security Chatbot with RAG architecture
    • A Practical Introduction to Embeddings
    • What are Embeddings?
    • How Embeddings Help LLMs Understand Internal Data
  • Storing Embeddings in Vector Databases
  • Querying Embeddings for AI Applications
Labs:
  • Lab 2.1: Generate embeddings for a small text dataset
  • Lab 2.2: Store embeddings in a vector database (FAISS)
  • Lab 2.3: Perform similarity search against stored embeddings
  • Lab 2.4: Build a basic internal Security Chatbot with RAG architecture
Chapter 3: LangChain and LangSmith
  • We want our students to learn about tools like LangSmith which developers use for observability - this is done intentionally so we introduce the entire AI ecosystem which engineers use, so when conducting security reviews they understand in depth
  • Introduction to LangChain
  • Introduction to LangSmith
  • Introduction to LangSmith
  • Building with LangChain Tools and Workflows
  • Talk about risk with using platforms like LangSmith and data leakage
Labs:
  • Lab 3.1: Build a LangChain pipeline with LLM + prompt templates
  • Lab 3.2: Debug and monitor applications with LangSmith
Chapter 4: Building Agents and creating your own security scanning agent
  • What are Agents? How Does an Agent Work? (Think → Act → React → Observe → Loop)
  • What Are Tools/Function Calls?
    • We will build a simple Tool and integrate with LLM
  • We will then build our web scanning Agent
Labs:
  • Lab 4.1: We will build a security scanning agent
Chapter 5: Build your own AI threat model Agent
  • We now use all the above concepts we have learned to build our AI Threat Model App - App can take a design document and perform a full threat model. We will use RAG architecture to use internal Security Best Practices and enhance the recommendation which LLM gives.
    • Understand the threat model application you are building
    • Components & Data Flow
  • This will also introduce IDEs like Cursor and how they can speed up development
Labs:
  • Lab 5.1: Build End to End threat model tool
Part II – Break
Chapter 6: Attack Surface Overview
  • We will go over the entire LLM ecosystem and show the attack surface possible
  • We then do a threat model exercise on a real world app
    • This is to show what are the practical attacks possible on real world GenAI apps
  • Go over how to approach security assessment, don’t just jump straight into performing a penetration test. Instead, start with detailed threat modeling sessions alongside the engineering team to fully understand what they are building. Once you have that context, map out the potential attack scenarios and then plan your security assessment accordingly.
  • Labs:
    • Lab 6.1: Threat model exercise for HelpDesk AI app - Go over the components, workflows, threats and how to approach a security assessment for AI-first app
Chapter 7: Prompt Injection Labs
  • We built our own Essay Evaluator app for this. We will go over the architecture, discuss how such a flow can be abused using simple prompt injection
Labs:
  • Lab 7.1: Perform direct prompt injection on an essay app
Chapter 8: Authorization Problems in Agentic Architecture
  • We built an internal IT desk application which we will use in the labs. We will show how we can leverage prompt injection style attacks because of bad application flows to get information about another user.
  • We show code-level flaw due to which such attacks are possible. We focus on why application security still matters, since this lab is more of a permission issue on the application layer
Labs:
  • Lab 8.1: Attack an AI support bot
Chapter 9: Indirect Prompt Injection
  • WThis is a really interesting flow we built, where we have a Personal Assistant agent
  • We show how an attacker can embed a malicious prompt in their email and exfiltrate data
  • We will talk about Agentic Browsers like Perplexity and how Indirect Prompt Injection affects them
Labs:
  • Lab 9.1: Exploiting Personal Assistant Agent
Chapter 10: Model Backdoor
  • Real-World Backdoor Example from Hugging Face. We will take a model from Hugging Face, embed our reverse shell payload
  • We will then show how these malicious models can be uploaded to Hugging Face and triggered upon usage
Labs:
  • Lab 10.1: Lab to backdoor Hugging Face model
Chapter 11: Model Context Protocol (MCP) Attacks
  • What is MCP and the problem it’s trying to solve
  • MCP Architecture - Go over Local vs Remote
  • Attack from two personas
    • When you're a Client using an MCP Server
    • When you're a Provider building or exposing an MCP Server
Labs:
  • Lab 11.1: Build Local/ Remote MCP server
  • Lab 11.2: MCP Tool poisoning attack using a malicious currency converter MCP we built
  • Lab 11.3: Rug Pull attack on MCP on the same currency converter MCP
Part III – Defend
Chapter 12: Introduction to LLM Defense
  • We will go over how you should think of security for AI-first applications.
  • We focus on going over AI application flows and controls you can have in each layer. We also emphasize the intersection between AI <> Traditional AppSec controls which we should be thinking about
    • We go over the entire AI ecosystem and controls which can apply in each layer
  • We also ask the students to take a step back and think about AI security from three aspects
    • If you are a product company - think about how AI is used within your products and threat model them
    • How is AI being used for productivity within your company - Think about IDEs like Cursor, MCP Servers etc. - What are your risks around all these and practical controls we can have
    • How can we leverage AI to improve security - For instance - Can we use Claude Sub Agents for Security Code Review, for Threat Modelling etc.
  • Labs:
    • Lab 12.1: We go over all the offensive labs and fix them in code or apply tooling to mitigate those attacks
Chapter 13: Defense Tooling
  • Multi-model architecture to detect prompt injection
  • We dive into MCP gateways and how we can leverage that for MCP risks
  • We explore Model Scanning tooling
  • We explore tools in the offensive space which can be leveraged for testing
Labs:
  • Lab 13.1: Modelscan to detect Model Backdoor
  • Lab 13.2: Use a multi-model approach to detect prompt injection
  • Lab 13.3: Implement MCP gateway for observability and detection
Chapter 14: AI Security Tooling in SDLC
  • How Software is built now with Agentic IDE
  • AI Threat Model Tool we built and how to leverage that for design reviews
  • How to generate secure code when using these modern agentic IDE
  • Implementing bots for scanning during PR
Labs:
  • Lab 14.1: End to End deployment of AI Tooling to secure different phases of SDLC
Chapter 15: MLSecOps
  • Understanding the AI Supply Chain - Comparing AI and Traditional Supply Chains
  • Mapping the AI Supply Chain Attack Surface
  • Securing the AI Supply Chain
  • Downloading, Training, and Inference with a Hugging Face Model
  • Integrating MLFlow - understand how it fits in the ML lifecycle
  • Model Signing, AIBOM,Dependency Pinning
Labs:
  • Build an End - End AI Security Pipeline
Pre-requisite

  • Familiarity with the Python programming language and being able to write simple scripts.
  • Background in machine learning is not required.

Participant's Requirements

  • Laptop with access to the internet.
    • Hardware Requirements
      • Laptop: Personal laptop recommended (with admin privileges)
      • Memory: 16 GB RAM or higher
      • Storage: Minimum 10 GB free space

What the Trainer Will Provide:

  • Lab environment: Access to custom-built labs
  • API keys to interact with models - students don't need their own subscriptions
  • Course material: Slides, Videos
  • Certificate of completion

Trainers

Harish Ramadoss

Founding Member

Rippling