c0c0n 2026

c0c0n is a 19-year-old platform aimed at providing opportunities to showcase, educate, understand, and spread awareness of Information Security, data protection, and privacy...

Venue & Date

c0c0n 3-Day Professional Training

Creating Custom Tooling for Offensive Security after 2025 - Beyond Autopilot Pentesting

Objective

Up until 2025, building exploits and offensive tooling required substantial time and deep engineering knowledge. In 2026, AI automation, including Claude Code, Opus 4.X, and similar agents, has taken over much of that engineering work, producing results at a pace that was unthinkable two years ago.

But looking under the hood, the real differentiator is not prompting skills or model selection. It comes down to two things: defining precise goals and guidance for the AI, and building custom tools and workflows the AI can actually use. The second point is massively underrated. Many of the impressive results people attribute to "AI" are really about plugging agents into well-designed pre-AI era tooling and scaling its application. The next level involves creating specialized, potentially single-use tool harnesses that make your AI agents genuinely powerful, for vulnerability discovery, variant analysis, PoC development, and scaling attacks beyond standard autopentest slop.
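To make the idea of a single-use tool harness concrete, here is a minimal sketch in Python. The tool name, schema shape, dispatcher, and lab IP range are illustrative assumptions for this example, not any specific vendor's agent API: the point is that the harness, not the model, enforces what the agent can do.

```python
# Minimal sketch of a single-use tool harness for an LLM agent.
# Tool name, schema layout, and the 10.13.37.0/24 lab range are
# illustrative assumptions, not a specific vendor API.
import subprocess

TOOLS = {
    "run_nmap_scan": {
        "description": "Run an nmap service scan against a single lab host.",
        "parameters": {"host": "IPv4 address inside the lab range"},
    }
}

def dispatch(tool_name: str, arguments: dict) -> str:
    """Execute a tool call requested by the agent and return its output."""
    if tool_name != "run_nmap_scan":
        return f"unknown tool: {tool_name}"
    host = arguments["host"]
    # Guardrail lives in the harness: only hosts in the dedicated lab
    # network are ever scanned, regardless of what the agent asks for.
    if not host.startswith("10.13.37."):
        return "refused: host outside lab range"
    result = subprocess.run(
        ["nmap", "-sV", host], capture_output=True, text=True, timeout=300
    )
    return result.stdout

# The agent sees the TOOLS description and emits calls like this one;
# the harness refuses it because the host is outside the lab range.
call = {"tool": "run_nmap_scan", "arguments": {"host": "192.0.2.1"}}
print(dispatch(call["tool"], call["arguments"]))  # → refused: host outside lab range
```

The design choice worth noting: scoping and safety checks belong in the harness code, where they are deterministic, rather than in the prompt, where they are merely suggestions.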

In this training we cover practical offensive security applications of custom AI tooling: finding vulnerabilities, creating PoCs quickly, performing variant analysis, and scaling both human-developed and AI-developed attacks. We address LLM backend options, from convincing cloud providers to support legitimate offensive use cases to running local models with custom-coded agents or standard clients. Critically, we cover verification: AI output differs in type, volume, and reliability, and getting this under control is the single most important skill to develop. By applying verification layers such as tightly controlled breach and attack simulation infrastructure, we can profit from the positive side of AI while keeping the drawbacks in check!
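The verification idea above can be sketched as a small gate that accepts an AI-reported finding only if its PoC actually reproduces. The finding format and the harmless `echo` stand-in target are invented for illustration; in a real setup the PoC would be replayed against controlled breach-and-attack-simulation infrastructure.

```python
# Hedged sketch of an automatic verification layer for AI findings:
# a finding is accepted only if re-running its PoC produces the
# evidence string it claims. Format and target are illustrative.
import subprocess

def verify_finding(finding: dict) -> bool:
    """Re-execute the PoC command and require the claimed evidence."""
    proc = subprocess.run(
        finding["poc_command"],
        capture_output=True, text=True, timeout=60,
    )
    return finding["expected_evidence"] in proc.stdout

# Example: a trivially reproducible "finding" using echo as the target.
finding = {
    "title": "demo finding with reproducible evidence",
    "poc_command": ["echo", "VULN-MARKER-1337"],
    "expected_evidence": "VULN-MARKER-1337",
}
print(verify_finding(finding))  # → True
```

A hallucinated finding whose PoC never emits the claimed evidence is rejected by the same check, which is exactly the volume-control property the training emphasizes.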

Key Takeaways:

  • Applying custom AI tooling and workflows to practical offensive security use cases
  • Setting up a lab environment for AI-assisted security research
  • Creating custom tools and harnesses for AI agents
  • Defining goals and guidance precisely to get useful results
  • Saving tokens by interweaving AI with classical methods
  • Verifying AI results – manually and automatically
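The token-saving takeaway above can be illustrated with a classical pre-filter: cheap pattern matching narrows the candidate lines before any model call is made. The sink patterns and the idea of forwarding only matches to the LLM are illustrative assumptions for this sketch.

```python
# Sketch: interweave AI with classical methods to save tokens.
# A grep-style pass selects lines with classic dangerous C sinks;
# only those candidates would be sent to the (costly) LLM.
import re

SINK_PATTERNS = re.compile(r"\b(strcpy|system|popen|sprintf)\s*\(")

def select_candidates(source: str) -> list[str]:
    """Return only lines containing classic dangerous sinks."""
    return [line for line in source.splitlines()
            if SINK_PATTERNS.search(line)]

code = """
int main(int argc, char **argv) {
    char buf[16];
    strcpy(buf, argv[1]);   /* candidate */
    puts(buf);              /* not a candidate */
    return 0;
}
"""
candidates = select_candidates(code)
print(candidates)  # only the strcpy line survives the pre-filter
```

Everything filtered out here costs zero tokens; the model's context is spent only on lines a classical tool already flagged as interesting.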

Target Audience:

Pentesters, security researchers, and red teamers with existing offensive security experience who want to effectively integrate AI into their workflow.

Trainers

Markus Vervier

Security Researcher