"The transition from deterministic code to Agentic AI has created a massive security vacuum. I am building the bridges and the guardrails for that future."
I spent the early part of my career as a cybersecurity enthusiast and developer, building in isolation and navigating the "unemployed developer" label. For a long time, I felt like I was catching up.
But 2026 changed the math. The shift to Agentic AIโwhere models don't just talk, but actโhas made traditional security obsolete. I realized my background in security and my obsession with "how things break" wasn't a hobby; it was the foundation for the most critical field of this decade: AI Red Teaming.
I am currently in a self-imposed "AI Safety Residency," deep-diving into the intersection of LLM vulnerabilities and autonomous agents. I don't just build AI; I stress-test it against the world.
- AI Red Teaming: Automated stress-testing for Prompt Injection, Data Poisoning, and Jailbreaking.
- Agentic Workflows: Developing multi-agent systems using LangGraph and MCP (Model Context Protocol).
- AI Governance: Mapping model outputs to OWASP Top 10 for LLMs and NIST AI RMF.
- Defense-in-Depth: Implementing PII masks, output guardrails, and adversarial detection layers.
๐ Featured Project: AI Red Teaming Toolkit (ART-T)
My flagship project designed to automate the discovery of vulnerabilities in LLM-based applications.
- Core Goal: Move beyond "manual prompting" to Automated Adversarial Evaluation.
- Key Tech: Python 3.11, AsyncIO, Vector DB Security, and LLM-as-a-Judge architecture.
- Phase 1: Traditional Cybersecurity Foundations & Automation scripts.
- Phase 2 (Current): Mastering Agentic AI Workflows (DeepLearning.ai Residency).
- Phase 3: Integration of Industry-Standard Evals (Giskard/DeepEval).
- Phase 4: Open-Source Contribution to Global AI Safety Frameworks.
- Languages: Python (Advanced), SQL, Bash, JavaScript, etc.
- Security: Penetration Testing, Blockchain Security, AI Red Teaming.
- AI/ML: RAG 2.0, Prompt Engineering, Agentic Design Patterns.
I am looking to collaborate with teams at the forefront of AI Alignment, Safety, and Engineering. If you believe AI should be as secure as it is intelligent, letโs talk.
- GitHub: @lucasmulato
- LinkedIn: in/lucasmulato
- Status: Open for specialized AI Security / Engineering roles.
"Impostor syndrome is just the gap between where you are and where you refuse to stop."

