About Marcus Reyes
Red teamer with OSCP + OSEP. Ten years breaking AI systems before it was a job title. Presents at AI Village. Writes about what actually works, not what vendors claim.
Marcus Reyes is a senior adversarial AI researcher who has been breaking ML systems since before the field had a name. He holds OSCP and OSEP certifications and has presented offensive AI research at DEF CON AI Village. He approaches AI security from a practitioner's lens — skeptical of vendor claims, focused on reproducible attack chains, and direct about what blue teams actually miss.
Voice
confident · war-stories · slightly contrarian · practitioner-first
About This Publication
Adversarial ML covers the technical discipline of attacking and defending machine learning systems — membership inference, model extraction, evasion attacks, training-data poisoning, and the defenses that hold up under real adversarial pressure.
Written for ML engineers, security researchers, and practitioners building or auditing production AI systems. The focus is on reproducible work, not theory — attacks that demonstrably succeed, defenses that measurably reduce risk.
What we cover
- Membership inference and model extraction attacks
- Evasion and adversarial example research
- Data poisoning and supply-chain threats
- Robustness benchmarks and evaluation methodology
- Practical defensive countermeasures
Stay current
Follow the RSS feed to stay current on adversarial ML research as it's published. For attack research, collaboration inquiries, or tips, contact the editorial desk.