Adversarial ML
Your morning AI security briefing.

Working adversarial ML — exploits, defenses, and the gap between.

Adversarial ML coverage for engineers shipping ML systems. Membership inference, model extraction, evasion attacks, training-data extraction, backdoors — focused on what's exploitable against deployed models and what defenders can actually do about it. PoCs against open models, behavioral analysis for closed ones.

Lead

Certified Robustness via Randomized Smoothing: What 'Certified' Actually Guarantees

Randomized smoothing gives you a provable robustness radius. Understanding what that certificate means in practice — and where it breaks — is more useful than the headline number.

Read briefing
[Image: Certified robustness radius visualization with randomized smoothing]

Today's briefing

Subscribe

Adversarial ML — in your inbox

Working adversarial ML: exploits, defenses, and the gap between, delivered when there's something worth your inbox.

No spam. Unsubscribe anytime.