Adversarial Robustness 360
The open-source Adversarial Robustness Toolbox (ART) provides tools that enable developers and researchers to evaluate and defend machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference.
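To make the first of those threats concrete, here is a minimal sketch of an evasion attack in the fast-gradient-sign style against a plain scikit-learn logistic-regression model. This is a hand-rolled illustration of the idea, not ART's API; in the toolbox itself, attack classes such as FastGradientMethod wrap a fitted model and generate the perturbed inputs for you.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data and a plain classifier standing in for the model under attack.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# For logistic regression, the gradient of the log-loss with respect to the
# input x is (sigmoid(w.x + b) - y) * w, so no autodiff is needed here.
w, b = clf.coef_[0], clf.intercept_[0]
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad = (p - y)[:, None] * w[None, :]

# Evasion step: nudge each input in the direction that increases its loss.
eps = 0.5
X_adv = X + eps * np.sign(grad)

clean_acc = clf.score(X, y)
adv_acc = clf.score(X_adv, y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The perturbation budget eps and the dataset are arbitrary choices for the sketch; the point is that small, targeted input changes sharply reduce accuracy, which is exactly what the toolbox's evasion attacks let you measure on your own models.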
Not sure what to do first? Start here!
Read an Overview
Learn about adversarial robustness concepts, terminology, and tools before you begin.
Try Web Demos
Explore interactive web demos that illustrate the capabilities available in this toolbox.
Watch Videos
Watch videos to learn more about the Adversarial Robustness Toolbox.
Read a Paper
Read a paper describing the details of the Adversarial Robustness Toolbox.
Step Through a Tutorial
Step through a set of in-depth examples to evaluate and defend AI models.
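The tutorials cover the defense side as well as the attack side. As a hypothetical sketch of one defense they walk through, adversarial training, the snippet below retrains a model on a mix of clean and perturbed inputs; it reuses a hand-written fast-gradient-sign perturbation for a logistic-regression model rather than ART's own trainer classes, so everything runs on scikit-learn alone.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def fgsm(model, X, y, eps):
    # Fast-gradient-sign perturbation for a fitted logistic-regression model:
    # step each input in the direction that increases its log-loss.
    w, b = model.coef_[0], model.intercept_[0]
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

# Baseline: how well does the undefended model hold up under attack?
X_adv = fgsm(clf, X, y, eps=0.5)
baseline_adv_acc = clf.score(X_adv, y)

# Adversarial training: refit on clean plus adversarial examples, then
# re-attack the retrained model to evaluate its robustness.
robust = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_adv]), np.concatenate([y, y]))
robust_adv_acc = robust.score(fgsm(robust, X, y, eps=0.5), y)
print(f"adversarial accuracy before: {baseline_adv_acc:.2f}, "
      f"after: {robust_adv_acc:.2f}")
```

How much a single round of retraining helps depends on the data, the model, and the attack budget; the in-depth examples evaluate these trade-offs on real models and datasets.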
Ask a Question
Visit our Adversarial Robustness Toolbox Slack to ask questions, give feedback, and share how you use the toolbox.
View Notebooks
Open a directory of Jupyter notebooks on GitHub that provide working examples of adversarial robustness. Then share your own notebooks!
Contribute
You can add new algorithms and metrics on GitHub. Share Jupyter notebooks showcasing how you have tested your machine learning applications for vulnerabilities.