Linking pages
- Microsoft’s AI Red Team Has Already Made the Case for Itself | WIRED https://www.wired.com/story/microsoft-ai-red-team/ 1 comment
- From Google To Nvidia, Tech Giants Have Hired Hackers To Break AI Models https://www.forbes.com/sites/rashishrivastava/2023/09/01/ai-red-teams-google-nvidia-microsoft-meta/ 1 comment
- Adversarial machine learning explained: How attackers disrupt AI and ML systems | CSO Online https://www.csoonline.com/article/3664748/adversarial-machine-learning-explained-how-attackers-disrupt-ai-and-ml-systems.html 0 comments
Linked pages
- GitHub - Azure/counterfit: a CLI that provides a generic automation layer for assessing the security of ML models https://github.com/Azure/counterfit/ 0 comments
- GitHub - Trusted-AI/adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams https://github.com/Trusted-AI/adversarial-robustness-toolbox 0 comments
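The two repositories above are the tooling behind the blog post's risk assessment workflow. As a rough illustration of what the Adversarial Robustness Toolbox is used for (a sketch written for this page, not code taken from the linked repos or articles), the snippet below wraps an ordinary scikit-learn classifier in ART and generates evasion examples with the Fast Gradient Method; the Iris dataset, logistic regression model, and eps value are arbitrary choices for demonstration.

```python
# Minimal sketch of an ART evasion test (illustrative only).
# Assumes: pip install adversarial-robustness-toolbox scikit-learn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary scikit-learn model to act as the "target" system.
x, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(x, y)

# Wrap the fitted model so ART attacks can query its predictions and gradients.
classifier = SklearnClassifier(model=model, clip_values=(x.min(), x.max()))

# Fast Gradient Method: nudge each input within an eps budget along the loss
# gradient so predictions flip while the perturbation stays small.
attack = FastGradientMethod(estimator=classifier, eps=0.5)
x_adv = attack.generate(x=x)

clean_acc = (model.predict(x) == y).mean()
adv_acc = (model.predict(x_adv) == y).mean()
print(f"accuracy on clean inputs: {clean_acc:.2f}, on adversarial inputs: {adv_acc:.2f}")
```

On a toy setup like this the accuracy on adversarial inputs typically falls well below the clean accuracy, which is the kind of gap an AI red team measures and reports.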