Linking pages
- How does AI fail? - by Thierry Decroix https://int3.substack.com/p/how-does-ai-fail 1 comment
- Q1 & Q2, 2024 Update: A Comprehensive Guide for GenAI Safety and Security https://securedgenai.substack.com/p/q1-and-q2-2024-update-a-comprehensive 1 comment
- Secure your machine learning with Semgrep | Trail of Bits Blog https://blog.trailofbits.com/2022/10/03/semgrep-maching-learning-static-analysis/ 0 comments
- GitHub - jphall663/awesome-machine-learning-interpretability: A curated list of awesome machine learning interpretability resources. https://github.com/jphall663/awesome-machine-learning-interpretability 0 comments
- GitHub - EthicalML/awesome-production-machine-learning: A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning https://github.com/EthicalML/awesome-production-machine-learning 0 comments
- AI security risk assessment using Counterfit - Microsoft Security Blog https://www.microsoft.com/security/blog/2021/05/03/ai-security-risk-assessment-using-counterfit/ 0 comments
- Adversarial machine learning explained: How attackers disrupt AI and ML systems | CSO Online https://www.csoonline.com/article/3664748/adversarial-machine-learning-explained-how-attackers-disrupt-ai-and-ml-systems.html 0 comments
- How to start Penetration testing of Artificial Intelligence | by Taimur Ijlal | InfoSec Write-ups https://infosecwriteups.com/how-to-start-penetration-testing-of-artificial-intelligence-c11e97b77dfa 0 comments
- GitHub - jiep/offensive-ai-compilation: A curated list of useful resources that cover Offensive AI. https://github.com/jiep/offensive-ai-compilation 0 comments
Linked pages
- Microsoft Azure https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns 187 comments
- Latest supported Visual C++ Redistributable downloads | Microsoft Learn https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads 15 comments
- GitHub - facebookresearch/AugLy: A data augmentations library for audio, image, text, and video. https://github.com/facebookresearch/AugLy 2 comments
- GitHub - Trusted-AI/adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams https://github.com/Trusted-AI/adversarial-robustness-toolbox 0 comments (a minimal usage sketch follows after this list)
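Of the libraries linked above, the Adversarial Robustness Toolbox (ART) is the most direct way to try an evasion attack yourself. The sketch below is only illustrative, not taken from any linked page: it assumes ART and scikit-learn are installed, uses the Iris dataset and a logistic regression model purely as stand-ins, and picks an arbitrary perturbation budget (eps=0.1).

```python
# Minimal evasion-attack sketch with ART (assumes: pip install adversarial-robustness-toolbox scikit-learn).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train an ordinary scikit-learn model; features are scaled to [0, 1] so clip_values below make sense.
x, y = load_iris(return_X_y=True)
x = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(x_train, y_train)

# Wrap the fitted model so ART can query predictions and gradients.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Fast Gradient Method: perturb each test sample by at most eps per feature.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_test_adv = attack.generate(x=x_test)

clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == y_test)
print(f"accuracy on clean inputs:       {clean_acc:.2f}")
print(f"accuracy on adversarial inputs: {adv_acc:.2f}")
```

The drop from clean to adversarial accuracy is the kind of measurement that tools like Counterfit automate across many models and attack types.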
Related searches:
Search whole site: site:github.com
Search title: GitHub - Azure/counterfit: a CLI that provides a generic automation layer for assessing the security of ML models