Explore 8 lessons to help business leaders align AI red teaming efforts with real-world risks to help ensure the safety and reliability of AI applications.
The group responsible for red teaming of over 100 generative AI products at Microsoft has concluded that the work of building ...
Microsoft’s AI red team was established in 2018 to address the evolving landscape of AI safety and security risks. The team ...
Microsoft's red team uncovers AI vulnerabilities by thinking like adversaries to strengthen generative AI systems.
Microsoft AI Red Team has lessons and case studies for MSSPs and cybersecurity professionals to heed around artificial intelligence, machine learning, LLMs.
According to a whitepaper from Redmond’s AI red team, tools like its open source PyRIT (Python Risk Identification Toolkit) ...
Scientists have warned that artificial intelligence (AI) has crossed a "red line," as the technology now has the ability to ...
Affiliate Ram Shankar Siva Kumar and coauthors suggest ways that Microsoft and other tech giants can mitigate the security risks inherent in emerging AI technologies.
Companies are feeling pressure to adopt generative AI to stay ahead in a competitive global market. To ensure they adopt responsibly, governing and regulatory bodies across the world are convening ...
While automation is a valuable tool for scaling red teaming efforts, it cannot replace human judgment and creativity. Subject ...