On Generative AI Security

On Generative AI Security

Microsoft’s AI Red Team just published “Lessons from
Red Teaming 100 Generative AI Products
.” Their blog post lists “three takeaways,” but the eight lessons in the report itself are more useful:

  1. Understand what the system can do and where it is applied.
  2. You don’t have to compute gradients to break an AI system.
  3. AI red teaming is not safety benchmarking.
  4. Automation can help cover more of the risk landscape.
  5. The human element of AI red teaming is crucial.
  6. Responsible AI harms are pervasive but difficult to measure.
  7. LLMs amplify existing security risks and introduce new ones.
  8. The work of securing AI systems will never be complete.

Contact Us


    Please use this form to contact us or email us at [email protected]

    Address

    Singapore CBD

    Phone-no

    +65 8714 2780