Microsoft Wants to Help Improve Security: What about Its Engineering of Security

August 24, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Microsoft is a an Onion subject when it comes to security. Black hat hackers easily crack any new PC code as soon as it is released. Generative AI adds a new slew of challenges for bad actors but Microsoft has taken preventative measures to protect their new generative AI tools. Wired details how Microsoft has invested in AI security for years, “Microsoft’s AI Red Team Has Already Made The Case For Itself.”

While generative AI aka chatbots aka AI assistants are new for consumers, tech professionals have been developing them for years. While the professionals have experimented with the best ways to use the technology, they have also tested the best way to secure AI.

Microsoft shared that since 2018 it has had a team learning how to attack its AI platforms to discover weaknesses. Known as Microsoft’s AI red team, the group consists of an interdisciplinary team of social engineers, cybersecurity engineers, and machine learning experts. The red team shares its findings with its parent company and the tech industry. Microsoft wants the information known across the tech industry. The team learned that AI security has conceptual differences from typical digital defense so AI security experts need to alter their approach to their work.

“ ‘When we started, the question was, ‘What are you fundamentally going to do that’s different? Why do we need an AI red team?’ says Ram Shankar Siva Kumar, the founder of Microsoft’s AI red team. ‘But if you look at AI red teaming as only traditional red teaming, and if you take only the security mindset, that may not be sufficient. We now have to recognize the responsible AI aspect, which is accountability of AI system failures—so generating offensive content, generating ungrounded content. That is the holy grail of AI red teaming. Not just looking at failures of security but also responsible AI failures.’”

Kumar said it took time to make the distinction and that red team with have a dual mission. The red team’s early work focused on designing traditional security tools. As time passed, the AI read team expanded its work to incorporate machine learning flaws and failures.

The AI red team also concentrates on anticipating where attacks could emerge and developing solutions to counter them. Kumar explains that while the AI red team is part of Microsoft, they work to defend the entire industry.

Whitney Grace, August 24, 2023

Comments

Comments are closed.

  • Archives

  • Recent Posts

  • Meta