How to secure AI systems


With so many artificial systems claiming “intelligence” now available to the public, making sure they do what they’re designed to do is of the utmost importance. Dr. Bruce Draper, Program Manager in the Information Innovation Office at DARPA, joins us on this bonus episode of Deep Dive: AI to unpack his work in the field and his current role. We have a fascinating chat with Draper about the risks and opportunities in this exciting field, and why growing bigger, more involved Open Source communities is better for everyone. Draper introduces us to the Guaranteeing AI Robustness Against Deception (GARD) project, its main short-term goals, and how these aim to mitigate exposure to danger while we explore the possibilities that machine learning offers. We also spend time discussing the agency’s Open Source philosophy and foundation, the AI boom of recent years, why policy making is so critical, the split between academic and corporate contributions, and much more. For Draper, community involvement is critical for spotting potential issues and threats. Tune in to hear it all from this exceptional guest! Read the full transcript.

Key points from this episode:

  • The objectives of the GARD project and DARPA’s broader mission.
  • How the Open Source model plays into the research strategy at DARPA.
  • Differences between machine learning and more traditional IT systems.
  • Draper talks about his ideas for ideal communities and the role of stakeholders.
  • Key factors behind the ‘extended summer of AI’ we have been experiencing.
  • Getting involved in the GARD Project and how the community makes the systems more secure.
  • The main impetus for the AI community to address these security concerns.
  • Draper explains the complications of safety-critical AI systems.
  • Deployment opportunities and concurrent development for optimum safety.
  • Thoughts on the scope and role of policy makers in the AI security field.
  • The need for a deeper theoretical understanding of possible and present threats.
  • Draper talks about the broader goal of a self-sustaining Open Source community.
  • Plotting the future role and involvement of DARPA in the community.
  • The partners that DARPA works with: academic and corporate.
  • The story of how Draper got involved with the GARD Project and adversarial AI.
  • Looking at the near future for Draper and DARPA.
  • Reflections on the last few years in AI and how much of this could have been predicted.

Special thanks to volunteer producer, Nicole Martinelli. Music by Jason Shaw, Audionautix.

This podcast is sponsored by GitHub, DataStax and Google.

No sponsor had any right or opportunity to approve or disapprove the content of this podcast.

The views expressed in this podcast are the personal views of the speakers and are not the views of their employers, the organizations they are affiliated with, their clients or their customers. The information provided is not legal advice.

