Building creative restrictions to curb AI abuse

Along with all the positive, revolutionary aspects of AI comes a more sinister side. Joining us today to discuss ethics in AI from the developer’s point of view is David Gray Widder. David is currently doing his Ph.D. at the School of Computer Science at Carnegie Mellon University, where he investigates AI from an ethical perspective, homing in specifically on the ethics-related challenges faced by AI software engineers. His research has been conducted at Intel Labs, Microsoft, and NASA’s Jet Propulsion Lab. In this episode, we discuss the harmful uses of deep fakes and their ethical ramifications in proprietary versus open source contexts. Widder breaks down the notions of technological inevitability and technological neutrality and explains the importance of challenging both ideas. He has also identified a continuum between implementation-based harms and use-based harms, and he fills us in on how each plays out in the open source development space.

Tune in to find out more about the importance of curbing AI abuse and the creativity required to do so, as well as the strengths and weaknesses of open source in terms of AI ethics. Full transcript.

Key points from this episode:

  • Introducing David Gray Widder, a Ph.D. student researching AI ethics.
  • Why he chose to focus his research on ethics in AI, and what drives that research.
  • Widder explains deep fakes and gives examples of their uses.
  • Sinister uses of deep fakes and the danger thereof.
  • The ethical ramifications of deep fake tech in proprietary versus open source contexts.
  • The kinds of harms that can be prevented in open source versus proprietary contexts.
  • The licensing issues that result in developers relinquishing control (and responsibility) over the uses of their tech.
  • Why Widder is critical of the notions of both technological inevitability and neutrality.
  • Why it’s important to challenge the idea of technological neutrality.
  • The potential to build restrictions, even within the dictates of open source.
  • The continuum between implementation-based harms and use-based harms.
  • How open source allows for increased scrutiny of implementation-based harms but decreased accountability for use-based harms.
  • The insight Widder gleaned from observing NASA’s use of AI, and how it applies to the deep fake case.
  • Widder voices his legal concerns around Copilot.
  • The difference between laws and norms.
  • How we’ve been unsuspectingly providing data by uploading photos online.
  • Why it’s important to include open source and public sector organizations in the ethical AI conversation.
  • Open source strengths and weaknesses in terms of the ethical use of AI.

Credits

Special thanks to volunteer producer, Nicole Martinelli. Music by Jason Shaw, Audionautix.

This podcast is sponsored by GitHub, DataStax and Google.

The views expressed in this podcast are the personal views of the speakers and are not the views of their employers, the organizations they are affiliated with, their clients or their customers. The information provided is not legal advice. No sponsor had any right or opportunity to approve or disapprove the content of this podcast.


    Other Episodes

    How to secure AI systems

    How DARPA is building tools and a community to secure artificial intelligence systems and prevent nightmare scenarios.

    Why Debian won’t distribute AI models any time soon

    Welcome to a brand new episode of Deep Dive: AI! For today’s conversation, we are joined by Mo Zhou, a Ph.D. student at Johns Hopkins University and an official Debian developer since 2018. Tune in as Mo speaks to the evolving role of artificial intelligence driven by...