When hackers take on AI: Sci-fi – or the future?

Deep Dive: AI

Because we lack a fundamental understanding of the internal mechanisms of current AI models, today’s guest has a few theories about what these models might do when they encounter situations outside their training data, with potentially catastrophic results. Tuning in, you’ll hear from Connor Leahy, one of the founders of EleutherAI, a grassroots collective of researchers working to open source AI research. He’s also Founder and CEO of Conjecture, a startup doing fascinating research into the interpretability and safety of AI. We talk more about this in today’s episode, with Leahy elaborating on some of the technical problems that he and other researchers are running into and the creativity that will be required to solve them. We also look at some of the nefarious ways he sees AI evolving in the future and how he believes computer security hackers could help mitigate these risks without curbing technological progress. We close on an optimistic note, with Leahy encouraging early-career researchers to focus on the ‘massive orchard’ of low-hanging fruit in interpretability and AI safety and sharing his vision for this extremely valuable field of research.

To learn more, make sure not to miss this fascinating conversation with EleutherAI Founder, Connor Leahy! Full transcript. 

Key Points From This Episode:

  • The true story of how EleutherAI started as a hobby project during the pandemic.
  • Why Leahy believes that it’s critical that we understand AI technology.
  • The importance of making AI more accessible to those who can do valuable research.
  • What goes into building a large language model: data, engineering, and compute.
  • Leahy offers some insight into the truly monumental volume of data required to train these models and where it is sourced from.
  • A look at Leahy’s (very specific) perspective on making EleutherAI’s models public.
  • Potential consequences of releasing these models; will they be used for good or evil?
  • Some of the nefarious ways in which Leahy sees AI technology evolving in the future.
  • Mitigating the risks that AI poses; how we can prevent these systems from spinning out of control without curbing progress.
  • Focusing on solvable technical problems to build systems with embedded safeguards.
  • Why Leahy wishes more computer security hackers would work on AI problems.
  • Low-hanging fruit in interpretability and AI safety for early-career researchers.
  • Why Leahy is optimistic about understanding these problems better going forward.
  • The creativity required to come up with new ways of thinking about these problems.
  • In closing, Leahy encourages listeners to take a shot at linear algebra, interpretability, and understanding neural networks.

Links Mentioned in Today’s Episode:

Credits

Special thanks to volunteer producer, Nicole Martinelli. Music by Jason Shaw, Audionautix.

This podcast is sponsored by GitHub, DataStax and Google.

No sponsor had any right or opportunity to approve or disapprove the content of this podcast.

The views expressed in this podcast are the personal views of the speakers and are not the views of their employers, the organizations they are affiliated with, their clients or their customers. The information provided is not legal advice.

