Why Debian won’t distribute AI models any time soon

Deep Dive: AI

Welcome to a brand new episode of Deep Dive: AI! For today’s conversation, we are joined by Mo Zhou, a PhD student at Johns Hopkins University and an official Debian developer since 2018. Tune in as Mo discusses the evolving role of artificial intelligence, driven by big data and hardware capacity, and shares key insights into what sets AlphaGo apart from previous algorithms, what makes an application integral, and why training data must be released along with any free software. You’ll also learn about validation data and the difference powerful hardware makes, as well as why Debian is so strict in its practice of offering free software. Finally, Mo shares his predictions for the free software community (and what he would like to see happen in an ideal world), along with his own plans for the future, which include a strong element of research.

If you’re looking to learn about the uphill climb for open source artificial intelligence, plus so much more, you won’t want to miss this episode!

Full transcript

Key points from this episode:

  • Background on today’s guest, Mo Zhou: PhD student and Debian developer.
  • His recent Machine Learning Policy proposal at Debian.
  • Defining artificial intelligence and its evolution, driven by big data and hardware capacity.
  • Why the recent advancements in deep learning would be impossible without hardware. 
  • Where AlphaGo differs from past algorithms.
  • The role of data, training code, and inference code in making an application integral.
  • Why you have to release training data with any free software.
  • The financial and time expense of classifying images.
  • What you need access to in order to modify an existing model.
  • The validation data set collected by the research community.
  • Predicting the process of retraining.
  • What you can gain from powerful hardware.
  • Why Debian is so strict in the practice of free software. 
  • Problems that occur when big companies charge for their ecosystems.
  • What Zhou is expecting from the future of the free software community.
  • Which licensing schemes are most popular and why.
  • An ideal future for Open Source AI.
  • Zhou’s plans for the future and why they include research.

Links mentioned in today’s episode:

Credits

Special thanks to volunteer producer, Nicole Martinelli. Music by Jason Shaw, Audionautix.

This podcast is sponsored by GitHub, DataStax and Google.

No sponsor had any right or opportunity to approve or disapprove the content of this podcast.

The views expressed in this podcast are the personal views of the speakers and are not the views of their employers, the organizations they are affiliated with, their clients or their customers. The information provided is not legal advice.

