Date: Tue, 29 Jun 2021 21:00:00 -0400
<div class="wp-block-jetpack-markdown"><h2>Summary</h2> <p>Deep learning has largely taken over the research and applications of artificial intelligence, with some truly impressive results. The challenge that it presents is that for reasonable speed and performance it requires specialized hardware, generally in the form of a dedicated GPU (Graphics Processing Unit). This raises the cost of the infrastructure, adds deployment complexity, and drastically increases the energy requirements for training and serving of models. To address these challenges Nir Shavit combined his experiences in multi-core computing and brain science to co-found Neural Magic where he is leading the efforts to build a set of tools that prune dense neural networks to allow them to execute on commodity CPU hardware. In this episode he explains how sparsification of deep learning models works, the potential that it unlocks for making machine learning and specialized AI more accessible, and how you can start using it today.</p> <h2>Announcements</h2> <ul> <li>Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.</li> <li>When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to <a href="https://www.pythonpodcast.com/linode?utm_source=rss&utm_medium=rss">pythonpodcast.com/linode</a> and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!</li> <li>Your host as usual is Tobias Macey and today I’m interviewing Nir Shavit about Neural Magic and the benefits of using sparsification techniques for deep learning models</li> </ul> <h2>Interview</h2> <ul> <li>Introductions</li> <li>How did you get introduced to Python?</li> <li>Can you describe what Neural Magic is and the story behind it?</li> <li>What are the attributes of deep learning architectures that influence the bias toward GPU hardware for training them? <ul> <li>What are the mathematical aspects of neural networks that have biased the current generation of software tools toward that architectural style?</li> </ul> </li> <li>How does sparsifying a network architecture allow for improved performance on commodity CPU architectures?</li> <li>What is involved in converting a dense neural network into a sparse network?</li> <li>Can you describe the components of the Neural Magic architecture and how they are used together to reduce the footprint of deep learning architectures and accelerate their performance on CPUs? <ul> <li>What are some of the goals or design approaches that have changed or evolved since you first began working on the Neural Magic platform?</li> </ul> </li> <li>For someone who has an existing model defined, what is the process to convert it to run with the DeepSparse engine?</li> <li>What are some of the options for applications of deep learning that are unlocked by enabling the models to train and run without GPU or other specialized hardware?</li> <li>The current set of components for Neural Magic is either open source or free to use. 
<h2>Keep In Touch</h2>
<ul>
<li><a href="https://www.csail.mit.edu/person/nir-shavit" rel="noopener" target="_blank">Research Overview</a></li>
<li><a href="https://www.linkedin.com/in/nir-shavit-1b8b4412/" rel="noopener" target="_blank">LinkedIn</a></li>
</ul>
<h2>Picks</h2>
<ul>
<li>Tobias
<ul>
<li><a href="https://www.imdb.com/title/tt5540054/" rel="noopener" target="_blank">The Tick</a> TV show</li>
</ul>
</li>
<li>Nir
<ul>
<li><a href="https://www.openculture.com/2019/02/bauhaus-world.html" rel="noopener" target="_blank">Bauhaus</a> documentary</li>
</ul>
</li>
</ul>
<h2>Closing Announcements</h2>
<ul>
<li>Thank you for listening! Don’t forget to check out our other show, the <a href="https://feeds.fireside.fm/pythonpodcast/rss">Data Engineering Podcast</a>, for the latest on modern data management.</li>
<li>Visit the <a href="https://www.pythonpodcast.com">site</a> to subscribe to the show, sign up for the mailing list, and read the show notes.</li>
<li>If you’ve learned something or tried out a project from the show then tell us about it! Email <a href="mailto:hosts@podcastinit.com">hosts@podcastinit.com</a> with your story.</li>
<li>To help other people find the show please leave a review on <a href="https://itunes.apple.com/us/podcast/podcast.-init/id981834425?mt=2&uo=6" rel="noopener" target="_blank">iTunes</a> and tell your friends and co-workers.</li>
<li>Join the community in the new Zulip chat workspace at <a href="https://www.pythonpodcast.com/chat">pythonpodcast.com/chat</a>.</li>
</ul>
<h2>Links</h2>
<ul>
<li><a href="https://neuralmagic.com/" rel="noopener" target="_blank">Neural Magic</a></li>
<li><a href="https://web.mit.edu/" rel="noopener" target="_blank">MIT</a></li>
<li><a href="https://en.wikipedia.org/wiki/Computational_neuroscience" rel="noopener" target="_blank">Computational Neurobiology</a></li>
<li><a href="https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/" rel="noopener" target="_blank">6.006</a> MIT course</li>
<li><a href="https://en.wikipedia.org/wiki/FLOPS" rel="noopener" target="_blank">FLOPS (FLoating point OPerations per Second)</a></li>
<li><a href="https://en.wikipedia.org/wiki/Perceptron" rel="noopener" target="_blank">Perceptron</a></li>
<li><a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" rel="noopener" target="_blank">Convolutional Neural Network</a></li>
<li><a href="https://en.wikipedia.org/wiki/Lisp_(programming_language)" rel="noopener" target="_blank">Lisp</a></li>
href="https://towardsdatascience.com/how-to-accelerate-and-compress-neural-networks-with-quantization-edfbbabb6af7?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">Quantization of ML</a></li> <li><a href="https://towardsdatascience.com/yolo-you-only-look-once-real-time-object-detection-explained-492dc9230006?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">YOLO ML Model</a></li> <li><a href="https://en.wikipedia.org/wiki/Federated_learning?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">Federated Learning</a> <ul> <li><a href="https://www.pythonpodcast.com/flower-federated-learning-episode-314/?utm_source=rss&utm_medium=rss">Podcast Episode</a></li> </ul> </li> <li><a href="https://en.wikipedia.org/wiki/Reinforcement_learning?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">Reinforcement Learning</a></li> <li><a href="https://en.wikipedia.org/wiki/GPT-3?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">GPT-3</a></li> <li><a href="https://openai.com/?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">OpenAI</a></li> <li><a href="https://en.wikipedia.org/wiki/Transfer_learning?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">Transfer Learning</a> <ul> <li><a href="https://www.pythonpodcast.com/paul-azunre-transfer-learning-for-natural-language-processing-episode-315/?utm_source=rss&utm_medium=rss">Podcast Episode about Transfer Learning for NLP</a></li> </ul> </li> <li><a href="https://neuralmagic.com/blog/how-neural-magics-deep-sparse-technology-works/?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">Tensor Columns</a></li> <li><a href="https://docs.neuralmagic.com/deepsparse/?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">Neural Magic DeepSparse Engine</a></li> <li><a href="https://onnx.ai/?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">ONNX</a></li> <li><a href="https://en.wikipedia.org/wiki/CUDA?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">CUDA</a></li> <li><a href="https://github.com/neuralmagic/sparsezoo?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">Sparse Zoo</a></li> <li><a href="https://www.tabnine.com/?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">Tab9</a></li> </ul> <p>The intro and outro music is from Requiem for a Fish <a href="http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">The Freak Fandango Orchestra</a> / <a href="http://creativecommons.org/licenses/by-sa/3.0/?utm_source=rss&utm_medium=rss" rel="noopener" target="_blank">CC BY-SA</a></p> </div> <img alt="" height="0" src="https://analytics.boundlessnotions.com/piwik.php?idsite=1&rec=1&url=https%3A%2F%2Fwww.pythonpodcast.com%2Fneural-magic-deep-learning-sparse-networks-episode-321%2F&action_name=Lightening+The+Load+For+Deep+Learning+With+Sparse+Networks+Using+Neural+Magic+-+Episode+321&urlref=https%3A%2F%2Fwww.pythonpodcast.com%2Ffeed%2F&utm_source=rss&utm_medium=rss" style="border: 0; width: 0; height: 0;" width="0" />