Date: Fri, 03 Nov 2017 15:00:00 +0000
<p>In this episode, Professor Michael Kearns from the University of Pennsylvania joins host Kyle Polich to talk about the computational complexity of machine learning, complexity in game theory, and algorithmic fairness. Michael's doctoral thesis gave an early, broad overview of computational learning theory, emphasizing the mathematical study of efficient learning algorithms by machines or computational systems.</p>
<p class="p1"><span class="s1">Machine learning algorithms are, in some sense, meta-algorithms: given some data, a learning algorithm builds a model, and it will presumably behave very differently under different inputs. Does that mean we need new analytical tools? Or is a machine learning algorithm just another deterministic algorithm, merely trickier to analyze complexity-wise? In other words, how much overlap is there between the good old-fashioned analysis of algorithms and the complexity analysis of machine learning algorithms? And how do the strategies for proving complexity bounds on samples differ from those for bounding the algorithms themselves?</span></p>
<p class="p1"><span class="s1">A major topic Michael and Kyle discuss, central to machine learning and to the analysis of learning algorithms generally, is complexity regularization. Complexity regularization asks: how should one measure the goodness of fit and the complexity of a given model, how should one balance the two, and how can that balance be computed in a scalable, efficient way? (A small illustrative sketch appears at the end of these notes.) From there, Michael and Kyle step back to the broader question of why one should care whether a problem is efficiently learnable, that is, learnable in polynomial time.</span></p>
<p class="p1"><span class="s1">Another interesting topic of discussion is the difference between sample complexity and computational complexity. An active area of research is how to regularize models so that complexity stays balanced against goodness of fit as training samples grow large.</span></p>
<p class="p1"><span class="s1">As mentioned, a good resource for getting started with correlated equilibria is: <a href="https://www.cs.cornell.edu/courses/cs684/2004sp/feb20.pdf">https://www.cs.cornell.edu/courses/cs684/2004sp/feb20.pdf</a> (a small worked check also appears at the end of these notes).</span></p>
<p>Thanks to our sponsors:</p>
<p><a href="http://mendoza.nd.edu/dataskeptics/">Mendoza College of Business</a> - Get your Master of Science in Business Analytics from Notre Dame.</p>
<p><a href="https://brilliant.org/dataskeptics">brilliant.org</a> - A fun, affordable, online learning tool. Check out their Computer Science Algorithms course.</p>
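<p class="p1"><span class="s1">To make the complexity-regularization idea concrete, here is a minimal sketch in Python: choose a model's complexity (here, a polynomial degree) by minimizing training error plus a penalty that grows with complexity and shrinks with sample size. The synthetic data and the BIC-style penalty term are illustrative assumptions, not a formulation taken from the episode.</span></p>
<pre><code># Minimal sketch of complexity regularization: pick a polynomial
# degree d by minimizing  training_error + penalty(d, n),  where the
# penalty grows with model complexity and shrinks with sample size n.
# The penalty form below (a simple BIC-style term) and the synthetic
# data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + rng.normal(0, 0.3, n)  # noisy synthetic sample

def penalized_score(degree):
    coeffs = np.polyfit(x, y, degree)        # fit a model of this complexity
    residuals = y - np.polyval(coeffs, x)
    mse = np.mean(residuals ** 2)            # goodness of fit
    penalty = (degree + 1) * np.log(n) / n   # complexity penalty
    return mse + penalty

best = min(range(1, 15), key=penalized_score)
print("degree chosen by complexity regularization:", best)
</code></pre>
<p class="p1"><span class="s1">With a larger sample, the penalty term shrinks and the procedure tolerates richer models; with a small sample, it prefers simpler ones. That trade-off is exactly the balance between goodness of fit and complexity discussed above.</span></p>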
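<p class="p1"><span class="s1">And to make correlated equilibria concrete, here is a small check of the classic "traffic signal" equilibrium in the game of chicken, a standard textbook example in the spirit of the Cornell notes linked above (the specific payoffs and distribution are the usual textbook choices, assumed here for illustration): a mediator draws a joint action and privately recommends each player their own component, and no player can gain by deviating from the recommendation.</span></p>
<pre><code># Minimal sketch: verify that a distribution over joint actions is a
# correlated equilibrium of the game of chicken. Payoffs and the
# 1/3-1/3-1/3 distribution follow the standard textbook example.

actions = ["dare", "chicken"]
# payoff[(a1, a2)] = (player 1 payoff, player 2 payoff)
payoff = {
    ("dare", "dare"): (0, 0),
    ("dare", "chicken"): (7, 2),
    ("chicken", "dare"): (2, 7),
    ("chicken", "chicken"): (6, 6),
}
# The mediator draws a joint action from this distribution and
# privately recommends each player their own component.
dist = {
    ("dare", "chicken"): 1 / 3,
    ("chicken", "dare"): 1 / 3,
    ("chicken", "chicken"): 1 / 3,
}

def is_correlated_equilibrium(dist, payoff):
    # For each player, each recommended action, and each deviation,
    # following the recommendation must be weakly better in expectation
    # over the opponent's conditional play.
    for player in (0, 1):
        for rec in actions:
            for dev in actions:
                gain = 0.0
                for joint, p in dist.items():
                    if joint[player] != rec:
                        continue
                    deviated = list(joint)
                    deviated[player] = dev
                    gain += p * (payoff[tuple(deviated)][player]
                                 - payoff[joint][player])
                if gain > 1e-12:   # a profitable deviation exists
                    return False
    return True

print(is_correlated_equilibrium(dist, payoff))  # True
</code></pre>
<p class="p1"><span class="s1">Note that this distribution is not a product of independent strategies, which is what distinguishes a correlated equilibrium from a Nash equilibrium; the shared signal correlates the players' play.</span></p>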