Date: Fri, 16 Jan 2015 04:04:05 +0000
<p>My guest this week is Anh Nguyen, a PhD student at the University of Wyoming working in the <a href="http://www.evolvingai.org/">Evolving AI lab</a>. The episode discusses the paper <a href="http://arxiv.org/pdf/1412.1897v2.pdf">Deep Neural Networks are Easily Fooled [pdf]</a> by Anh Nguyen, Jason Yosinski, and Jeff Clune. It describes a process for creating images that a trained deep neural network will misclassify: given a network trained to recognize certain types of objects in images, "fooling" images can be constructed in such a way that the network misclassifies them, even though to a human observer they often bear no resemblance whatsoever to the assigned label. Previous work had shown that images which look to us like unrecognizable white noise can fool a deep neural network. This paper extends that result, showing that abstract images of shapes and colors, many of which have a recognizable form (just not the one the network thinks), can also trick the network.</p>
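<p>For listeners who want to experiment, the sketch below shows one simple way to produce such an image: gradient ascent on the input pixels to maximize a chosen class score. Note that this is only one of the approaches explored in the paper (their main results use evolutionary algorithms with direct and CPPN image encodings), and the model, class index, and hyperparameters here are illustrative assumptions, not the authors' exact setup.</p>
<pre>
# Minimal sketch: generate a "fooling" image by gradient ascent on the input,
# assuming PyTorch/torchvision and a pretrained ImageNet classifier.
import torch
import torchvision.models as models

model = models.alexnet(pretrained=True).eval()
target_class = 99                     # arbitrary ImageNet class index (illustrative)
image = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.SGD([image], lr=0.5)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # Ascend the target class score by minimizing its negative logit.
    loss = -logits[0, target_class]
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)           # keep pixel values in a valid range

confidence = torch.softmax(model(image), dim=1)[0, target_class].item()
print(f"Network confidence in class {target_class}: {confidence:.2%}")
</pre>
<p>After a few hundred steps the network typically reports very high confidence in the target class, while the resulting image looks like noise or abstract texture to a person, which is the core phenomenon the paper investigates.</p>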