seq2seq

Data Skeptic

Date: Fri, 01 Mar 2019 16:00:00 +0000

<p>A sequence-to-sequence (or seq2seq) model is a neural architecture, used for translation and other tasks, which consists of an encoder and a decoder.</p>

<p>The encoder/decoder architecture has obvious promise for machine translation, and has been applied successfully in that setting. Compressing an input sequence into a small number of hidden nodes, from which a matching output string can be decoded, forces the model to learn an efficient representation of the essence of the input (a minimal code sketch follows the links below).</p>

<p>In addition to translation, seq2seq models have been used in a number of other NLP tasks, such as summarization and image captioning.</p>

<p><strong>Related Links</strong></p>

<ul>
<li><p><a href="https://google.github.io/seq2seq/">tf-seq2seq</a></p></li>
<li><p><a href="https://arxiv.org/abs/1507.01053v1">Describing Multimedia Content using Attention-based Encoder-Decoder Networks</a></p></li>
<li><p><a href="https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Vinyals_Show_and_Tell_2015_CVPR_paper.pdf">Show and Tell: A Neural Image Caption Generator</a></p></li>
<li><p><a href="https://arxiv.org/pdf/1704.06485.pdf">Attend to You: Personalized Image Captioning with Context Sequence Memory Networks</a></p></li>
</ul>
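<p>As a rough illustration of the encoder/decoder idea described above (not from the episode), here is a minimal GRU-based seq2seq sketch, assuming PyTorch. All class names, vocabulary sizes, and hidden sizes are illustrative assumptions, not anything the episode specifies.</p>

<pre><code>
# Minimal GRU-based encoder/decoder sketch in PyTorch.
# All sizes and names are illustrative, not from the episode.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids -> final hidden state (1, batch, hidden),
        # the small fixed-size "essence" of the input sequence
        _, hidden = self.gru(self.embed(src))
        return hidden

class Decoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tgt, hidden):
        # tgt: (batch, tgt_len) token ids; hidden seeds the decoder state
        output, hidden = self.gru(self.embed(tgt), hidden)
        return self.out(output), hidden  # per-step vocabulary logits

# Toy usage: the encoder's final hidden state initializes the decoder.
enc, dec = Encoder(1000, 128), Decoder(1000, 128)
src = torch.randint(0, 1000, (4, 12))   # batch of 4 source sequences
tgt = torch.randint(0, 1000, (4, 9))    # teacher-forced target inputs
logits, _ = dec(tgt, enc(src))          # (4, 9, 1000) next-token scores
</code></pre>

<p>In this sketch the decoder sees only the encoder's final hidden state, which is exactly the bottleneck described above; the attention-based encoder/decoder papers linked in the show notes relax that constraint by letting the decoder look back at every encoder state.</p>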