{"id":31937,"date":"2017-04-11T18:19:11","date_gmt":"2017-04-11T15:19:11","guid":{"rendered":"https:\/\/www.altoros.com\/blog\/?p=31937"},"modified":"2018-08-01T17:07:52","modified_gmt":"2018-08-01T14:07:52","slug":"the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow","status":"publish","type":"post","link":"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/","title":{"rendered":"The Magic Behind Google Translate: Sequence-to-Sequence Models and TensorFlow"},"content":{"rendered":"<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_79_2 counter-hierarchy ez-toc-counter ez-toc-transparent ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list 
ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#A_technology_underlying_Google_Translate\" >A technology underlying Google Translate<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#Reading_and_batching_sequence_data\" >Reading and batching sequence data<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#Fully_dynamic_calculation\" >Fully dynamic calculation<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#Want_details_Watch_the_video\" >Want details? Watch the video!<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#Further_reading\" >Further reading<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#About_the_expert\" >About the expert<\/a><\/li><\/ul><\/nav><\/div>\n<h3><span class=\"ez-toc-section\" id=\"A_technology_underlying_Google_Translate\"><\/span>A technology underlying Google Translate<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>What is the magic that makes Google products so powerful? 
At TensorFlow Dev Summit 2017, attendees learned about the sequence-to-sequence models that power language-processing apps like Google Translate. <a href=\"https:\/\/www.linkedin.com\/in\/eugene-brevdo-99a056a\/\" target=\"_blank\">Eugene Brevdo<\/a> of Google Brain discussed the creation of flexible and high-performance sequence-to-sequence models. In his session, Eugene explored:<\/p>\n<ul>\n<li>reading and batching sequence data<\/li>\n<li>the RNN API<\/li>\n<li>fully dynamic calculation<\/li>\n<li>fused RNN cells for optimizations<\/li>\n<li>dynamic decoding<\/li>\n<\/ul>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-v12.jpg\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-v12-1024x678.jpg\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-31946\" \/><\/a><\/center><\/p>\n<p>He cited Google Translate as an example; it is now powered by a neural network, specifically by a sequence-to-sequence model. (Previously, <a href=\"https:\/\/www.linkedin.com\/in\/xiaobing-liu-27990a49\/\" target=\"_blank\">Xiaobing Liu<\/a>, another member of the Google Brain team, talked about automating <a href=\"https:\/\/www.altoros.com\/blog\/enabling-multilingual-neural-machine-translation-with-tensorflow\/\" >conventional phrase-based machine translation<\/a> in this product.) <\/p>\n<p>A sequence-to-sequence model is basically two <a href=\"https:\/\/www.altoros.com\/blog\/recurrent-neural-networks-classifying-diagnoses-with-long-short-term-memory\/\">recurrent neural networks<\/a>: an <em>encoder<\/em> and a <em>decoder<\/em>. The encoder reads in one word or a word piece at a time, creating some intermediate representation. Then, the decoder receives a start token that triggers decoding\u2014also a single word or word piece at a time\u2014in the target language. 
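This decoding loop can be sketched in plain Python (a toy illustration with made-up names, not the TensorFlow API):

```python
# A toy greedy-decoding loop: the decoder starts from a start token
# and feeds every emitted token back in as input for the next step,
# until an end-of-sequence token appears. `step_fn` is a stand-in
# for one decoder step (emit a token, update the hidden state).
def greedy_decode(step_fn, state, start="<s>", end="</s>", max_len=50):
    token, output = start, []
    for _ in range(max_len):
        token, state = step_fn(token, state)  # emit a token, update state
        if token == end:
            break
        output.append(token)
    return output

# toy "decoder": the state is simply the list of tokens left to emit
step = lambda tok, s: (s[0], s[1:])
translation = greedy_decode(step, ["hola", "mundo", "</s>"])
```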
After emitting a token, the decoder feeds it back in at the next time step, so that the token and the previous state are used to figure out what to emit next.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-keynote-v12.jpg\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-keynote-v12-1024x678.jpg\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-31943\" \/><\/a><\/center><\/p>\n<p>What may look quite trivial is more sophisticated in reality.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-keynote-sequence-models-12.jpg\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-keynote-sequence-models-12-1024x571.jpg\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-31942\" \/><\/a><\/center><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Reading_and_batching_sequence_data\"><\/span>Reading and batching sequence data<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>When scaling up a training model, it\u2019s important to be able to feed in large volumes of variable-length data in mini batches. For this purpose, the Google Brain team employs the SequenceExample protocol, which is a language- and architecture-agnostic data storage format. (Furthermore, there is a <a href=\"https:\/\/developers.google.com\/protocol-buffers\/\" target=\"_blank\">collection<\/a> of such protocol buffers for serializing structured data.)<\/p>\n<p>The <a href=\"https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/train\/SequenceExample\" target=\"_blank\">SequenceExample format<\/a> is designed specifically to store variable-length sequences. 
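The layout of the format can be sketched with plain Python dicts (the real format is the tf.train.SequenceExample protocol buffer; the field names below are illustrative only):

```python
# A plain-dict sketch of the SequenceExample layout: per-example
# "context" features (fixed-size metadata) plus "feature_lists"
# holding one entry per time step of each variable-length sequence.
def make_sequence_example(token_ids):
    return {
        "context": {"length": len(token_ids)},   # per-example features
        "feature_lists": {                       # per-time-step features
            "tokens": [[t] for t in token_ids],  # one list entry per step
        },
    }

ex = make_sequence_example([7, 8, 9])
```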
The perks of this format are the following:<\/p>\n<ul>\n<li style=\"margin-bottom: 10px;\">It provides efficient storage of multiple sequences per example.<\/li>\n<li style=\"margin-bottom: 10px;\">It supports a variable number of features per time step.<\/li>\n<li style=\"margin-bottom: 10px;\">There is a <a href=\"https:\/\/github.com\/tensorflow\/tensorflow\/blob\/r0.10\/tensorflow\/python\/ops\/parsing_ops.py\" target=\"_blank\">parser<\/a> that reads in a serialized proto string and emits tensors and\/or sparse tensors in accordance with the configuration.<\/li>\n<li style=\"margin-bottom: 10px;\">The SequenceExample protocol is planned to be a \u201cfirst-class citizen\u201d within <a href=\"https:\/\/tensorflow.github.io\/serving\/\" target=\"_blank\">TensorFlow Serving<\/a>. This means that users will be able to generate data at both the training and serving phases.<\/li>\n<\/ul>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-keynote-batching-sequence-dynamic-padding-v12.jpg\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-keynote-batching-sequence-dynamic-padding-v12-1024x573.jpg\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-31941\" \/><\/a><\/center><\/p>\n<p>Once you can read in one sequence at a time, you have to batch the sequences. Padding can be done manually, which is naturally time-consuming. The second option is to allow the batching mechanism to perform the padding. However, you\u2019ll inevitably come to a point where the padding queue adjusts all the sequences to the longest one detected. When scaling up the training model, batch sizes are constantly growing, so you\u2019ll be back to wasting space and computation time. What is the right thing to do, then?<\/p>\n<p>One has to create a number of different queues\u2014<em>buckets<\/em>\u2014running in parallel. 
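The padding-versus-bucketing trade-off can be illustrated in plain Python (a toy sketch, not the TensorFlow queue mechanism):

```python
# Padding a whole batch to its longest sequence wastes space; bucketing
# groups sequences of the same length so little or no padding is needed.
def pad_batch(sequences, pad_value=0):
    """Pad every sequence up to the longest one in the batch."""
    max_len = max(len(s) for s in sequences)
    return [s + [pad_value] * (max_len - len(s)) for s in sequences]

def bucket_by_length(sequences):
    """Group sequences by exact length; within a bucket, no padding."""
    buckets = {}
    for s in sequences:
        buckets.setdefault(len(s), []).append(s)
    return buckets

batch = [[1, 2], [3, 4, 5], [6]]
padded = pad_batch(batch)          # every row padded out to length 3
buckets = bucket_by_length(batch)  # one bucket per sequence length
```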
When you pass a variable-length sequence into this mechanism, the sequence is placed into one of the queues. Each queue only contains sequences of the same length, so there is little padding to do. The function behind this process is <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">tf.contrib.training.bucket_by_sequence_length(..., dynamic_pad=True)<\/code>.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-keynote-batching-sequence-data-bucketing-v1.jpg\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-keynote-batching-sequence-data-bucketing-v1-1024x564.jpg\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-31940\" \/><\/a><\/center><\/p>\n<p>There is also a State Saver used to implement <a href=\"https:\/\/r2rt.com\/styles-of-truncated-backpropagation.html\" target=\"_blank\">Truncated Backpropagation Through Time<\/a>.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-sequence-to-sequence-models-v12.jpg\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-sequence-to-sequence-models-v12-1024x678.jpg\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-31944\" \/><\/a><\/center><\/p>\n<blockquote><p><em>\u201cWhat it means is that you pick a fixed number of time steps you unroll for, and any sequence, which is longer than this number of time steps, gets split up into multiple segments. When you finish your mini batch, any sequences or segments, which are not the completed sequence, the state is saved for you. Some bookkeeping is done in the background, and at the next training iteration, the state is loaded. 
So, you continue sequence processing with that saved state.\u201d \u2014Eugene Brevdo, Google Brain<\/em><\/p><\/blockquote>\n<p>The function behind this mechanism is <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">tf.contrib.training.batch_sequences_with_states(...)<\/code>.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-keynote-batching-data-truncated-backpropagation-v1.jpg\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-keynote-batching-data-truncated-backpropagation-v1-1024x579.jpg\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-31939\" \/><\/a><\/center><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Fully_dynamic_calculation\"><\/span>Fully dynamic calculation<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Eugene highlighted two TensorFlow-based tools\u2014<a href=\"https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/while_loop\" target=\"_blank\">tf.while_loop<\/a> and <a href=\"https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/TensorArray\" target=\"_blank\">tf.TensorArray<\/a>\u2014that enable users to easily create memory-efficient custom loops for handling sequences of unknown length.<\/p>\n<p><em>tf.while_loop<\/em> helps to create dynamic loops and supports backpropagation. 
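Conceptually, the kind of loop these two tools enable can be sketched in eager, plain Python (an illustrative stand-in, not the graph-mode TensorFlow API):

```python
# A conceptual stand-in for tf.while_loop combined with tf.TensorArray:
# iterate over a sequence of unknown length, writing one output "slice"
# per step into a growing array while threading a state through.
def dynamic_loop_sketch(inputs, step_fn, initial_state):
    state = initial_state
    outputs = []              # plays the role of a tf.TensorArray
    for x in inputs:          # plays the role of tf.while_loop
        out, state = step_fn(x, state)
        outputs.append(out)   # one "write" per time step
    return outputs, state

# toy step function: keep a running sum as state and emit it each step
outs, final = dynamic_loop_sketch([1, 2, 3],
                                  lambda x, s: (x + s, x + s), 0)
```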
<em>tf.TensorArray<\/em> is employed to read and write \u201cslices\u201d of tensors, supporting gradient backpropagation, as well.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-sequnce-to-sequence-models-keynote-v12.jpg\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/03\/tensorflow-dev-summit-2017-eugene-brevdo-sequnce-to-sequence-models-keynote-v12-1024x678.jpg\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-31945\" \/><\/a><\/center><\/p>\n<p>Later in his talk, Eugene focused on dynamic decoding and fused RNN cells to drive optimization.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Want_details_Watch_the_video\"><\/span>Want details? Watch the video!<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><center><iframe loading=\"lazy\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/RIR_-Xlbp7s\" frameborder=\"0\" allowfullscreen><\/iframe><\/center><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Further_reading\"><\/span>Further reading<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<ul>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/enabling-multilingual-neural-machine-translation-with-tensorflow\/\" target=\"_blank\">Enabling Multilingual Neural Machine Translation with TensorFlow<\/a><\/li>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/natural-language-processing-and-tensorflow-implementation-across-industries\/\">Natural Language Processing and TensorFlow Implementation Across Industries<\/a><\/li>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/how-tensorflow-can-help-to-perform-natural-language-processing-tasks\/\">How TensorFlow Can Help to Perform Natural Language Processing Tasks<\/a><\/li>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/analyzing-text-and-generating-content-with-neural-networks-and-tensorflow\/\">Analyzing Text and Generating Content 
with Neural Networks and TensorFlow<\/a><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"About_the_expert\"><\/span>About the expert<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><small><a href=\"https:\/\/www.linkedin.com\/in\/eugene-brevdo-99a056a\/\" target=\"_blank\">Eugene Brevdo<\/a> is a software engineer on Google&#8217;s Applied Machine Intelligence team. He primarily works on TensorFlow infrastructure, recurrent neural networks, and sequence-to-sequence models. Eugene&#8217;s research interests include variational inference and reinforcement learning, with applications in speech recognition and synthesis, and biomedical time series. You can check out <a href=\"https:\/\/github.com\/ebrevdo\" target=\"_blank\">his GitHub profile<\/a>.<\/small><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A technology underlying Google Translate<\/p>\n<p>What is the magic that makes Google products so powerful? At TensorFlow Dev Summit 2017, the attendees learnt about the sequence-to-sequence models that back up language-processing apps like Google Translate. Eugene Brevdo of Google Brain discussed the creation of flexible and high-performance sequence-to-sequence models. 
In his [&#8230;]<\/p>\n","protected":false},"author":3,"featured_media":31949,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":"","_links_to":"","_links_to_target":""},"categories":[214],"tags":[748,749],"class_list":["post-31937","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tutorials","tag-machine-learning","tag-tensorflow"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The Magic Behind Google Translate: Sequence-to-Sequence Models and TensorFlow | Altoros<\/title>\n<meta name=\"description\" content=\"This recap explains what it takes to read and batch sequence data, as well as which of the TensorFlow-based tools enable fully dynamic calculations.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The Magic Behind Google Translate: Sequence-to-Sequence Models and TensorFlow | Altoros\" \/>\n<meta property=\"og:description\" content=\"A technology underlying Google Translate What is the magic that makes Google products so powerful? At TensorFlow Dev Summit 2017, the attendees learnt about the sequence-to-sequence models that back up language-processing apps like Google Translate. Eugene Brevdo of Google Brain discussed the creation of flexible and high-performance sequence-to-sequence models. 
In his [...]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/\" \/>\n<meta property=\"og:site_name\" content=\"Altoros\" \/>\n<meta property=\"article:published_time\" content=\"2017-04-11T15:19:11+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2018-08-01T14:07:52+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2017\/04\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow.gif\" \/>\n\t<meta property=\"og:image:width\" content=\"640\" \/>\n\t<meta property=\"og:image:height\" content=\"424\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/gif\" \/>\n<meta name=\"author\" content=\"Sophia Turol\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sophia Turol\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/\",\"url\":\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/\",\"name\":\"The Magic Behind Google Translate: Sequence-to-Sequence Models and TensorFlow | 
Altoros\",\"isPartOf\":{\"@id\":\"https:\/\/www.altoros.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2017\/04\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow.gif\",\"datePublished\":\"2017-04-11T15:19:11+00:00\",\"dateModified\":\"2018-08-01T14:07:52+00:00\",\"author\":{\"@id\":\"https:\/\/www.altoros.com\/blog\/#\/schema\/person\/58194952af19fe7b2b830846e077a58e\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#primaryimage\",\"url\":\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2017\/04\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow.gif\",\"contentUrl\":\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2017\/04\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow.gif\",\"width\":640,\"height\":424},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.altoros.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Th
e Magic Behind Google Translate: Sequence-to-Sequence Models and TensorFlow\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.altoros.com\/blog\/#website\",\"url\":\"https:\/\/www.altoros.com\/blog\/\",\"name\":\"Altoros\",\"description\":\"Insight\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.altoros.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.altoros.com\/blog\/#\/schema\/person\/58194952af19fe7b2b830846e077a58e\",\"name\":\"Sophia Turol\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.altoros.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2019\/05\/trello_card-96x96.jpg\",\"contentUrl\":\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2019\/05\/trello_card-96x96.jpg\",\"caption\":\"Sophia Turol\"},\"description\":\"Sophia Turol is passionate about delivering well-structured articles that cater for picky technical audience. With 3+ years in technical writing and 5+ years in editorship, she enjoys collaboration with developers to create insightful, yet intelligible technical tutorials, overviews, and case studies. Sophie is enthusiastic about deep learning solutions\u2014TensorFlow in particular\u2014and PaaS systems, such as Cloud Foundry.\",\"url\":\"https:\/\/www.altoros.com\/blog\/author\/sophie-turol\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"The Magic Behind Google Translate: Sequence-to-Sequence Models and TensorFlow | Altoros","description":"This recap explains what it takes to read and batch sequence data, as well as which of the TensorFlow-based tools enable fully dynamic calculations.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/","og_locale":"en_US","og_type":"article","og_title":"The Magic Behind Google Translate: Sequence-to-Sequence Models and TensorFlow | Altoros","og_description":"A technology underlying Google Translate What is the magic that makes Google products so powerful? At TensorFlow Dev Summit 2017, the attendees learnt about the sequence-to-sequence models that back up language-processing apps like Google Translate. Eugene Brevdo of Google Brain discussed the creation of flexible and high-performance sequence-to-sequence models. In his [...]","og_url":"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/","og_site_name":"Altoros","article_published_time":"2017-04-11T15:19:11+00:00","article_modified_time":"2018-08-01T14:07:52+00:00","og_image":[{"width":640,"height":424,"url":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2017\/04\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow.gif","type":"image\/gif"}],"author":"Sophia Turol","twitter_misc":{"Written by":"Sophia Turol","Est. 
reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/","url":"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/","name":"The Magic Behind Google Translate: Sequence-to-Sequence Models and TensorFlow | Altoros","isPartOf":{"@id":"https:\/\/www.altoros.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#primaryimage"},"image":{"@id":"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#primaryimage"},"thumbnailUrl":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2017\/04\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow.gif","datePublished":"2017-04-11T15:19:11+00:00","dateModified":"2018-08-01T14:07:52+00:00","author":{"@id":"https:\/\/www.altoros.com\/blog\/#\/schema\/person\/58194952af19fe7b2b830846e077a58e"},"breadcrumb":{"@id":"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#primaryimage","url":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2017\/04\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow.gif","contentUrl":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2017\/04\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow.gif","width":640,"height":4
24},{"@type":"BreadcrumbList","@id":"https:\/\/www.altoros.com\/blog\/the-magic-behind-google-translate-sequence-to-sequence-models-and-tensorflow\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.altoros.com\/blog\/"},{"@type":"ListItem","position":2,"name":"The Magic Behind Google Translate: Sequence-to-Sequence Models and TensorFlow"}]},{"@type":"WebSite","@id":"https:\/\/www.altoros.com\/blog\/#website","url":"https:\/\/www.altoros.com\/blog\/","name":"Altoros","description":"Insight","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.altoros.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.altoros.com\/blog\/#\/schema\/person\/58194952af19fe7b2b830846e077a58e","name":"Sophia Turol","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.altoros.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2019\/05\/trello_card-96x96.jpg","contentUrl":"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2019\/05\/trello_card-96x96.jpg","caption":"Sophia Turol"},"description":"Sophia Turol is passionate about delivering well-structured articles that cater for picky technical audience. With 3+ years in technical writing and 5+ years in editorship, she enjoys collaboration with developers to create insightful, yet intelligible technical tutorials, overviews, and case studies. 
Sophie is enthusiastic about deep learning solutions\u2014TensorFlow in particular\u2014and PaaS systems, such as Cloud Foundry.","url":"https:\/\/www.altoros.com\/blog\/author\/sophie-turol\/"}]}},"_links":{"self":[{"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/posts\/31937","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/comments?post=31937"}],"version-history":[{"count":5,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/posts\/31937\/revisions"}],"predecessor-version":[{"id":31999,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/posts\/31937\/revisions\/31999"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/media\/31949"}],"wp:attachment":[{"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/media?parent=31937"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/categories?post=31937"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.altoros.com\/blog\/wp-json\/wp\/v2\/tags?post=31937"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}