Using TensorFlow and Long Short-Term Memory for Visualized Learning

Below are the videos from the TensorFlow New York meetup, sponsored and organized by Altoros on March 8, 2016.
TensorFlow essentials
In his session, Rafal Jozefowicz, a researcher at Google Brain, provided an overview of TensorFlow, focusing on the following:
- The solution’s key features
- TensorFlow core abstractions
- How to assign devices to Ops with TensorFlow
- Predefined and neural-net-specific Ops
- Visualizing learning with TensorBoard
- How to run a model in production with TensorFlow Serving
- Case study: language modeling
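At the heart of the "core abstractions" topic is TensorFlow's dataflow model: you first build a graph of Ops that consume and produce tensors, and computation runs only when the graph is executed. The toy sketch below illustrates that deferred-execution idea in plain Python; the `Op` and `Constant` classes are purely hypothetical and are not the TensorFlow API.

```python
# Toy illustration of TensorFlow's core idea: build a graph of Ops first,
# then execute it on demand. These class names are illustrative only.

class Op:
    """A node in the dataflow graph; holds inputs and a compute function."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def run(self):
        # Recursively evaluate only the inputs this node needs, then apply
        # its function -- loosely analogous to running a TensorFlow session.
        return self.fn(*(node.run() for node in self.inputs))

class Constant(Op):
    """A leaf node that always yields a fixed value."""
    def __init__(self, value):
        super().__init__(lambda: value)

# Build the graph: nothing is computed at this point.
a = Constant(3.0)
b = Constant(4.0)
total = Op(lambda x, y: x + y, a, b)

# Execute the graph.
print(total.run())  # 7.0
```

The separation between graph construction and execution is also what makes features such as device placement and TensorBoard visualization possible: the runtime can inspect and schedule the whole graph before any computation happens.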
Beyond LSTMs and visualized learning
Keith Davis of Metro-North Railroad provided the hitchhiker’s guide to TensorFlow. He mainly talked about image recognition, reinforcement learning, and Kohonen (self-organizing) maps. He also demonstrated how to implement recurrent neural networks and the long short-term memory (LSTM) architecture in TensorFlow.
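To ground the LSTM discussion, the sketch below computes one step of a standard LSTM cell in plain NumPy, independent of TensorFlow's own implementation. It follows the usual gate equations (input, forget, and output gates plus a candidate cell update); the function and weight names are illustrative, not taken from any library.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell.

    x: input vector; h_prev, c_prev: previous hidden and cell states.
    W: weights of shape (4 * hidden, input_size + hidden); b: bias (4 * hidden,).
    """
    hidden = h_prev.shape[0]
    # Single matrix multiply produces pre-activations for all four gates.
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0 * hidden:1 * hidden])   # input gate
    f = sigmoid(z[1 * hidden:2 * hidden])   # forget gate
    o = sigmoid(z[2 * hidden:3 * hidden])   # output gate
    g = np.tanh(z[3 * hidden:4 * hidden])   # candidate cell update
    c = f * c_prev + i * g                  # new cell state
    h = o * np.tanh(c)                      # new hidden state
    return h, c

# Tiny example: hidden size 2, input size 3, zero weights for determinism.
hidden, inp = 2, 3
W = np.zeros((4 * hidden, inp + hidden))
b = np.zeros(4 * hidden)
h, c = lstm_step(np.ones(inp), np.zeros(hidden), np.zeros(hidden), W, b)
print(h.shape, c.shape)  # (2,) (2,)
```

The same gated update is what TensorFlow's recurrent cells compute internally at each time step; the framework adds graph construction, unrolling over sequences, and automatic differentiation on top.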
Fireside chat: TensorFlow adoption
After the talks, Rafal Jozefowicz, Keith Davis, and Brandon Johnson shared their opinions on the following topics:
- What makes TensorFlow stand out in a crowd as a tool?
- How is TensorFlow applied within Google? How can it be used in other organizations?
- How can the community push TensorFlow as a project?
- What can be done to attract more interest in TensorFlow?
- Recommendations for those getting started with TensorFlow
Join our group to stay tuned for upcoming meetups!
Further reading
- TensorFlow in Action: TensorBoard, Training a Model, and Deep Q-learning
- Monitoring and Visualizing TensorFlow Operations in Real Time with Guild AI
- Text Prediction with TensorFlow and Long Short-Term Memory—in Six Steps
About the speakers