Under-the-Hood Mechanisms of Neural Networks with TensorFlow

by Sophia Turol, September 1, 2016
Neural networks are gaining popularity for taking image recognition and text processing to the next level.


Neural networks are actively applied to improve speech recognition, facial identification, emotion recognition, sentiment analysis, disease diagnosis, etc. At the recent TensorFlow meetup in Seattle, the attendees were plunged into the world of convolutional and recurrent neural networks, their under-the-hood mechanisms, and their usage with TensorFlow, learning some handy tricks along the way.


All things neural

In his session, Nick McClure of PayScale took a close look at neural networks. He introduced the audience to a basic unit of a neural network—an operational gate—and explained how to make use of multiple gates. Then, Nick moved on to:
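To make the operational-gate idea concrete, here is a minimal sketch in plain NumPy-style Python of a single multiplicative gate, f(a) = a · x, whose parameter is tuned by gradient descent. The target values and learning rate are illustrative assumptions, not Nick's exact example; TensorFlow automates the gradient computation shown by hand here.

```python
# A multiplicative "operational gate": f(a) = a * x.
# We tune the parameter a by gradient descent so that f(5) is close to 50.
# All numbers here are illustrative assumptions.
x_val, target = 5.0, 50.0
a = 1.0    # initial parameter value
lr = 0.01  # learning rate

for _ in range(100):
    output = a * x_val
    grad = 2 * (output - target) * x_val  # d/da of the squared-error loss
    a -= lr * grad                        # gradient-descent update

print(round(a, 2))  # a converges to 10.0, since 10 * 5 = 50
```

Chaining several such gates, each with its own local gradient, is exactly what backpropagation through a full network does.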

  • loss functions
  • learning rate (it determines how much of a change can be applied to model parameters)
  • logistic regression as a neural network
  • activation functions
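The items above come together in one compact example: logistic regression written as a one-neuron network with a sigmoid activation, a cross-entropy loss, and a fixed learning rate scaling each update. The toy 1-D data set below is a hypothetical illustration, and the loop shows by hand what a TensorFlow optimizer would do automatically.

```python
import numpy as np

# Logistic regression as a one-neuron neural network.
# Hypothetical 1-D data: class 0 centered at -2, class 1 centered at +2.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b, lr = 0.0, 0.0, 0.1  # weight, bias, learning rate

def sigmoid(z):
    """Sigmoid activation function."""
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    p = sigmoid(w * x + b)        # forward pass through the "neuron"
    grad_w = np.mean((p - y) * x) # gradient of cross-entropy loss w.r.t. w
    grad_b = np.mean(p - y)       # ...and w.r.t. b
    w -= lr * grad_w              # learning rate controls the step size
    b -= lr * grad_b

acc = np.mean((sigmoid(w * x + b) > 0.5) == y)
print(acc)  # the classes are well separated, so accuracy should be high
```

A larger learning rate takes bigger steps per update but risks overshooting the minimum; a smaller one converges more slowly but more reliably.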

Nick noted that neural networks can contain any number of "hidden layers," so they can be made as deep as needed. He also mentioned that a network can have as many inputs and outputs as necessary.
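The point about arbitrary inputs, hidden widths, and outputs can be seen in a forward pass through a one-hidden-layer network, where every size is just a matrix shape you choose. The sizes and random weights below are arbitrary illustrations.

```python
import numpy as np

# Forward pass through a network with one hidden layer.
# The layer sizes are arbitrary choices, not prescribed values.
rng = np.random.default_rng(1)
n_inputs, n_hidden, n_outputs = 4, 8, 3

W1 = rng.normal(size=(n_inputs, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_outputs))  # hidden -> output weights
b2 = np.zeros(n_outputs)

x = rng.normal(size=(5, n_inputs))     # a batch of 5 examples
hidden = np.maximum(0, x @ W1 + b1)    # ReLU activation in the hidden layer
out = hidden @ W2 + b2                 # n_outputs values per example

print(out.shape)  # (5, 3): one row per example, one column per output
```

Stacking more `W, b` pairs between input and output is all it takes to make the network deeper.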

Nick also reviewed the perks of TensorFlow as a library for deep learning.

Overviewing convolutional neural networks (CNNs), Nick touched upon reducing the number of parameters and demonstrated some tricks to try out: pooling and dropout. He also talked about using a regional CNN together with a recurrent neural network for image captioning.
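The two tricks mentioned above can be sketched in a few lines of NumPy. Max pooling shrinks a feature map (and hence the downstream parameter count) by keeping only the largest value in each block; dropout randomly zeroes activations during training so the network cannot rely on any single unit. The 4×4 "feature map" and the 0.5 keep probability below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.normal(size=(4, 4))  # a toy 4x4 feature map

# 2x2 max pooling: split into 2x2 blocks and keep each block's maximum,
# reducing the map from 4x4 to 2x2.
pooled = img.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled.shape)  # (2, 2)

# Dropout: zero activations with probability 1 - keep_prob and rescale
# the survivors so the expected activation is unchanged ("inverted" dropout).
keep_prob = 0.5
mask = rng.random(img.shape) < keep_prob
dropped = img * mask / keep_prob
```

At inference time dropout is switched off; the rescaling during training is what keeps the two regimes consistent.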

You can find Nick's "TensorFlow Machine Learning Cookbook" on his GitHub profile.


Want details? Watch the video!


Join the meetup group to get informed about the upcoming events.


Related slides


Further reading


About the expert

Nick McClure is a senior data scientist for PayScale, where he works on machine learning and natural language processing algorithms. Prior to joining PayScale, he worked on the Zestimate team at Zillow and as a gaming statistician and data scientist at Caesars Entertainment in Las Vegas. He has worked on various topics, such as house price estimation, image recognition, casino game design, optimal slot machine placement, and predicting customer worth. He received a National Science Foundation Integrative Graduate Education and Research Traineeship fellowship for studying infectious disease while working on his master’s and Ph.D. in applied mathematics from the University of Montana.
