What Is Behind Deep Reinforcement Learning and Transfer Learning with TensorFlow?
At the recent TensorFlow meetup in Madrid, the speakers explored the concept of deep reinforcement learning and explained how to train a model when little data is available.
The concept of deep reinforcement learning
Gema Parreño Piqueras of Tetuan Valley started her session by explaining what deep reinforcement learning is. According to her, it combines two subfields of artificial intelligence: reinforcement learning, which handles decision making by "teaching through rewards," and deep learning, which draws on the computational power of mathematics.
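To make the "teaching through rewards" idea concrete, here is a toy sketch in plain Python (my illustration, not code from the talk): an agent repeatedly chooses between two actions, receives a reward, and gradually learns to prefer the action that pays off more on average.

```python
import random

def run_bandit(steps=2000, epsilon=0.1, seed=0):
    """Toy two-armed bandit: learn reward estimates from experience."""
    rng = random.Random(seed)
    reward_prob = [0.3, 0.8]   # hypothetical payout probability per action
    estimates = [0.0, 0.0]     # running average reward per action
    counts = [0, 0]
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known action.
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = 0 if estimates[0] > estimates[1] else 1
        reward = 1.0 if rng.random() < reward_prob[action] else 0.0
        counts[action] += 1
        # Incremental update of the running average for this action.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

est = run_bandit()  # est[1] ends up higher, since action 1 pays off more
```

The agent is never told which action is better; it discovers that purely through the rewards it observes, which is the essence of reinforcement learning.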
"Artificial intelligence uses the advantages of both the reward method and computational power maths gives you." —Gema Piqueras, Tetuan Valley
Gema also introduced the audience to the concept of an artificial neuron: the inputs are combined by a transfer function (typically a weighted sum), an activation function is then applied to the result, and you get an output.
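As a minimal sketch of that definition (my illustration, not Gema's code): a neuron computes a weighted sum of its inputs and passes the result through an activation function, here the sigmoid.

```python
import math

def neuron(inputs, weights, bias):
    # Transfer function: weighted sum of the inputs plus a bias term.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation function: sigmoid squashes z into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([1.0, 2.0], [0.5, -0.25], bias=0.1)  # sigmoid(0.1) ≈ 0.525
```

Stacking many such neurons in layers, with the weights learned from data, is what turns this simple unit into a deep network.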
Moving to TensorFlow’s architecture, she explored its major components:
- Tensors, the data structures that hold the data
- Graphs, which represent the computational process
- Variables, which help to define the structure of the neural model
- Neural networks, which deal with complex tasks
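To illustrate how tensors, graphs, and variables fit together without requiring TensorFlow itself, here is a toy computational graph in plain Python (a sketch of the idea, not TensorFlow's actual API):

```python
class Node:
    """A node in a tiny computational graph: an operation plus its inputs."""
    def __init__(self, op, *inputs):
        self.op = op
        self.inputs = inputs

    def eval(self):
        # Recursively evaluate the input nodes, then apply this node's op.
        return self.op(*(n.eval() for n in self.inputs))

def constant(value):
    # A leaf node that just returns its stored value, like a tensor.
    return Node(lambda: value)

# Build the graph y = (a + b) * c first, then run it, mirroring
# TensorFlow's define-then-run style of computation.
a, b, c = constant(2.0), constant(3.0), constant(4.0)
added = Node(lambda x, y: x + y, a, b)
y = Node(lambda x, z: x * z, added, c)
result = y.eval()  # (2 + 3) * 4 = 20.0
```

Separating graph construction from execution is what lets a framework like TensorFlow optimize the computation and run it on different devices.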
Watch the video for more details.
Below is Gema’s full presentation from the meetup.
Training a model with little data
Watch the video to get more insights.
You can also look through Gorka’s presentation below.
Join our group to stay tuned for upcoming events.
- Deep Learning in Healthcare, Finance, and IIoT: TensorFlow Use Cases
- Mastering Game Development with Deep Reinforcement Learning and GPUs
About the experts