• GitHub Code
  • Multi-Layer Perceptron Neural Networks

    Crash course in the terminology and processes used in the field of multi-layer perceptron artificial neural networks.
    The building blocks of neural networks, including neurons, weights, and activation functions. How the building blocks are used in layers to create networks.
    How networks are trained from example data.
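
    A minimal sketch of how these building blocks map onto Keras code; the layer sizes and the four-feature input are illustrative choices, not part of any particular tutorial.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Dense

model = Sequential([
    Input(shape=(4,)),               # four input features
    Dense(8, activation="relu"),     # hidden layer: 8 neurons, each with a weight per input plus a bias, ReLU activation
    Dense(1, activation="sigmoid"),  # output layer: 1 neuron, sigmoid activation for a binary outcome
])
model.summary()
```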

  • GitHub Code
  • Life Cycle of Neural Network Models

    Discovering the step-by-step life cycle for creating, training, and evaluating deep learning neural networks in Keras, and how to make predictions with a trained model.
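
    A compact sketch of the five-step life cycle (define, compile, fit, evaluate, predict) on synthetic data; the shapes, layer sizes, and training settings are illustrative.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Dense

X = np.random.rand(100, 8)                 # 100 samples, 8 features
y = (X.sum(axis=1) > 4).astype("float32")  # toy binary target

# 1. define the network
model = Sequential([Input(shape=(8,)), Dense(12, activation="relu"), Dense(1, activation="sigmoid")])
# 2. compile: choose the loss, optimizer, and metrics
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# 3. fit the model on training data
model.fit(X, y, epochs=10, batch_size=16, verbose=0)
# 4. evaluate the model (here on the same data, for brevity)
loss, acc = model.evaluate(X, y, verbose=0)
# 5. make predictions with the trained model
probs = model.predict(X[:5], verbose=0)
print(acc, probs.ravel())
```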

  • GitHub Code
  • Introduction to Convolutional Neural Networks

    Discovering Convolutional Neural Networks for deep learning, also called ConvNets or CNNs.
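
    A minimal ConvNet sketch showing the usual convolution, pooling, flatten, and dense stack; the 28x28 grayscale input and layer sizes are illustrative.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Input(shape=(28, 28, 1)),
    Conv2D(32, (3, 3), activation="relu"),  # learn 32 local feature detectors
    MaxPooling2D((2, 2)),                   # downsample the feature maps
    Flatten(),                              # flatten feature maps into a vector
    Dense(10, activation="softmax"),        # class probabilities
])
model.summary()
```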

  • GitHub Code
  • Recurrent Neural Networks

    The limitations of Multilayer Perceptrons that are addressed by recurrent neural networks.
    The problems that must be addressed to make Recurrent Neural Networks useful.
    The details of the Long Short-Term Memory networks used in applied deep learning.
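
    A minimal sketch of an LSTM layer reading a sequence in Keras; the sequence length, feature count, and number of memory units are illustrative.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

model = Sequential([
    Input(shape=(10, 1)),   # (timesteps, features): a sequence of 10 steps with 1 feature each
    LSTM(32),               # 32 memory units; returns the final hidden state after reading the sequence
    Dense(1),               # predict a single value from that state
])
model.summary()
```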

  • GitHub Code
  • Visualize the Performance of Deep Learning Models

    In this post you will discover how you can review and visualize the performance of deep learning models over time during training in Python with Keras.
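
    A sketch of the idea: fit() returns a History object whose per-epoch metrics can be plotted with matplotlib; the synthetic data, model, and settings are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Dense

X = np.random.rand(200, 8)
y = (X.sum(axis=1) > 4).astype("float32")

model = Sequential([Input(shape=(8,)), Dense(12, activation="relu"), Dense(1, activation="sigmoid")])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# the History object records loss and metrics for every epoch
history = model.fit(X, y, validation_split=0.2, epochs=20, verbose=0)

plt.plot(history.history["accuracy"], label="train accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("epoch")
plt.legend()
plt.show()
```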

  • GitHub Code
  • Regularization in Deep Learning Models

    How the dropout regularization technique works.
    How to use dropout on your input layers.
    How to use dropout on your hidden layers.
    How to tune the dropout level on your problem.
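
    A sketch of dropout applied to both the visible (input) layer and a hidden layer; the rates of 0.2 and 0.5 are common starting points rather than recommendations.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout

model = Sequential([
    Input(shape=(8,)),
    Dropout(0.2),                    # dropout on the input (visible) layer
    Dense(16, activation="relu"),
    Dropout(0.5),                    # dropout on the hidden layer
    Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.summary()
```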

  • GitHub Code
  • Grid Search Hyperparameters for Deep Learning Models

    How to wrap Keras models for use in scikit-learn and how to use grid search.
    How to grid search common neural network parameters such as learning rate, dropout rate, epochs and number of neurons.
    How to define your own hyperparameter tuning experiments on your own projects.
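
    A sketch of the wrapping-and-grid-search idea, assuming the scikeras package (the current replacement for the old keras.wrappers.scikit_learn module); the synthetic data, the neurons argument, and the parameter values are illustrative.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from scikeras.wrappers import KerasClassifier
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Dense

X = np.random.rand(100, 8)
y = (X.sum(axis=1) > 4).astype(int)

def create_model(neurons=8):
    # build and compile a small model; `neurons` is the hyperparameter we want to search
    model = Sequential([Input(shape=(8,)), Dense(neurons, activation="relu"), Dense(1, activation="sigmoid")])
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    return model

# wrap the Keras model so scikit-learn can treat it as an estimator
clf = KerasClassifier(model=create_model, epochs=10, batch_size=16, verbose=0)

# model__ prefixed parameters are routed to create_model; the rest go to fit()
param_grid = {"model__neurons": [8, 16], "batch_size": [8, 16]}
grid = GridSearchCV(estimator=clf, param_grid=param_grid, cv=3)
result = grid.fit(X, y)
print(result.best_score_, result.best_params_)
```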

  • GitHub Code
  • Handwritten Digit Recognition

    How to load the MNIST dataset in Keras. How to develop and evaluate a baseline neural network model for the MNIST problem.
    How to implement and evaluate a simple Convolutional Neural Network for MNIST.
    How to implement a near state-of-the-art deep learning model for MNIST.
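
    A sketch of a simple CNN on MNIST as loaded through Keras; the architecture and number of epochs are illustrative and not the near state-of-the-art model.

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.utils import to_categorical

# load MNIST and prepare pixels (scaled to [0, 1]) and one-hot labels
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
X_test = X_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_train, y_test = to_categorical(y_train), to_categorical(y_test)

model = Sequential([
    Input(shape=(28, 28, 1)),
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=5, batch_size=128, verbose=2)
```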

  • GitHub Code
  • Object Recognition with Convolutional Neural Networks

    About the CIFAR-10 object recognition dataset and how to load and use it in Keras.
    How to create a simple Convolutional Neural Network for object recognition.
    How to lift performance by creating deeper Convolutional Neural Networks.
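
    A sketch of loading CIFAR-10 through Keras and preparing the pixel and label data; the normalisation choice is an illustrative convention.

```python
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical

(X_train, y_train), (X_test, y_test) = cifar10.load_data()
print(X_train.shape)  # (50000, 32, 32, 3): 32x32 colour images across 10 object classes

# scale pixel values to [0, 1] and one-hot encode the class labels
X_train = X_train.astype("float32") / 255.0
X_test = X_test.astype("float32") / 255.0
y_train, y_test = to_categorical(y_train), to_categorical(y_test)
```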

  • GitHub Code
  • Predict Sentiment From Movie Reviews

    About the IMDB sentiment analysis problem for natural language processing and how to load it in Keras.
    How to use word embedding in Keras for natural language problems.
    How to develop and evaluate a multi-layer perceptron model for the IMDB problem.
    How to develop a one-dimensional convolutional neural network model for the IMDB problem.
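
    A sketch of a word embedding feeding a one-dimensional convolutional model on the Keras IMDB data; the vocabulary size, sequence length, and layer sizes are illustrative.

```python
from tensorflow.keras.datasets import imdb
from tensorflow.keras.utils import pad_sequences
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Embedding, Conv1D, GlobalMaxPooling1D, Dense

top_words, max_len = 5000, 500
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
X_train = pad_sequences(X_train, maxlen=max_len)   # pad/truncate reviews to a fixed length
X_test = pad_sequences(X_test, maxlen=max_len)

model = Sequential([
    Input(shape=(max_len,)),
    Embedding(top_words, 32),               # learn a 32-dimensional embedding per word
    Conv1D(32, 3, activation="relu"),       # one-dimensional convolution over word positions
    GlobalMaxPooling1D(),
    Dense(1, activation="sigmoid"),         # positive/negative sentiment
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=2, batch_size=128, verbose=2)
```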

  • GitHub Code
  • Save and Load Your Keras Deep Learning Models

    We will discover how to save Keras models to file and load them up again to make predictions.
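
    A sketch of the save-and-reload round trip, assuming the native .keras file format (older Keras versions use HDF5 .h5 files or separate JSON architecture and weight files); the toy model and data are illustrative.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import load_model

X = np.random.rand(50, 4)
y = (X.sum(axis=1) > 2).astype("float32")

model = Sequential([Input(shape=(4,)), Dense(8, activation="relu"), Dense(1, activation="sigmoid")])
model.compile(loss="binary_crossentropy", optimizer="adam")
model.fit(X, y, epochs=5, verbose=0)

model.save("model.keras")                 # architecture, weights, and optimizer state in one file
restored = load_model("model.keras")      # recreate the exact model later
print(restored.predict(X[:3], verbose=0)) # use the reloaded model to make predictions
```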

  • GitHub Code
  • Text Generation With LSTM Recurrent Neural Networks

    Where to download a free corpus of text that you can use to train generative text models. How to frame text sequences as a prediction problem for a recurrent neural network generative model. How to develop an LSTM to generate plausible text sequences for a given problem.
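
    A sketch of the framing step: raw text is cut into fixed-length character windows whose next character is the prediction target; the toy text and window length are illustrative, and a real corpus would be loaded from file.

```python
import numpy as np

text = "the quick brown fox jumps over the lazy dog " * 20
chars = sorted(set(text))
char_to_int = {c: i for i, c in enumerate(chars)}   # map each character to an integer

seq_length = 10
X, y = [], []
for i in range(len(text) - seq_length):
    X.append([char_to_int[c] for c in text[i:i + seq_length]])   # input: a window of 10 characters
    y.append(char_to_int[text[i + seq_length]])                  # target: the next character

X = np.array(X).reshape(-1, seq_length, 1) / float(len(chars))   # (samples, timesteps, features), scaled
y = np.array(y)
print(X.shape, y.shape)
```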

  • GitHub Code
  • Understanding Stateful LSTM Recurrent Neural Networks

    How to develop a naive LSTM network for a sequence prediction problem.
    How to carefully manage state through batches and features with an LSTM network.
    How to manually manage state in an LSTM network for stateful prediction.
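
    A sketch of manual state management: with stateful=True the batch size must be fixed in the input shape, shuffling is disabled, and the carried state is cleared explicitly between passes over the data; the shapes and random data are illustrative.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

batch_size, timesteps, features = 1, 3, 1
X = np.random.rand(20, timesteps, features)
y = np.random.rand(20, 1)

lstm = LSTM(16, stateful=True)       # state persists across batches until explicitly reset
model = Sequential([
    Input(batch_shape=(batch_size, timesteps, features)),   # stateful LSTMs need a fixed batch size
    lstm,
    Dense(1),
])
model.compile(loss="mse", optimizer="adam")

for _ in range(5):
    model.fit(X, y, batch_size=batch_size, epochs=1, shuffle=False, verbose=0)
    lstm.reset_states()              # manually clear the carried state between passes over the data
```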

  • GitHub Code
  • A Gentle Introduction to Backpropagation Through Time

    What Backpropagation Through Time is and how it relates to the Backpropagation training algorithm used by Multilayer Perceptron networks. The motivations that lead to the need for Truncated Backpropagation Through Time, the most widely used variant in deep learning for training LSTMs. A notation for thinking about how to configure Truncated Backpropagation Through Time and the canonical configurations used in research and by deep learning libraries.
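
    A sketch of the practical consequence in Keras: the truncation length is set by how the data is framed, that is, by the number of timesteps per input sequence; the series and the window length k are illustrative.

```python
import numpy as np

series = np.arange(1000, dtype="float32")   # one long sequence of 1,000 observations
k = 25                                      # gradients will flow back across at most k timesteps

n = len(series) // k
X = series[: n * k].reshape(n, k, 1)        # (samples, timesteps, features) ready for an LSTM
print(X.shape)                              # (40, 25, 1)
```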

  • GitHub Code
  • Attention in Long Short-Term Memory RNN

    The limitation of the encoder-decoder architecture and its fixed-length internal representation. The attention mechanism that overcomes this limitation by allowing the network to learn where to pay attention in the input sequence for each item in the output sequence. Five applications of the attention mechanism with recurrent neural networks in domains such as text translation, speech recognition, and more.
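
    One way to sketch the idea in Keras, using the built-in dot-product Attention layer as a simplified stand-in for the additive attention of the original papers; the sequence lengths and layer sizes are illustrative.

```python
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, LSTM, Dense, Attention, Concatenate, TimeDistributed

enc_in = Input(shape=(5, 1))
enc_seq = LSTM(32, return_sequences=True)(enc_in)   # one encoder state per input step
dec_in = Input(shape=(3, 1))
dec_seq = LSTM(32, return_sequences=True)(dec_in)   # decoder states act as the queries

context = Attention()([dec_seq, enc_seq])           # attend over the encoder states for each output step
out = TimeDistributed(Dense(1))(Concatenate()([dec_seq, context]))

model = Model([enc_in, dec_in], out)
model.summary()
```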

  • GitHub Code
  • Encoder-Decoder Long Short-Term Memory Networks

    The challenge of sequence-to-sequence prediction. The Encoder-Decoder architecture and the limitation in LSTMs that it was designed to address. How to implement the Encoder-Decoder LSTM model architecture in Python with Keras.
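
    A sketch of the Encoder-Decoder LSTM in Keras, using RepeatVector to bridge the fixed-length encoding to the output sequence; the sequence lengths and layer sizes are illustrative.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

n_in, n_out, features = 5, 3, 1
model = Sequential([
    Input(shape=(n_in, features)),
    LSTM(64),                        # encoder: read the input sequence into a fixed-length vector
    RepeatVector(n_out),             # repeat that vector once per output timestep
    LSTM(64, return_sequences=True), # decoder: produce one hidden state per output timestep
    TimeDistributed(Dense(1)),       # map each decoder state to an output value
])
model.compile(loss="mse", optimizer="adam")
model.summary()
```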

  • GitHub Code
  • Introduction to Generative Long Short-Term Memory Networks

    About generative models, with a focus on generative models for text called language modeling. Examples of applications where LSTM Generative models have been used. Examples of how to model text for generative models with LSTMs.

  • GitHub Code
  • TimeDistributed Layer for Long Short-Term Memory Networks

    How to design a one-to-one LSTM for sequence prediction. How to design a many-to-one LSTM for sequence prediction without the TimeDistributed Layer. How to design a many-to-many LSTM for sequence prediction with the TimeDistributed Layer.
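
    A sketch contrasting a many-to-one LSTM with a many-to-many LSTM that uses the TimeDistributed wrapper; the sequence length and layer sizes are illustrative.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense, TimeDistributed

# many-to-one: a single output for the whole 5-step input sequence
many_to_one = Sequential([Input(shape=(5, 1)), LSTM(8), Dense(1)])

# many-to-many: return_sequences=True keeps one hidden state per timestep, and
# TimeDistributed applies the same Dense layer to each of them
many_to_many = Sequential([
    Input(shape=(5, 1)),
    LSTM(8, return_sequences=True),
    TimeDistributed(Dense(1)),
])

many_to_one.summary()
many_to_many.summary()
```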