Practical Deep Learning
Hands-on course on deep learning training and inference for engineers and programmers using Python and Keras (TensorFlow)
Standard Level - 5 days - COURSE UPDATED JULY 2019
Practical Deep Learning is designed to meet the needs of competent professionals, already working as engineers or computer programmers, who are looking for a solid introduction to the subject of deep learning training and inference combined with sufficient practical, hands-on training to enable them to start implementing their own deep learning systems.
Many vendors now offer deep neural network models off-the-shelf for you to use in your own applications. What the vendors might not tell you is that the level of expertise required to collect and curate a training dataset and then adapt and re-train a pre-existing model for your own application is not so different from the level of expertise you would need to create such a model from scratch. This 5-day class will teach you all you need to get started with confidence, reducing your learning curve from months to days.
The course is based on the Python programming language and makes extensive use of the Keras neural network API, the official high-level API of the TensorFlow machine learning framework, as well as Numpy, Matplotlib, Pandas, Scikit-learn, and TensorBoard. Although based on Keras, the principles and concepts taught in this training course are equally applicable in any deep learning library or framework.
Practical Deep Learning is delivered as a 5-day public face-to-face training course. Workshops comprise approximately 50% of class time and are based around carefully designed hands-on exercises to reinforce learning.
Also available now:
Essential Python: 2 days: For professionals working in electronic systems hardware and embedded software. View full course details »
Why choose this particular course?
There are plenty of materials and training options out there for machine learning and deep learning, everything from e-books and blogs through online tutorials and MOOCs to the formal courses offered by universities and corporations. Some of these are excellent, and they each have their place. The challenge is knowing where to start and having the time to research and study what you need to know. As a specialist provider of technical training, Doulos has been able to condense the essential knowledge and skills you need to get started with deep learning into this one 5-day training course.
Who should attend?
Engineers, programmers, or other people with a technical or mathematical background who want a comprehensive, hands-on introduction to the subject of deep learning.
What will you learn?
- The basics of Python programming using Jupyter Notebook
- The principles and practices of supervised learning and deep learning
- How to use neural networks to solve regression and classification problems
- How to use unsupervised learning for visualization and dimensionality reduction
- How to use convolutional neural networks for image classification
- How to use Keras and TensorBoard
- How to perform inference using pre-built neural network models
- How to take advantage of pre-trained neural network models using transfer learning
- How to prepare and curate datasets for deep learning
- Deep learning concepts and techniques in current use such as gradient descent algorithms, learning curves, regularization, dropout, batch normalization, the Inception architecture, residual networks, pruning and quantization, the MobileNet architecture, word embeddings, and recurrent neural networks
- An introduction to generative adversarial networks and object detection
- The principles behind the growing number of vendor-specific flows now available for deploying neural network models for inference in the cloud and in embedded devices
What this course is not!
This course is not a high-level overview of deep learning for managers, business developers, or end users of machine learning technology, although attendees will certainly gain an excellent overview of deep learning by attending this course. This is a detailed, hands-on course.
This is not a course in machine learning or artificial intelligence as such, but in deep learning. Deep learning (neural networks) is one specific branch of machine learning, which is a branch of artificial intelligence.
This is not a course in mathematics, statistics, or data science. This course assumes you already have the necessary mathematical background (see prerequisites below).
This course is not for professional mathematicians or machine learning researchers. It is for programmers and implementers.
Pre-requisites
Attendees should be experienced and competent in at least one object-oriented programming language (e.g. Python, Ruby, C++, C#, Java, or SystemVerilog). Prior programming experience with Python would be useful but is not required.
Attendees should be familiar with the following mathematical concepts:
- Continuous functions of one or more variables, linear and non-linear functions, exponential functions
- Very basic differential calculus - derivatives and partial derivatives
- Very basic statistics - mean, standard deviation, variance, probability, histograms, normal distribution
- Basic linear algebra - vectors, matrices, summation, dot product
Attendees do not need a university degree in mathematics. The emphasis of this course is on practical computer programming, not on mathematical theory. This course does not require the attendee to write or solve mathematical equations, nor does it require the attendee to read or understand any mathematical proofs. However, attendees do need a willingness to immerse themselves in what is essentially a mathematical topic.
Training materials
Doulos training materials are renowned for being the most comprehensive and user friendly available. Their style, content and coverage are unique, and have made them sought-after resources in their own right. You get to keep the following materials at the end of the course:
- A fully indexed set of class notes that form a complete reference manual
- Jupyter Notebooks containing complete working code for all of the neural networks presented during the training class, which you can use after the class for revision or as the basis for your own networks.
Structure and content
DAY 1
Introduction to Deep Learning
AI versus ML versus Deep Learning • "Classical" Machine Learning • Deep Learning • Supervised Learning • Unsupervised Learning • Neural Networks • Cloud versus Edge Computing • Applications • Libraries/Frameworks for Training • Cloud Platforms for Training and Inference • Vendor Platforms for Deploying ML / DL
Jupyter Notebook
AWS Deep Learning AMI • Connect to Remote Machine using SSH • Using Jupyter Notebook • Basic Markdown • Output and Evaluating Expressions • Expanding, Collapsing, Hiding Output • Menus and Tool Bar
Python Basics
Functions, Variables, and Values • Control Statements • Operators • Imports • Instance Objects • Kwd arguments • Range • Format • Tuples and Lists • Functions returning Functions • Numpy Array • Multi-dimensional Numpy array • Reshape • Slices • Elementwise and Scalar Operations • One-Hot Encoding • Reduction Operations • matplotlib.pyplot • List Operations
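To give a flavour of the Numpy material listed above, one-hot encoding can be written as a single indexing operation. This is an illustrative sketch, not an extract from the course notes; the data values are hypothetical:

```python
import numpy as np

# Integer class labels for five samples (hypothetical data).
labels = np.array([0, 2, 1, 2, 0])
num_classes = 3

# One-hot encode: indexing an identity matrix by the labels
# gives one row per sample with a 1 in that sample's class column.
one_hot = np.eye(num_classes)[labels]

print(one_hot)
```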
Linear Regression
Regression Task • Defining a Model • Cost / Loss / Error Function • Cost as a Function of Trainable Parameters • Mathematical Optimization • Contour Plot of Cost Function • Gradient Descent • Stochastic Gradient Descent
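As a sketch of the gradient descent topic above (illustrative only; learning rate and data are hypothetical), fitting a straight line by minimizing mean squared error takes only a few lines of plain Python:

```python
# Fit y = w*x + b by gradient descent on mean squared error.
# Data generated from y = 2x + 1, so w and b should approach 2 and 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0
lr = 0.05  # learning rate (hypothetical choice)

for _ in range(2000):
    n = len(xs)
    # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```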
Keras
TensorFlow and Keras • Versions of Keras • A Keras Sequential Model • Datasets • Gradient Descent in Keras • History Object • Plotting Progress • Model, Optimizer, and Weights
DAY 2
Logistic Regression
Classification Task • One-Hot Labels • The Hypothesis or Model • Calculating the Cost Function • Converting Scores to Probabilities • The Softmax Function • Compare using Cross-Entropy • Multinomial Logistic Regression • Plotting the Decision Boundary • Choosing the Loss Function
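The softmax and cross-entropy topics above can be sketched in plain Python (an illustrative example with hypothetical scores, not course material):

```python
import math

def softmax(scores):
    # Subtract the max score for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, one_hot):
    # -sum(target * log(predicted)) for a single sample.
    return -sum(t * math.log(p) for t, p in zip(one_hot, probs))

scores = [2.0, 1.0, 0.1]                 # raw network outputs (logits)
probs = softmax(scores)                  # probabilities summing to 1
loss = cross_entropy(probs, [1, 0, 0])   # true class is class 0

print([round(p, 3) for p in probs], round(loss, 3))
```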
Neural Networks
Neural Networks • An Artificial Neuron • Common Activation Functions • A Deep Neural Network • Forward and Back-Propagation • Kinds of Neural Network
Non-linear Regression
Linear Regression • A Non-Linear Polynomial Model • The Rectified Linear Unit (ReLU) • Normalizing the Data • Exploding and Vanishing Gradients • Varying the Weight Distribution • Xavier Glorot Initialization • Non-Linear Keras Model • The Magic of Deep Neural Networks
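As an illustration of the ReLU topic above (weights are hypothetical, chosen by hand rather than trained), a linear combination of two ReLU units already produces a non-linear, piecewise-linear function:

```python
def relu(x):
    return max(0.0, x)

# A tiny one-hidden-layer network: two ReLU units combined linearly
# approximate a simple non-linear "bend" (hand-picked weights).
def model(x):
    h1 = relu(1.0 * x + 0.0)   # hinge at x = 0
    h2 = relu(1.0 * x - 1.0)   # hinge at x = 1
    return 0.5 * h1 + 0.5 * h2

print(model(-1.0), model(0.5), model(2.0))  # 0.0 0.25 1.5
```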
Non-linear Classification
A Non-Linear Decision Boundary • Decision Boundary and Softmax • Non-Linear Neural Network for Classification • From ReLU to Decision Boundary • Softmax
Overfitting and Regularization
Training versus Test Datasets • Scikit-learn • Learning Curves • Matching the Network to the Problem • How to Reduce Overfitting? • More Data? • Regularization (L2 Regularization) • Choosing Lambda • L2 versus L1 Regularization
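The L2 regularization topic above amounts to adding a weight penalty to the loss; a minimal sketch with hypothetical values:

```python
# Mean squared error plus an L2 penalty on the weights.
# lam (lambda) trades off data fit against weight size.
def l2_regularized_loss(preds, targets, weights, lam):
    n = len(preds)
    mse = sum((p - t) ** 2 for p, t in zip(preds, targets)) / n
    penalty = lam * sum(w ** 2 for w in weights)
    return mse + penalty

# Hypothetical predictions, targets, weights, and lambda.
loss = l2_regularized_loss([1.0, 2.0], [1.5, 2.0], [3.0, -4.0], 0.01)
print(loss)  # 0.125 data term + 0.25 penalty = 0.375
```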
DAY 3
Stochastic Gradient Descent
Full-Batch vs Stochastic Gradient Descent • Mini-Batches • The Landscape of the Cost Function • Stationary Points • Learning Rate • Learning Rate Decay Schedule • Momentum • Nesterov Momentum • Adaptive Per-Parameter Learning Rates • Adam Algorithm
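The momentum topic above can be sketched on a one-dimensional toy problem (learning rate and momentum coefficient are hypothetical choices):

```python
# Gradient descent with classical momentum on f(w) = w^2.
# The velocity term accumulates a decaying sum of past gradients.
def grad(w):
    return 2 * w  # derivative of w^2

w = 5.0
v = 0.0
lr, beta = 0.1, 0.9  # hypothetical hyperparameters

for _ in range(300):
    v = beta * v - lr * grad(w)
    w = w + v

print(abs(w) < 1e-3)  # oscillates, then converges toward w = 0
```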
Splitting the Dataset
The MNIST Dataset • A Deep Neural Network for Classification • Hyperparameters • Training, Validation, and Test Datasets • K-Fold Cross-Validation • Validation • Choose a Single Scalar Metric • Imbalanced Classes or Rare Events • ROC Curve • Trading off Precision and Recall
Convolutional Neural Networks
Convolution • Patch Size and Stride • Valid Padding versus Same Padding • Network Size • Multiple Feature Maps • Pooling • Stride Versus Receptive Field • Hierarchical Feature Detection • Filter Visualization • Number of Parameters and Values • Plotting Convolution Filters
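To illustrate the convolution and valid-padding topics above, here is a plain-Python sketch (deep learning frameworks actually compute cross-correlation, as here; the image and kernel values are hypothetical):

```python
# 2-D convolution (strictly, cross-correlation, as in most DL
# frameworks) with "valid" padding and stride 1.
def conv2d_valid(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):          # valid padding: no overhang
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge = [[1, -1]]  # horizontal difference filter

out = conv2d_valid(image, edge)
print(out)  # [[-1.0, -1.0], [-1.0, -1.0], [-1.0, -1.0]]
```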
Visualization using TensorBoard
Visualizing a Graph in TensorBoard • Visualizing Scalars in TensorBoard • Visualizing Multiple Runs • Visualizing Weights and Activations • Highlighting a Histogram • Visualizing an Embedding in 3-D Space • Keras fit versus fit_generator • Using TensorBoard from Keras
PCA, t-SNE, and K-Means
Dimensionality Reduction • Principal Component Analysis • PCA for Visualization • t-SNE for Visualization • Clustering with K-Means • PCA for Dimensionality Reduction • Linear Regression Out-of-the-Box • Normalizing the Dataset
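The PCA topic above can be sketched with NumPy's eigendecomposition (an illustrative example on hypothetical toy data, not course material):

```python
import numpy as np

# PCA via eigendecomposition of the covariance matrix.
# Toy data: points lying near the line y = x (hypothetical).
X = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9]])

Xc = X - X.mean(axis=0)              # center the data
cov = Xc.T @ Xc / (len(X) - 1)       # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)

# eigh returns eigenvalues in ascending order, so the last
# column is the first principal component.
pc1 = eigvecs[:, -1]
projected = Xc @ pc1                 # reduce 2-D points to 1-D

print(pc1, projected)
```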
DAY 4
Data Preparation
The Deep Learning Process • CRISP-DM • Feature Engineering • Correlated Features, Missing or Bad Values • Choose Meaningful Features • Avoid Magic Values • Pandas Dataframe • Pandas Summary Statistics • Pandas Scatter Matrix • Cleaning Data with Pandas • TensorFlow Extended (TFX) • Error Analysis • Artificial Data Synthesis • Data Augmentation
Dropout and Batch Normalization
Dropout • When to use Dropout? • Batch Normalization • Scale-and-Shift • Benefits of Batch Normalization • Calculating the Scaling Factors
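A minimal sketch of the dropout topic above, in the "inverted dropout" form most frameworks use (the activations and drop rate are hypothetical):

```python
import random

# Inverted dropout: at training time, zero each activation with
# probability p and scale survivors by 1/(1-p), so the expected
# activation is unchanged and inference needs no adjustment.
def dropout(activations, p, rng):
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0
            for a in activations]

rng = random.Random(0)  # fixed seed for reproducibility
acts = [1.0] * 10000
dropped = dropout(acts, p=0.5, rng=rng)

mean = sum(dropped) / len(dropped)
print(round(mean, 2))  # close to the original mean of 1.0
```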
Inception and Residual Networks
General Principles of Network Architecture • Evolution of CNN Architectures • Principles of the Inception Architecture • Fully-Connected versus Sparse • Inception Module • Global Average Pooling • Fully Convolutional Network • Residual Networks • Matching Dimensions • Performance of Inception and ResNet
Transfer Learning
Why Transfer Learning? • Re-use trained weights • Simple Transfer Learning in Keras • A Pre-trained Inception Network • Fine-Tuning Previous Layers • Saving and Loading • Make Each Class Visually Distinct • Other Tips for Image Data
DAY 5
Pruning and Quantization
Inference Engines at the Edge • ML / DL Tool Flow for Edge Computing • Neural Network Exchange Formats • Network Pruning • Quantization • 8-Bit Quantization • TensorFlow Lite Post-Training Quantization • OpenCV • OpenVX • OpenCL
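The 8-bit quantization topic above can be sketched as uniform affine quantization (an illustrative example with hypothetical weights; real flows such as TensorFlow Lite handle many more details):

```python
# Uniform affine 8-bit quantization of floating-point weights:
# map the range [min, max] onto integers 0..255, then dequantize.
def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [qi * scale + lo for qi in q]

weights = [-0.42, 0.0, 0.13, 0.87, -0.9]  # hypothetical weights
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)

max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, round(max_err, 4))  # error bounded by half a quantization step
```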
MobileNet
MobileNet • Depthwise-Separable Convolution • MobileNet V1 Architecture • MobileNet V2 Architecture • Hyperparameters • MobileNet Family • MobileNet Inference in TensorFlow • CNNs Compared
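The parameter saving behind the depthwise-separable convolution topic above is simple arithmetic; a sketch with hypothetical layer sizes:

```python
# Parameter counts for a standard 3x3 convolution versus a
# depthwise-separable one (MobileNet-style), ignoring biases.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in          # one kxk filter per input channel
    pointwise = 1 * 1 * c_in * c_out  # 1x1 convolution mixes channels
    return depthwise + pointwise

std = standard_conv_params(3, 32, 64)   # 9 * 32 * 64 = 18432
sep = separable_conv_params(3, 32, 64)  # 288 + 2048 = 2336
print(std, sep, round(std / sep, 1))    # roughly an 8x reduction
```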
Encoding and Word Embedding
Coding Categorical Data • Binning • Text in a Neural Network • Word Embedding and Semantics • Hidden Layers and Latent Features • The Word2vec Algorithm • Negative Sampling Neural Network • Preparing the Training Dataset • Frequency Counts
Recurrent Neural Networks
Recurrent Neural Network (RNN) • RNN Applications • Long Short Term Memory – LSTM • LSTM Gates • LSTM Connections • Gated Recurrent Unit – GRU • Simple Character-Level RNN in Keras • LSTM trained on Linux Source Code • Bidirectional LSTM • Networks for Images, Sound, Text, Video
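A single step of a vanilla RNN cell, reduced to scalars, can be sketched as follows (the weights and input sequence are hypothetical, and real cells use vectors and matrices):

```python
import math

# One step of a vanilla (Elman) RNN cell with scalar state:
# h_t = tanh(w_x * x_t + w_h * h_prev + b)
def rnn_step(x, h_prev, w_x=0.5, w_h=0.8, b=0.0):
    return math.tanh(w_x * x + w_h * h_prev + b)

h = 0.0
for x in [1.0, 0.0, -1.0]:   # a short input sequence
    h = rnn_step(x, h)       # the hidden state carries context forward

print(round(h, 4))
```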
GANs
Putting Networks End-to-End • Generative Adversarial Network (GAN) • Deep Convolutional GAN Generator • Generated Images • GAN to Generate MNIST Digits
Object Detection
Datasets for Object Detection • Google Open Images Dataset • Networks for Object Detection • Object Detection – Faster R-CNN • Training a Faster R-CNN Network • Image Segmentation using Mask R-CNN • Pixel-Level Image Segmentation
Course Dates:

Date | Location
---|---
March 2nd, 2020 | Austin, TX
March 9th, 2020 | Ringwood, UK
March 30th, 2020 | Ankara, TR
March 30th, 2020 | Chicago, IL
April 20th, 2020 | Paris, FR
April 20th, 2020 | San Jose, CA
April 27th, 2020 | Columbia, MD
May 25th, 2020 | Munich, DE
June 1st, 2020 | Boston, MA
June 8th, 2020 | Ringwood, UK
June 15th, 2020 | Austin, TX
July 6th, 2020 | Columbia, MD
July 13th, 2020 | Ankara, TR
August 3rd, 2020 | Paris, FR
August 17th, 2020 | San Jose, CA
August 24th, 2020 | Columbia, MD
September 14th, 2020 | Munich, DE
September 14th, 2020 | Austin, TX
September 21st, 2020 | Ringwood, UK
Looking for team-based training, or other locations?
Complete an online form and a Doulos representative will get back to you »
Price on request