# Practical Deep Learning

**Hands-on training in deep learning for engineers and programmers**

using Python, TensorFlow, and Keras

## Standard Level - 5 days (includes Python Primer)
## Standard Level - 4 days (without Python Primer; join the course on day 2)


Also available now: a 2-day course for professionals working in electronic systems hardware and embedded software. View full course details »


**Practical Deep Learning** offers in-depth, hands-on training in deep learning for engineers and programmers. The course is intended to meet the needs of competent professionals already working as engineers or computer programmers who are looking for a solid introduction to the subject of deep learning combined with sufficient practical, hands-on training to enable them to start implementing their own deep learning systems.

The course is based on the Python programming language and makes extensive use of the TensorFlow machine learning framework and the Keras neural network API, as well as Numpy, Matplotlib, Pandas, Scikit-learn, and TensorBoard. Although based on TensorFlow and Keras, the principles and concepts taught in this training course would be equally applicable in any deep learning library or framework.

**Practical Deep Learning** is delivered as a 5-day public, face-to-face training course. Attendees who already know Python may skip day 1, the Python Primer, and join the course on day 2.

Workshops comprise approximately 50% of class time and are based around carefully designed hands-on exercises to reinforce learning.

### Why choose this particular course?

There are plenty of materials and training options out there for machine learning and deep learning, everything from e-books and blogs through online tutorials and MOOCs to the formal courses offered by universities and corporations. Some of these are excellent, and they each have their place. The challenge is knowing where to start and having the time to research and study what you need to know. As a specialist provider of technical training, Doulos has been able to condense the essential knowledge and skills you need to get started with deep learning into this one 5-day training course.

### Who should attend?

Engineers, programmers, or other people with a technical or mathematical background who want a comprehensive, hands-on introduction to the subject of deep learning.

### What will you learn?

- Day 1 - Python Primer
  - Basic Python programming
  - How to use Numpy and Matplotlib in the context of deep learning
  - How to use Jupyter Notebook with a remote server
- Days 2-5
  - The principles and practices of supervised learning and deep learning
  - How to use neural networks to solve regression and classification problems
  - How to use unsupervised learning for visualization and dimensionality reduction
  - How to use convolutional neural networks for image classification
  - How to use TensorFlow, TensorBoard, and Keras
  - How to optimize and tune the performance of deep neural networks
  - How to prepare datasets and manage the process around deep learning
  - Deep learning concepts and techniques in current use, such as gradient descent algorithms, learning curves, regularization, dropout, batch normalization, the Inception architecture, and residual networks
  - An introduction to transfer learning, recurrent neural networks, generative adversarial networks, and image segmentation

### What this course is not!

This course is not a high-level overview of deep learning for managers, business developers, or end users of machine learning technology, although attendees will certainly gain an excellent overview of deep learning by attending this course. This is a detailed, hands-on course.

This is not a course in machine learning or artificial intelligence as such, but in deep learning. Deep learning is one specific branch of machine learning, which is a branch of artificial intelligence.

This is not a course in mathematics, statistics, or data science. This course assumes you already have the necessary mathematical background (see prerequisites below).

This course is not for professional mathematicians or machine learning researchers. It is for programmers and implementers.

### Prerequisites

Attendees should be experienced and competent in at least one object-oriented programming language (e.g. Python, Ruby, C++, C#, Java, or SystemVerilog). Candidates who are unfamiliar with object-oriented programming may still attend if they are very confident of their programming skills.

Attendees who are unfamiliar with Python **must** attend all 5 days. Attendees who are already familiar with Python and Numpy may skip the first day. Attendees who choose to skip the first day should familiarize themselves with Numpy and Jupyter Notebook before attending the course. Attendees who join the course on day 2 but do not know Numpy **will struggle**!

Attendees should be familiar with the following mathematical concepts:

- Continuous functions of one or more variables, linear and non-linear functions, exponential functions
- Very basic differential calculus - derivatives and partial derivatives
- Very basic statistics - mean, standard deviation, variance, probability, histograms, normal distribution
- Basic linear algebra - vectors, matrices, summation, dot product

Attendees do **not** need a university degree in mathematics.
The emphasis of this course is on practical computer programming, not on mathematical theory.
This course does not require the attendee to write or solve mathematical equations, nor does it require the attendee to read or understand any mathematical proofs.
However, attendees do need a willingness to immerse themselves in what is essentially a mathematical topic.

### Training materials

Doulos training materials are renowned for being the most comprehensive and user-friendly available. Their style, content, and coverage are unique and have made them sought-after resources in their own right. You get to keep the following materials at the end of the course:

- A fully indexed set of class notes that form a complete reference manual
- A set of Jupyter Notebooks containing working examples of deep neural networks using TensorFlow and Keras

### Structure and content

#### DAY 1 - Python Primer (OPTIONAL)

#### Introduction to Python

About Python • The Python World • Which Version? • Python Implementations • Jupyter Notebook • pypi.python.org • pip – The Preferred Installer Program • Anaconda

#### Jupyter Notebook

AWS Deep Learning AMI • Connect to Remote Machine using SSH • Basic Markdown • Output and Evaluating Expressions • Expanding, Collapsing, Hiding Output • Menus and Tool Bar

#### Language Basics

The Python Shell • Values • Integer Operators • Built-in Functions • String Operations • String Index • Slice • String Methods • Exceptions • Simple Formatting • The Format Method

#### Control Statements

Comments • Control Statements • Operators • Conditional Expressions • Function • global • Default and Keyword Arguments • assert Statements • Range

#### Lists, Iterators, and Generators

List • Function Returning a List • Some List Methods • Loops and Lists • Tuple • Dictionary • Iterator • List Comprehension • Generator Expression • Map • Lambda • zip • join

#### Files, Exceptions, and Classes

Writing a File • Reading from a File • Exceptions • Context Manager • Class • Duck Typing

#### Modules and the Standard Library

Modules • from ... import • Packages • docs.python.org • The Standard Library

#### Numpy and Matplotlib

The NumPy Array • A 2-D Array • More Dimensions • Arithmetic Series • Reshape • Manipulating Dimensions • Initializing Arrays • Random Arrays • Plotting a Function • Plotting a Histogram • Plotting an Array as a Grid • Sorting • Reduction Functions • In-place Operators • Elementwise Operations • Combining Arrays and Scalars • Broadcasting • Matrix Arithmetic • Array-of-Indices • Indexing with Array-of-Booleans
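
As a flavor of the NumPy material above, broadcasting lets arrays of different shapes be combined elementwise without explicit loops. A minimal illustrative sketch (toy values of my own, not taken from the course materials):

```python
import numpy as np

# A 2-D array combined with a 1-D array: NumPy "broadcasts" the
# smaller array across the larger one, so no explicit loop is needed.
a = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
row = np.array([10, 20, 30])     # shape (3,), broadcast over each row

b = a + row                      # elementwise add, row applied to both rows
c = a * 2                        # combining an array with a scalar
```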

#### DAY 2

#### Introduction to Deep Learning

Machine Learning - Definition • Algorithms • Deep Learning • Supervised Learning • Unsupervised Learning • Supervised Learning with a Neural Network • Why Neural Networks Now? • Image Classification • Natural Language Translation • Other Exciting Applications • Kinds of Neural Network • Libraries/Frameworks for Training • Deep Learning Platforms and Toolkits • Deep Learning IP and Chips • Used on This Training Course

#### Linear Regression

Regression Task • Define a Hypothesis or Model or Network • Cost Function • Mathematical Optimization • Contour Plot of Cost Function • Gradient Descent Algorithm • Converging on the Minimum • Cost, Slope, Offset against Step • Stochastic Gradient Descent
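
The gradient descent loop sketched above can be written in a few lines of plain NumPy. This is a toy example on synthetic data (true w = 2, b = 1), not the course's workshop code:

```python
import numpy as np

# Fit y = w*x + b by gradient descent on the mean-squared-error cost.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + 1.0

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * x + b) - y        # prediction error per sample
    dw = 2 * np.mean(err * x)    # partial derivative of MSE w.r.t. w
    db = 2 * np.mean(err)        # partial derivative of MSE w.r.t. b
    w -= lr * dw                 # step downhill against the gradient
    b -= lr * db
```

After a few hundred steps, w and b converge on the values used to generate the data.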

#### TensorFlow

TensorFlow • TensorFlow Graphs • Linear Regression using TensorFlow • Data Flow Graph • Gradient Descent using TensorFlow • Minimal Working TensorFlow Code

#### Logistic Regression

Classification Task • One-Hot Labels • The Hypothesis or Model • Calculating the Cost Function • Converting Scores to Probabilities • The Softmax Function • Compare using Cross-Entropy • Multinomial Logistic Regression • Plotting the Decision Boundary
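
The softmax and cross-entropy steps listed above fit together as follows. A minimal sketch with made-up scores, not course code:

```python
import numpy as np

def softmax(scores):
    # Subtract the max before exponentiating for numerical stability,
    # then normalize so the outputs are probabilities summing to 1.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def cross_entropy(probs, one_hot):
    # Compare predicted probabilities against a one-hot label:
    # only the true class's log-probability contributes.
    return -np.sum(one_hot * np.log(probs))

scores = np.array([2.0, 1.0, 0.1])      # raw scores from the model
probs = softmax(scores)                 # converted to probabilities
loss = cross_entropy(probs, np.array([1.0, 0.0, 0.0]))
```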

#### Neural Networks

Neural Networks • An Artificial Neuron • An Alternative Way to Express the Bias • A Single-Layer Perceptron • Common Activation Functions • A Deep Neural Network • Forward and Back-Propagation
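
Forward propagation through the kind of small network described above amounts to repeated matrix multiplies and activations. An illustrative sketch with randomly initialized weights (shapes chosen arbitrarily for the example):

```python
import numpy as np

def relu(z):
    # A common activation function: max(0, z), applied elementwise
    return np.maximum(0.0, z)

# One hidden layer of 4 neurons, then a single output neuron.
# Each layer computes activation(W @ x + b).
rng = np.random.default_rng(1)
x = np.array([0.5, -0.2, 0.1])            # a single 3-feature input
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

hidden = relu(W1 @ x + b1)   # forward propagation, hidden layer
output = W2 @ hidden + b2    # output layer (no activation: regression)
```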

#### DAY 3

#### Non-linear Regression

A Non-Linear Polynomial Model • The Rectified Linear Unit (ReLU) • Normalizing the Data • TensorFlow Data Flow Graph • TensorFlow Session • The Predicted Output • The Magic of Deep Neural Networks

#### Non-linear Classification

A Non-Linear Decision Boundary • Decision Boundary and Softmax • Non-Linear Neural Network for Classification • From ReLU to Decision Boundary

#### Overfitting and Regularization

Training versus Test Datasets • Scikit-learn • TensorFlow Placeholders • Learning Curves • Matching the Network to the Problem • How to Reduce Overfitting? • More Data? • L2 Regularization • Choosing Lambda • L2 versus L1 Regularization
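
The L2 regularization term mentioned above simply adds a weight penalty to the data cost. A minimal sketch with made-up numbers (the lambda value here is arbitrary):

```python
import numpy as np

def l2_cost(data_cost, weights, lam):
    # L2 regularization adds lambda times the sum of squared weights
    # to the data cost, penalizing large weights to reduce overfitting.
    return data_cost + lam * np.sum(weights ** 2)

w = np.array([0.5, -1.0, 2.0])
total = l2_cost(0.8, w, lam=0.01)   # 0.8 + 0.01 * 5.25 = 0.8525
```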

#### Stochastic Gradient Descent

Full-Batch vs Stochastic Gradient Descent • Mini-Batches • The Landscape of the Cost Function • Learning Rate • Learning Rate Decay Schedule • Momentum • Nesterov Momentum • Adaptive Per-Parameter Learning Rates • Adam Algorithm
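
The mini-batch and momentum ideas above can be sketched in a few lines. A toy example with a deliberately trivial cost function, not the course's workshop code:

```python
import numpy as np

# Mini-batch iteration: shuffle the dataset, then take fixed-size
# slices so each epoch sees every example exactly once.
rng = np.random.default_rng(0)
x = np.arange(10.0)
batch_size = 4

perm = rng.permutation(len(x))
batches = [x[perm[i:i + batch_size]]
           for i in range(0, len(x), batch_size)]

# Momentum update: the velocity accumulates a decaying sum of past
# gradients, smoothing the descent path across noisy mini-batches.
velocity, w, lr, beta = 0.0, 5.0, 0.1, 0.9
for batch in batches:
    grad = np.mean(2 * (w - batch))      # toy gradient: cost = mean (w - x)^2
    velocity = beta * velocity + grad
    w -= lr * velocity
```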

#### DAY 4

#### Splitting the Dataset

The MNIST Dataset • A Deep Neural Network for Classification • Hyperparameters • Training, Validation, and Test Datasets • K-Fold Cross-Validation • Choose a Single Scalar Metric
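
The three-way split described above is a shuffle followed by slicing. A minimal sketch using an 80/10/10 split on dummy data (the proportions are illustrative):

```python
import numpy as np

# Shuffle first so each split is representative, then slice into
# training, validation, and test sets.
rng = np.random.default_rng(0)
data = np.arange(100)
perm = rng.permutation(len(data))

n_train = int(0.8 * len(data))
n_val = int(0.1 * len(data))

train = data[perm[:n_train]]
val = data[perm[n_train:n_train + n_val]]
test = data[perm[n_train + n_val:]]
```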

#### Convolutional Neural Networks

Patch Size and Stride • Network Size • Pooling • Hierarchical Feature Detection • Number of Parameters and Values • Inputs and Outputs for a CNN • Plotting the Convolution Filters
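
The patch-size and stride arithmetic above determines the output size of a convolution layer. A deliberately naive (loop-based) sketch for intuition, not an efficient implementation:

```python
import numpy as np

def conv2d_valid(image, kernel, stride=1):
    # Naive 2-D convolution with "valid" padding: slide the kernel
    # patch across the image, taking a product-sum at each position.
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
kernel = np.ones((2, 2))                 # 2x2 patch, stride 1 -> 3x3 output
feature_map = conv2d_valid(image, kernel)
```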

#### Weight Initialization

Exploding and Vanishing Gradients • Weight Initialization • Varying the Weight Distribution • Xavier Glorot Initialization
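
Xavier Glorot initialization, named above, can be sketched directly. The uniform-distribution variant draws weights from a range scaled by the layer's fan-in and fan-out (layer sizes here are arbitrary):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng):
    # Draw weights uniformly from [-limit, +limit] with
    # limit = sqrt(6 / (fan_in + fan_out)), which keeps activation
    # variance roughly constant from layer to layer.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

rng = np.random.default_rng(0)
W = glorot_uniform(256, 128, rng)
limit = np.sqrt(6.0 / (256 + 128))
```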

#### Visualization using TensorBoard

Visualizing a Graph in TensorBoard • Visualizing Scalars • Visualizing Multiple Runs • Visualizing Weights and Activations • Visualizing an Embedding in 3-D Space • Naming Layers and Nodes in the Graph • Saving Scalar Values and The Graph • Saving Weights and Activations • Setting up a Visualized Embedding

#### PCA, t-SNE, and K-Means

Dimensionality Reduction • Principal Component Analysis • PCA for Visualization • t-SNE for Visualization • Clustering with K-Means • PCA for Dimensionality Reduction
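
The PCA step above can be implemented via the singular value decomposition: center the data and project onto the top singular vectors. An illustrative sketch on random data, not course code:

```python
import numpy as np

def pca(X, n_components):
    # Center the data, then project it onto the top right-singular
    # vectors of the centered matrix (the principal components).
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
X2 = pca(X, 2)    # 5-D points reduced to 2-D, e.g. for visualization
```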

#### Keras

A Keras Sequential Model • Compile, Fit, and Evaluate • Full CNN using Keras • Using Keras with TensorBoard

#### DAY 5

#### Process and Data Preparation

The Deep Learning Process • Data Preparation • Pandas Dataframe • Pandas Summary Statistics • Pandas Scatter Matrix • Cleaning Data with Pandas • Error Analysis • Artificial Data Synthesis • Data Augmentation

#### Dropout and Batch Normalization

Dropout • When to use Dropout? • Batch Normalization • Benefits of Batch Normalization • Scale-and-Shift • Calculating the Scaling Factors
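
The dropout mechanism named above is a random mask applied during training. A sketch of the "inverted dropout" formulation (the keep probability here is an arbitrary example value):

```python
import numpy as np

def dropout(activations, keep_prob, rng):
    # Zero each activation with probability 1 - keep_prob, scaling
    # survivors by 1/keep_prob so the expected value is unchanged.
    # Applied only during training, never at inference time.
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
a = np.ones(1000)
dropped = dropout(a, keep_prob=0.8, rng=rng)
```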

#### Inception and Residual Networks

General Principles of Network Architecture • Evolution of CNN Architectures • Principles of the Inception Architecture • Fully-Connected versus Sparse • Inception Module • Global Average Pooling • Fully Convolutional Network • Residual Networks • Matching Dimensions • Performance of Inception and ResNet

#### Transfer Learning

Why Transfer Learning? • Effect of Dataset Size on Transfer Learning • Simple Transfer Learning in Keras • A Pre-trained Inception Network • Fine-Tuning Previous Layers

#### Recurrent Neural Networks

Recurrent Neural Network (RNN) • RNN Applications • Long Short-Term Memory - LSTM • Gated Recurrent Unit - GRU • Simple Character-Level RNN in Keras • Simple Word-Level RNN in Keras • Word Embedding

#### More Advanced Networks

Putting Networks End-to-End • Generative Adversarial Network (GAN) • GAN to Generate MNIST Digits • Image Segmentation - Regional CNN (R-CNN) • Networks for Image Segmentation

#### Weight Quantization

Avoiding Floating Point • 8-Bit Quantization • TensorFlow Script for 8-Bit Quantization
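
The idea behind 8-bit quantization is to map a float weight range onto the integers 0..255, storing only a scale and offset alongside the integer array. A minimal linear-quantization sketch (not TensorFlow's actual quantization script):

```python
import numpy as np

def quantize_8bit(weights):
    # Map the float range [min, max] linearly onto 0..255.
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / 255.0
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    # Recover an approximation of the original floats.
    return q.astype(np.float32) * scale + lo

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, lo = quantize_8bit(w)
w_approx = dequantize(q, scale, lo)   # each value within one scale step
```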


**Course Dates:**

Date | Location | Registration
---|---|---
August 20th, 2018 | San Jose, CA | Enquire
October 15th, 2018 | Munich, DE | Enquire
November 5th, 2018 | Ringwood, UK | Enquire
November 19th, 2018 | Ankara, TR | Enquire
December 3rd, 2018 | San Jose, CA | Enquire


### Looking for team-based training, or other locations?

Complete an online form and a Doulos representative will get back to you »

#### Price on request
