
Master Classes

Deep Learning

Only for Members

The following list links to the Deep Learning classrooms.


Study guide for Introduction to Deep Learning

The following information provides a study guide to help people learn Deep Learning on their own. These YouTube videos give enough background to prepare for Andrew Ng's Deep Learning Specialization. Sedibus Education also offers a limited number of online classes for studying these topics together. For more information, please contact Sedibus Education.

1. Applications of Deep Learning

(Week 1)

2. Let's listen to what a Google employee says about Deep Learning

(Week 2)

3. Let's learn the foundational concepts of Deep Learning

4. Now let's go back to the Google employee's talk and see how much more you understand.

(Week 5)

5. Let's set up the system for actual coding

(Week 6)

6. Let's build an application

(Week 7)

7. Let's build a more advanced one (optional)

How to Classify Photos of Dogs and Cats (with 97% accuracy)


Image Classifier - Cats🐱 vs Dogs🐶

Leveraging Convolutional Neural Networks (CNNs) and Google Colab’s Free GPU

A friendly introduction to Convolutional Neural Networks and Image Recognition
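Before watching the classifier tutorials above, it may help to see the core operation a CNN performs. Below is a minimal NumPy sketch of my own (not code from any of the linked videos) of a single "valid" 2D convolution — the building block those videos explain. The edge-detector kernel is an illustrative choice.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide `kernel` over `image` (no padding, stride 1) and return
    the map of dot products -- a single CNN feature map."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector on a tiny image: left half bright, right half dark.
image = np.array([[1, 1, 0, 0]] * 4, dtype=float)
kernel = np.array([[1, -1]] * 2, dtype=float)  # responds to bright-to-dark steps
feature_map = conv2d_valid(image, kernel)
```

The feature map lights up only where the brightness drops, which is exactly the "feature detector" intuition the CNN videos build on; real networks learn the kernel values instead of hand-coding them.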

Statistics for Deep Learning

Deep Learning (Andrew Ng)



Why Regularization Reduces Overfitting (C2W1L05)

Dropout Regularization (C2W1L06)

Understanding Dropout (C2W1L07)

Other Regularization Methods (C2W1L08)

Normalizing Inputs (C2W1L09)

Vanishing/Exploding Gradients (C2W1L10)

Weight Initialization in a Deep Network (C2W1L11)

Weight Initialization explained | A way to reduce the vanishing gradient problem

Numerical Approximations of Gradients (C2W1L12)

Gradient Checking (C2W1L13)

Gradient Checking Implementation Notes (C2W1L14)
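The C2W1 videos end with gradient checking. As a quick self-study exercise, here is a small NumPy sketch (my own illustration on a toy cost function, not the course's code) comparing an analytic gradient against the two-sided numerical approximation the videos describe.

```python
import numpy as np

def J(theta):
    # Toy cost function: J(theta) = theta_0^2 + 3 * theta_1
    return theta[0] ** 2 + 3.0 * theta[1]

def grad_J(theta):
    # Analytic gradient of J
    return np.array([2.0 * theta[0], 3.0])

def numerical_grad(f, theta, eps=1e-7):
    # Two-sided difference per component: (f(t+eps) - f(t-eps)) / (2*eps)
    g = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        g[i] = (f(plus) - f(minus)) / (2 * eps)
    return g

theta = np.array([1.5, -0.5])
analytic = grad_J(theta)
numeric = numerical_grad(J, theta)
# Relative difference; values near 1e-7 or smaller suggest a correct gradient
diff = (np.linalg.norm(analytic - numeric)
        / (np.linalg.norm(analytic) + np.linalg.norm(numeric)))
```

In a real network, `J` would be the cost computed by forward propagation and `grad_J` the result of backprop; a large `diff` points to a bug in the backprop code.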

Mini Batch Gradient Descent (C2W2L01)

Understanding Mini-Batch Gradient Descent (C2W2L02)

Exponentially Weighted Averages (C2W2L03)

Understanding Exponentially Weighted Averages (C2W2L04)

Bias Correction of Exponentially Weighted Averages (C2W2L05)

Gradient Descent With Momentum (C2W2L06)

RMSProp (C2W2L07)

Adam Optimization Algorithm (C2W2L08)

Learning Rate Decay (C2W2L09)
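The C2W2 videos build from exponentially weighted averages up to Adam, which combines the momentum and RMSProp ideas with bias correction. A compact NumPy sketch of that combination (my own illustration with the commonly cited default hyperparameters, not the course's code):

```python
import numpy as np

def adam_minimize(grad, theta, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=200):
    """Minimize a function, given its gradient, with Adam."""
    m = np.zeros_like(theta)  # EWA of gradients (momentum term)
    v = np.zeros_like(theta)  # EWA of squared gradients (RMSProp term)
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)  # bias correction (C2W2L05)
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Minimize f(theta) = sum(theta^2); its gradient is 2 * theta, minimum at 0.
theta = adam_minimize(lambda th: 2 * th, np.array([3.0, -2.0]))
```

Setting `beta1=0` recovers plain RMSProp, and dropping the `v` terms recovers gradient descent with momentum, which is a handy way to see how the three videos fit together.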

Tuning Process (C2W3L01)

Using an Appropriate Scale (C2W3L02)

Hyperparameter Tuning in Practice (C2W3L03)

Normalizing Activations in a Network (C2W3L04)

Fitting Batch Norm Into Neural Networks (C2W3L05)

Why Does Batch Norm Work? (C2W3L06)

Batch Norm At Test Time (C2W3L07)

Softmax Regression (C2W3L08)

Training Softmax Classifier (C2W3L09)

TensorFlow (C2W3L11)
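C2W3L08–09 cover softmax regression. Here is a minimal NumPy version; subtracting the maximum logit before exponentiating is a standard numerical-stability trick (my addition — the videos derive the plain form, and the shift cancels in the ratio):

```python
import numpy as np

def softmax(z):
    """Stable softmax: shift by max(z) so exp() cannot overflow."""
    shifted = z - np.max(z)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

# Three logits -> three class probabilities that sum to 1,
# preserving the ordering of the logits.
probs = softmax(np.array([2.0, 1.0, 0.1]))
```

Without the shift, a logit like 1000 would make `np.exp` overflow; with it, the same inputs produce well-defined probabilities.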

Improving Model Performance (C3W1L01)

Orthogonalization (C3W1L02)

Single Number Evaluation Metric (C3W1L03)

Satisficing and Optimizing Metrics (C3W1L04)

Train/Dev/Test Set Distributions (C3W1L05)

Size of Dev and Test Sets (C3W1L06)
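The C3W1 videos stress that dev and test sets should come from the same distribution, and that large modern datasets use much smaller dev/test fractions than the classic 60/20/20 split. A small NumPy sketch of a shuffled split (the 98/1/1 ratios and dataset size are illustrative, not prescribed by the videos):

```python
import numpy as np

def train_dev_test_split(n_examples, dev_frac=0.01, test_frac=0.01, seed=0):
    """Shuffle indices once, then slice, so dev and test are drawn
    from the same distribution as train."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_examples)
    n_dev = int(n_examples * dev_frac)
    n_test = int(n_examples * test_frac)
    dev = idx[:n_dev]
    test = idx[n_dev:n_dev + n_test]
    train = idx[n_dev + n_test:]
    return train, dev, test

# With 100,000 examples, 1% each for dev and test is plenty to compare models.
train, dev, test = train_dev_test_split(100_000)
```

Shuffling before slicing is what keeps the three sets identically distributed; slicing unshuffled data sorted by date or source would violate the videos' main guideline.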
