In the second course of the Deep Learning Specialization, you will open the deep learning black box to understand the processes that drive performance and generate good results systematically.

----------- TIME STAMPS ------------

PRACTICAL ASPECTS OF DEEP LEARNING
0:00:00 Train/Dev/Test Sets
0:12:04 Bias/Variance
0:20:50 Basic Recipe for Machine Learning
0:27:12 Regularization
0:36:54 Why Regularization Reduces Overfitting
0:44:04 Dropout Regularization
0:53:29 Understanding Dropout
1:00:34 Other Regularization Methods
1:08:58 Normalizing Inputs
1:14:28 Vanishing/Exploding Gradients
1:20:36 Weight Initialization for Deep Networks
1:26:48 Numerical Approximation of Gradients
1:33:23 Gradient Checking
1:39:57 Gradient Checking Implementation Notes
1:45:16 Yoshua Bengio Interview

OPTIMIZATION ALGORITHMS
2:11:04 Mini-batch Gradient Descent
2:22:33 Understanding Mini-batch Gradient Descent
2:33:52 Exponentially Weighted Averages
2:39:50 Understanding Exponentially Weighted Averages
2:49:32 Bias Correction in Exponentially Weighted Averages
2:53:44 Gradient Descent with Momentum
3:03:04 RMSprop
3:10:45 Adam Optimization Algorithm
3:17:53 Learning Rate Decay
3:24:38 The Problem of Local Optima
3:30:01 Yuanqing Lin Interview

HYPERPARAMETER TUNING, BATCH NORMALIZATION AND PROGRAMMING FRAMEWORKS
3:43:38 Tuning Process
3:50:49 Using an Appropriate Scale to Pick Hyperparameters
3:59:39 Hyperparameter Tuning in Practice: Pandas vs. Caviar
4:06:31 Normalizing Activations in a Network
4:15:26 Fitting Batch Norm into a Neural Network
4:28:21 Why Does Batch Norm Work?
4:40:01 Batch Norm at Test Time
4:45:47 Softmax Regression
4:57:34 Training a Softmax Classifier
5:07:42 Deep Learning Frameworks
5:11:58 TensorFlow

By the end, you will know the best practices for setting up train/dev/test sets and analyzing bias/variance when building deep learning applications; be able to use standard neural network techniques such as initialization, L2 and dropout regularization, hyperparameter tuning, batch normalization, and gradient checking; implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, momentum, RMSprop, and Adam, and check them for convergence; and implement a neural network in TensorFlow.
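
For a flavor of where the course ends up, here is a minimal sketch, assuming TensorFlow's Keras API, that ties several of the listed techniques together: He initialization, L2 and dropout regularization, batch normalization, mini-batch training, and the Adam optimizer. The layer sizes, rates, and synthetic data are illustrative choices, not the course's own code.

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(
        64, activation="relu",
        kernel_initializer="he_normal",                      # weight initialization for deep networks
        kernel_regularizer=tf.keras.regularizers.l2(0.01)),  # L2 regularization
    tf.keras.layers.Dropout(0.2),                            # dropout regularization
    tf.keras.layers.BatchNormalization(),                    # batch norm
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # Adam optimizer
              loss="binary_crossentropy", metrics=["accuracy"])

# Toy data with a held-out dev split, mirroring the train/dev/test discussion.
X = np.random.randn(1000, 20).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")
model.fit(X, y, batch_size=64, epochs=5, validation_split=0.2)  # mini-batch gradient descent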

The Deep Learning Specialization is our foundational program that will help you understand the capabilities, challenges, and consequences of deep learning and prepare you to participate in the development of leading-edge AI technology. It provides a pathway for you to gain the knowledge and skills to apply machine learning to your work, level up your technical career, and take the definitive step in the world of AI.

⭐ Important Notes ⭐
⌨️ This course was created in collaboration with Deeplearning.ai (Andrew Ng)


#TensorFlow #DeepLearning #MathematicalOptimization #HyperparameterTuning #RegularizationAndOptimization #MachineLearning #AndrewNg #Coursera #DeepLearningAI #NeuralNetworks