-
Lesson 1.
00:04:07
Q1 - What is Deep Learning?
-
Lesson 2.
00:04:14
Q2 - What is Deep Learning?
-
Lesson 3.
00:07:14
Q3 - What is a Neural Network?
-
Lesson 4.
00:03:34
Q4 - Explain the concept of a neuron in Deep Learning.
-
Lesson 5.
00:07:53
Q5 - Explain the architecture of Neural Networks in a simple way.
-
Lesson 6.
00:04:01
Q6 - What is an activation function in a Neural Network?
-
Lesson 7.
00:13:43
Q7 - Name a few popular activation functions and describe them.
-
Lesson 8.
00:01:27
Q8 - What happens if you do not use any activation function in a NN?
-
Lesson 9.
00:05:53
Q9 - Describe how training of basic Neural Networks works.
-
Lesson 10.
00:10:41
Q10 - What is Gradient Descent?
-
Lesson 11.
00:05:34
Q11 - What is the function of an optimizer in Deep Learning?
-
Lesson 12.
00:08:39
Q12 - What is backpropagation, and why is it important in Deep Learning?
-
Lesson 13.
00:03:05
Q13 - How is backpropagation different from gradient descent?
-
Lesson 14.
00:07:01
Q14 - Describe what the Vanishing Gradient Problem is and its impact on NNs.
-
Lesson 15.
00:08:31
Q15 - Describe what the Exploding Gradients Problem is and its impact on NNs.
-
Lesson 16.
00:04:40
Q16 - A neuron produces a large error in backpropagation. What could be the reason?
-
Lesson 17.
00:06:18
Q17 - What do you understand by a computational graph?
-
Lesson 18.
00:06:39
Q18 - What is a Loss Function and what are the various loss functions used in DL?
-
Lesson 19.
00:03:41
Q19 - What is the Cross-Entropy loss function and what is it called in industry?
-
Lesson 20.
00:03:40
Q20 - Why is Cross-Entropy preferred as the cost function for multi-class classification?
-
Lesson 21.
00:06:11
Q21 - What is SGD and why is it used in training Neural Networks?
-
Lesson 22.
00:05:52
Q22 - Why does stochastic gradient descent oscillate towards local minima?
-
Lesson 23.
00:05:19
Q23 - How is GD different from SGD?
-
Lesson 24.
00:06:04
Q24 - What is SGD with Momentum?
-
Lesson 25.
00:05:27
Q25 - Batch Gradient Descent vs Mini-batch Gradient Descent vs SGD
-
Lesson 26.
00:06:49
Q26 - What is the impact of Batch Size?
-
Lesson 27.
00:04:10
Q27 - Batch Size vs Model Performance
-
Lesson 28.
00:04:39
Q28 - What is the Hessian and how is it used in DL?
-
Lesson 29.
00:05:29
Q29 - What is RMSProp and how does it work?
-
Lesson 30.
00:04:33
Q30 - What is Adaptive Learning?
-
Lesson 31.
00:07:03
Q31 - What is the Adam Optimizer?
-
Lesson 32.
00:04:53
Q32 - What is the AdamW Algorithm in Neural Networks?
-
Lesson 33.
00:08:32
Q33 - What is Batch Normalization?
-
Lesson 34.
00:03:39
Q34 - What is Layer Normalization?
-
Lesson 35.
00:09:23
Q35 - What are Residual Connections?
-
Lesson 36.
00:03:41
Q36 - What is Gradient Clipping?
-
Lesson 37.
00:04:05
Q37 - What is Xavier Initialization?
-
Lesson 38.
00:03:16
Q38 - What are ways to solve Vanishing Gradients?
-
Lesson 39.
00:01:12
Q39 - How to solve the Exploding Gradient Problem?
-
Lesson 40.
00:02:39
Q40 - What is Overfitting?
-
Lesson 41.
00:05:19
Q41 - What is Dropout?
-
Lesson 42.
00:00:42
Q42 - How does Dropout prevent Overfitting in Neural Networks?
-
Lesson 43.
00:04:42
Q43 - Is Dropout like Random Forest?
-
Lesson 44.
00:02:36
Q44 - What is the impact of Dropout on training vs testing?
-
Lesson 45.
00:03:19
Q45 - What are L2 and L1 Regularizations for an Overfitting NN?
-
Lesson 46.
00:04:05
Q46 - What is the difference between L1 and L2 Regularizations?
-
Lesson 47.
00:01:52
Q47 - How do L1 and L2 Regularization impact the weights in a NN?
-
Lesson 48.
00:02:28
Q48 - What is the Curse of Dimensionality in Machine Learning?
-
Lesson 49.
00:04:05
Q49 - How do Deep Learning models tackle the Curse of Dimensionality?
-
Lesson 50.
00:02:58
Q50 - What are Generative Models? Give examples.
-
Lesson 51.
00:03:04
Q51 - What are Discriminative Models? Give examples.
-
Lesson 52.
00:08:35
Q52 - What is the difference between generative and discriminative models?
-
Lesson 53.
00:04:31
Q53 - What are Autoencoders and how do they work?
-
Lesson 54.
00:04:32
Q54 - What is the difference between Autoencoders and other Neural Networks?
-
Lesson 55.
00:01:25
Q55 - What are some popular autoencoders? Mention a few.
-
Lesson 56.
00:01:04
Q56 - What is the role of the Loss function in Autoencoders, & how is it different from other NNs?
-
Lesson 57.
00:02:21
Q57 - How do autoencoders differ from Principal Component Analysis (PCA)?
-
Lesson 58.
00:03:27
Q58 - Which is better for reconstruction: a linear autoencoder or PCA?
-
Lesson 59.
00:06:31
Q59 - How can you recreate PCA with neural networks?
-
Lesson 60.
00:10:36
Q60 - Can you explain how Autoencoders can be used for Anomaly Detection?
-
Lesson 61.
00:02:20
Q61 - What are some applications of Autoencoders?
-
Lesson 62.
00:04:09
Q62 - How can uncertainty be introduced into Autoencoders, & what are the benefits and challenges of doing so?
-
Lesson 63.
00:03:18
Q63 - Can you explain what a VAE is and describe its training process?
-
Lesson 64.
00:03:48
Q64 - Explain what Kullback-Leibler (KL) divergence is & why it matters in VAEs.
-
Lesson 65.
00:01:02
Q65 - Can you explain what reconstruction loss is & its function in VAEs?
-
Lesson 66.
00:04:35
Q66 - What is the ELBO & what is the trade-off between reconstruction quality & regularization?
-
Lesson 67.
00:03:49
Q67 - Can you explain the training & optimization process of VAEs?
-
Lesson 68.
00:03:12
Q68 - How would you balance reconstruction quality and latent space regularization in a practical Variational Autoencoder implementation?
-
Lesson 69.
00:04:15
Q69 - What is the Reparameterization trick and why is it important?
-
Lesson 70.
00:01:45
Q70 - What is DGG, "Deep Clustering via a Gaussian-Mixture Variational Autoencoder (VAE) with Graph Embedding"?
-
Lesson 71.
00:02:24
Q71 - How does a neural network with one layer and one input and output compare to a logistic regression?
-
Lesson 72.
00:01:06
Q72 - In a logistic regression model, will all the gradient descent algorithms lead to the same model if run for a long time?
-
Lesson 73.
00:05:10
Q73 - What is a Convolutional Neural Network?
-
Lesson 74.
00:02:02
Q74 - What is padding and why is it used in Convolutional Neural Networks (CNNs)?
-
Lesson 75.
00:13:18
Q75 - Padded Convolutions: What are Valid and Same Paddings?
-
Lesson 76.
00:05:43
Q76 - What is stride in CNN and why is it used?
-
Lesson 77.
00:02:28
Q77 - What is the impact of Stride size on CNNs?
-
Lesson 78.
00:09:12
Q78 - What is Pooling, what is the intuition behind it, and why is it used in CNNs?
-
Lesson 79.
00:02:49
Q79 - What are common types of pooling in CNNs?
-
Lesson 80.
00:03:47
Q80 - Why is min pooling not used?
-
Lesson 81.
00:01:36
Q81 - What is translation invariance and why is it important?
-
Lesson 82.
00:02:55
Q82 - How does a 1D Convolutional Neural Network (CNN) work?
-
Lesson 83.
00:07:09
Q83 - What are Recurrent Neural Networks? Walk me through the architecture of RNNs.
-
Lesson 84.
00:01:30
Q84 - What are the main disadvantages of RNNs, especially in Machine Translation tasks?
-
Lesson 85.
00:06:16
Q85 - What are some applications of RNNs?
-
Lesson 86.
00:05:05
Q86 - What technique is commonly used in RNNs to combat the Vanishing Gradient Problem?
-
Lesson 87.
00:05:24
Q87 - What are LSTMs and their key components?
-
Lesson 88.
00:06:16
Q88 - Which limitations of RNNs do and don't LSTMs address, and how?
-
Lesson 89.
00:03:35
Q89 - What is a gated recurrent unit (GRU) and how is it different from LSTMs?
-
Lesson 90.
00:06:17
Q90 - Describe how Generative Adversarial Networks (GANs) work and the roles of the generator and discriminator in learning.
-
Lesson 91.
00:04:10
Q91 - Describe how you would use GANs for image translation or creating photorealistic images.
-
Lesson 92.
00:04:12
Q92 - How would you address mode collapse and vanishing gradients in GAN training, and what is their impact on data quality?
-
Lesson 93.
00:09:04
Q93 - Minimax and Nash Equilibrium in GANs
-
Lesson 94.
00:06:01
Q94 - What are token embeddings and what is their function?
-
Lesson 95.
00:11:26
Q95 - What is the self-attention mechanism?
-
Lesson 96.
00:06:54
Q96 - What is Multi-Head Self-Attention and how does it enable more effective processing of sequences in Transformers?
-
Lesson 97.
00:05:52
Q97 - What are transformers and why are they important in combating the problems of models like RNNs and LSTMs?
-
Lesson 98.
00:08:56
Q98 - Walk me through the architecture of transformers.
-
Lesson 99.
00:05:48
Q99 - What are positional encodings and how are they calculated?
-
Lesson 100.
00:02:13
Q100 - Why do we add positional encodings to Transformers but not to RNNs?