TensorFlow Developer Certificate in 2025: Zero to Mastery
Course contents (lesson titles and durations):
1. Course Outline (00:05:22)
2. What is deep learning? (00:04:39)
3. Why use deep learning? (00:09:39)
4. What are neural networks? (00:10:27)
5. What is deep learning already being used for? (00:08:37)
6. What is and why use TensorFlow? (00:07:57)
7. What is a Tensor? (00:03:38)
8. What we're going to cover throughout the course (00:04:30)
9. How to approach this course (00:05:34)
10. Creating your first tensors with TensorFlow and tf.constant() (00:18:46)
11. Creating tensors with TensorFlow and tf.Variable() (00:07:08)
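A minimal sketch of the kind of code lessons 10 and 11 introduce, assuming TensorFlow 2.x (the tensor values are illustrative, not taken from the course):

```python
import tensorflow as tf

# Immutable tensor: values are fixed once created
scalar = tf.constant(7)
matrix = tf.constant([[10., 7.],
                      [3., 2.]])

# Mutable tensor: elements can be updated in place with .assign()
changeable = tf.Variable([10, 7])
changeable[0].assign(77)  # plain item assignment (changeable[0] = 77) would fail

print(scalar.ndim, matrix.shape, changeable.numpy())
```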
12. Creating random tensors with TensorFlow (00:09:41)
13. Shuffling the order of tensors (00:09:41)
14. Creating tensors from NumPy arrays (00:11:56)
15. Getting information from your tensors (tensor attributes) (00:11:58)
16. Indexing and expanding tensors (00:12:34)
17. Manipulating tensors with basic operations (00:05:35)
18. Matrix multiplication with tensors part 1 (00:11:54)
19. Matrix multiplication with tensors part 2 (00:13:30)
20. Matrix multiplication with tensors part 3 (00:10:04)
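A rough illustration of the matrix multiplication lessons (18-20), assuming TensorFlow 2.x; the example tensors are made up:

```python
import tensorflow as tf

A = tf.constant([[1, 2], [3, 4], [5, 6]])     # shape (3, 2)
B = tf.constant([[7, 8], [9, 10], [11, 12]])  # shape (3, 2)

# Inner dimensions must match: (3, 2) @ (3, 2) raises an error,
# so transpose (or reshape) one operand first.
result = tf.matmul(A, tf.transpose(B))        # shape (3, 3)
same = A @ tf.transpose(B)                    # the @ operator also calls matmul
print(result.shape, bool(tf.reduce_all(result == same)))
```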
21. Changing the datatype of tensors (00:06:56)
22. Tensor aggregation (finding the min, max, mean & more) (00:09:50)
23. Tensor troubleshooting example (updating tensor datatypes) (00:06:14)
24. Finding the positional minimum and maximum of a tensor (argmin and argmax) (00:09:32)
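A small sketch of the aggregation and positional min/max operations named in lessons 22 and 24 (illustrative values only):

```python
import tensorflow as tf

E = tf.constant([10., 7., 3., 42., 8.])

print(tf.reduce_min(E).numpy())   # 3.0
print(tf.reduce_max(E).numpy())   # 42.0
print(tf.reduce_mean(E).numpy())  # 14.0
print(tf.argmax(E).numpy())       # 3 -> index of the maximum value
print(tf.argmin(E).numpy())       # 2 -> index of the minimum value
```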
25. Squeezing a tensor (removing all 1-dimension axes) (00:03:00)
26. One-hot encoding tensors (00:05:47)
27. Trying out more tensor math operations (00:04:48)
28. Exploring TensorFlow and NumPy's compatibility (00:05:44)
29. Making sure our tensor operations run really fast on GPUs (00:10:20)
30. Introduction to Neural Network Regression with TensorFlow (00:07:34)
31. Inputs and outputs of a neural network regression model (00:09:00)
32. Anatomy and architecture of a neural network regression model (00:07:56)
33. Creating sample regression data (so we can model it) (00:12:47)
34. The major steps in modelling with TensorFlow (00:20:16)
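A hedged sketch of the create/compile/fit workflow lesson 34 refers to, using a toy dataset invented here (y = X + 10) rather than the course's data:

```python
import tensorflow as tf
import numpy as np

# Toy regression data: y = X + 10
X = np.arange(-100, 100, 4, dtype=np.float32).reshape(-1, 1)
y = X + 10

# 1. Create a model, 2. compile it, 3. fit it
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(loss="mae",
              optimizer=tf.keras.optimizers.SGD(),
              metrics=["mae"])
model.fit(X, y, epochs=5, verbose=0)

print(model.predict(np.array([[17.0]])))  # moves toward ~27.0 with more training
```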
35. Steps in improving a model with TensorFlow part 1 (00:06:03)
36. Steps in improving a model with TensorFlow part 2 (00:09:26)
37. Steps in improving a model with TensorFlow part 3 (00:12:34)
38. Evaluating a TensorFlow model part 1 ("visualise, visualise, visualise") (00:07:25)
39. Evaluating a TensorFlow model part 2 (the three datasets) (00:11:02)
40. Evaluating a TensorFlow model part 3 (getting a model summary) (00:17:19)
41. Evaluating a TensorFlow model part 4 (visualising a model's layers) (00:07:15)
42. Evaluating a TensorFlow model part 5 (visualising a model's predictions) (00:09:17)
43. Evaluating a TensorFlow model part 6 (common regression evaluation metrics) (00:08:06)
44. Evaluating a TensorFlow regression model part 7 (mean absolute error) (00:05:53)
45. Evaluating a TensorFlow regression model part 7 (mean square error) (00:03:19)
46. Setting up TensorFlow modelling experiments part 1 (start with a simple model) (00:13:51)
47. Setting up TensorFlow modelling experiments part 2 (increasing complexity) (00:11:30)
48. Comparing and tracking your TensorFlow modelling experiments (00:10:21)
49. How to save a TensorFlow model (00:08:20)
50. How to load and use a saved TensorFlow model (00:10:16)
51. (Optional) How to save and download files from Google Colab (00:06:19)
52. Putting together what we've learned part 1 (preparing a dataset) (00:13:32)
53. Putting together what we've learned part 2 (building a regression model) (00:13:21)
54. Putting together what we've learned part 3 (improving our regression model) (00:15:48)
55. Preprocessing data with feature scaling part 1 (what is feature scaling?) (00:09:35)
56. Preprocessing data with feature scaling part 2 (normalising our data) (00:10:58)
57. Preprocessing data with feature scaling part 3 (fitting a model on scaled data) (00:07:41)
58. Introduction to neural network classification in TensorFlow (00:08:26)
59. Example classification problems (and their inputs and outputs) (00:06:39)
60. Input and output tensors of classification problems (00:06:22)
61. Typical architecture of neural network classification models with TensorFlow (00:09:37)
62. Creating and viewing classification data to model (00:11:35)
63. Checking the input and output shapes of our classification data (00:04:39)
64. Building a not very good classification model with TensorFlow (00:12:11)
65. Trying to improve our not very good classification model (00:09:14)
66. Creating a function to view our model's not so good predictions (00:15:09)
67. Make our poor classification model work for a regression dataset (00:12:19)
68. Non-linearity part 1: Straight lines and non-straight lines (00:09:39)
69. Non-linearity part 2: Building our first neural network with non-linearity (00:05:48)
70. Non-linearity part 3: Upgrading our non-linear model with more layers (00:10:19)
71. Non-linearity part 4: Modelling our non-linear data once and for all (00:08:38)
72. Non-linearity part 5: Replicating non-linear activation functions from scratch (00:14:27)
73. Getting great results in less time by tweaking the learning rate (00:14:48)
74. Using the TensorFlow History object to plot a model's loss curves (00:06:12)
75. Using callbacks to find a model's ideal learning rate (00:17:33)
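One common way to search for an ideal learning rate with a callback, in the spirit of lesson 75, is tf.keras.callbacks.LearningRateScheduler; the tiny model and random data below are placeholders, not the course's:

```python
import tensorflow as tf
import numpy as np

# Dummy binary-classification data, just to drive the callback
X = np.random.rand(100, 2).astype("float32")
y = (X.sum(axis=1) > 1.0).astype("float32").reshape(-1, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

# Raise the learning rate a little every epoch; afterwards, plot loss against
# learning rate and pick the rate just before the loss starts climbing.
lr_scheduler = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-4 * 10 ** (epoch / 20)
)
history = model.fit(X, y, epochs=20, callbacks=[lr_scheduler], verbose=0)
```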
76. Training and evaluating a model with an ideal learning rate (00:09:21)
77. Introducing more classification evaluation methods (00:06:05)
78. Finding the accuracy of our classification model (00:04:18)
79. Creating our first confusion matrix (to see where our model is getting confused) (00:08:28)
80. Making our confusion matrix prettier (00:14:01)
81. Putting things together with multi-class classification part 1: Getting the data (00:10:38)
82. Multi-class classification part 2: Becoming one with the data (00:07:08)
83. Multi-class classification part 3: Building a multi-class classification model (00:15:39)
84. Multi-class classification part 4: Improving performance with normalisation (00:12:44)
85. Multi-class classification part 5: Comparing normalised and non-normalised data (00:04:14)
86. Multi-class classification part 6: Finding the ideal learning rate (00:10:39)
87. Multi-class classification part 7: Evaluating our model (00:13:17)
88. Multi-class classification part 8: Creating a confusion matrix (00:04:27)
89. Multi-class classification part 9: Visualising random model predictions (00:10:43)
90. What "patterns" is our model learning? (00:15:34)
91. Introduction to Computer Vision with TensorFlow (00:09:37)
92. Introduction to Convolutional Neural Networks (CNNs) with TensorFlow (00:08:00)
93. Downloading an image dataset for our first Food Vision model (00:08:28)
94. Becoming One With Data (00:05:06)
95. Becoming One With Data Part 2 (00:12:27)
96. Becoming One With Data Part 3 (00:04:23)
97. Building an end-to-end CNN Model (00:18:18)
98. Using a GPU to run our CNN model 5x faster (00:09:18)
99. Trying a non-CNN model on our image data (00:08:52)
100. Improving our non-CNN model by adding more layers (00:09:53)
101. Breaking our CNN model down part 1: Becoming one with the data (00:09:04)
102. Breaking our CNN model down part 2: Preparing to load our data (00:11:47)
103. Breaking our CNN model down part 3: Loading our data with ImageDataGenerator (00:09:55)
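A sketch of loading images with ImageDataGenerator as named in lesson 103; the directory path and the binary class setup are hypothetical:

```python
import tensorflow as tf

# Rescale pixel values to [0, 1] and stream batches straight from disk.
# "pizza_steak/train/" is a hypothetical directory with one sub-folder per class.
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255.)

train_data = train_datagen.flow_from_directory(
    "pizza_steak/train/",
    target_size=(224, 224),  # resize images on the fly
    batch_size=32,
    class_mode="binary",     # two classes -> binary labels
)

images, labels = next(train_data)  # one batch of (32, 224, 224, 3) images
```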
104. Breaking our CNN model down part 4: Building a baseline CNN model (00:08:03)
105. Breaking our CNN model down part 5: Looking inside a Conv2D layer (00:15:21)
106. Breaking our CNN model down part 6: Compiling and fitting our baseline CNN (00:07:15)
107. Breaking our CNN model down part 7: Evaluating our CNN's training curves (00:11:46)
108. Breaking our CNN model down part 8: Reducing overfitting with Max Pooling (00:13:41)
109. Breaking our CNN model down part 9: Reducing overfitting with data augmentation (00:06:53)
110. Breaking our CNN model down part 10: Visualizing our augmented data (00:15:05)
111. Breaking our CNN model down part 11: Training a CNN model on augmented data (00:08:50)
112. Breaking our CNN model down part 12: Discovering the power of shuffling data (00:10:02)
113. Breaking our CNN model down part 13: Exploring options to improve our model (00:05:22)
114. Downloading a custom image to make predictions on (00:04:55)
115. Writing a helper function to load and preprocess custom images (00:10:01)
116. Making a prediction on a custom image with our trained CNN (00:10:09)
117. Multi-class CNNs part 1: Becoming one with the data (00:15:00)
118. Multi-class CNNs part 2: Preparing our data (turning it into tensors) (00:06:39)
119. Multi-class CNNs part 3: Building a multi-class CNN model (00:07:25)
120. Multi-class CNNs part 4: Fitting a multi-class CNN model to the data (00:06:03)
121. Multi-class CNNs part 5: Evaluating our multi-class CNN model (00:04:52)
122. Multi-class CNNs part 6: Trying to fix overfitting by removing layers (00:12:20)
123. Multi-class CNNs part 7: Trying to fix overfitting with data augmentation (00:11:47)
124. Multi-class CNNs part 8: Things you could do to improve your CNN model (00:04:24)
125. Multi-class CNNs part 9: Making predictions with our model on custom images (00:09:23)
126. Saving and loading our trained CNN model (00:06:22)
127. What is and why use transfer learning? (00:10:13)
128. Downloading and preparing data for our first transfer learning model (00:14:40)
129. Introducing Callbacks in TensorFlow and making a callback to track our models (00:10:02)
130. Exploring the TensorFlow Hub website for pretrained models (00:09:52)
131. Building and compiling a TensorFlow Hub feature extraction model (00:14:01)
132. Blowing our previous models out of the water with transfer learning (00:09:14)
133. Plotting the loss curves of our ResNet feature extraction model (00:07:36)
134. Building and training a pre-trained EfficientNet model on our data (00:09:43)
135. Different Types of Transfer Learning (00:11:41)
136. Comparing Our Model's Results (00:15:17)
137. Introduction to Transfer Learning in TensorFlow Part 2: Fine-tuning (00:06:17)
138. Importing a script full of helper functions (and saving lots of space) (00:07:36)
139. Downloading and turning our images into a TensorFlow BatchDataset (00:15:39)
140. Discussing the four (actually five) modelling experiments we're running (00:02:16)
141. Comparing the TensorFlow Keras Sequential API versus the Functional API (00:02:35)
142. Creating our first model with the TensorFlow Keras Functional API (00:11:39)
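A possible shape of the Functional API feature-extraction model lesson 142 points to, assuming a frozen EfficientNetB0 backbone and 10 output classes (both are assumptions, not confirmed by the listing):

```python
import tensorflow as tf

# Frozen EfficientNetB0 base (no classification head), used as a feature extractor
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3), name="input_layer")
x = base_model(inputs, training=False)  # keep batch-norm layers in inference mode
x = tf.keras.layers.GlobalAveragePooling2D(name="pooling_layer")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax", name="output_layer")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])
```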
143. Compiling and fitting our first Functional API model (00:10:54)
144. Getting a feature vector from our trained model (00:13:40)
145. Drilling into the concept of a feature vector (a learned representation) (00:03:44)
146. Downloading and preparing the data for Model 1 (1 percent of training data) (00:09:52)
147. Building a data augmentation layer to use inside our model (00:12:07)
148. Visualising what happens when images pass through our data augmentation layer (00:10:56)
149. Building Model 1 (with a data augmentation layer and 1% of training data) (00:15:56)
150. Building Model 2 (with a data augmentation layer and 10% of training data) (00:16:38)
151. Creating a ModelCheckpoint to save our model's weights during training (00:07:26)
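Lesson 151's ModelCheckpoint callback might look roughly like this; the file path and the monitored metric are assumptions:

```python
import tensorflow as tf

# Save only the weights (smaller and faster than the whole model) and keep
# just the best checkpoint as judged by validation accuracy.
checkpoint_path = "checkpoints/model_2.weights.h5"  # hypothetical path
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    save_weights_only=True,
    save_best_only=True,
    monitor="val_accuracy",
    verbose=1,
)

# Later: model.fit(..., callbacks=[checkpoint_callback])
# and    model.load_weights(checkpoint_path) to restore the saved weights.
```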
152. Fitting and evaluating Model 2 (and saving its weights using ModelCheckpoint) (00:07:15)
153. Loading and comparing saved weights to our existing trained Model 2 (00:07:18)
154. Preparing Model 3 (our first fine-tuned model) (00:20:27)
155. Fitting and evaluating Model 3 (our first fine-tuned model) (00:07:46)
156. Comparing our model's results before and after fine-tuning (00:10:27)
157. Downloading and preparing data for our biggest experiment yet (Model 4) (00:06:25)
158. Preparing our final modelling experiment (Model 4) (00:12:01)
159. Fine-tuning Model 4 on 100% of the training data and evaluating its results (00:10:20)
160. Comparing our modelling experiment results in TensorBoard (00:10:47)
161. How to view and delete previous TensorBoard experiments (00:02:05)
162. Introduction to Transfer Learning Part 3: Scaling Up (00:06:20)
163. Getting helper functions ready and downloading data to model (00:13:35)
164. Outlining the model we're going to build and building a ModelCheckpoint callback (00:05:39)
165. Creating a data augmentation layer to use with our model (00:04:40)
166. Creating a headless EfficientNetB0 model with data augmentation built in (00:08:59)
167. Fitting and evaluating our biggest transfer learning model yet (00:07:57)
168. Unfreezing some layers in our base model to prepare for fine-tuning (00:11:29)
169. Fine-tuning our feature extraction model and evaluating its performance (00:08:24)
170. Saving and loading our trained model (00:06:26)
171. Downloading a pretrained model to make and evaluate predictions with (00:06:35)
172. Making predictions with our trained model on 25,250 test samples (00:12:47)
173. Unravelling our test dataset for comparing ground truth labels to predictions (00:06:06)
174. Confirming our model's predictions are in the same order as the test labels (00:05:18)
175. Creating a confusion matrix for our model's 101 different classes (00:12:08)
176. Evaluating every individual class in our dataset (00:14:17)
177. Plotting our model's F1-scores for each separate class (00:07:37)
178. Creating a function to load and prepare images for making predictions (00:12:09)
179. Making predictions on our test images and evaluating them (00:16:07)
180. Discussing the benefits of finding your model's most wrong predictions (00:06:10)
181. Writing code to uncover our model's most wrong predictions (00:11:17)
182. Plotting and visualizing the samples our model got most wrong (00:10:37)
183. Making predictions on and plotting our own custom images (00:09:50)
184. Introduction to Milestone Project 1: Food Vision Big™ (00:05:45)
185. Making sure we have access to the right GPU for mixed precision training (00:10:18)
186. Getting helper functions ready (00:03:07)
187. Introduction to TensorFlow Datasets (TFDS) (00:12:04)
188. Exploring and becoming one with the data (Food101 from TensorFlow Datasets) (00:15:57)
189. Creating a preprocessing function to prepare our data for modelling (00:15:51)
190. Batching and preparing our datasets (to make them run fast) (00:13:48)
191. Exploring what happens when we batch and prefetch our data (00:06:50)
192. Creating modelling callbacks for our feature extraction model (00:07:15)
193. Turning on mixed precision training with TensorFlow (00:10:06)
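Turning on mixed precision (lesson 193) comes down to setting a global dtype policy; this sketch assumes TensorFlow 2.4+ and a GPU with compute capability 7.0 or higher:

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Compute in float16 where possible, keep variables in float32 for numeric stability.
# Only pays off on GPUs with compute capability 7.0+ (e.g. T4, V100, A100).
mixed_precision.set_global_policy("mixed_float16")
print(mixed_precision.global_policy())  # Policy "mixed_float16"
```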
194. Creating a feature extraction model capable of using mixed precision training (00:12:43)
195. Checking to see if our model is using mixed precision training layer by layer (00:07:57)
196. Training and evaluating a feature extraction model (Food Vision Big™) (00:10:20)
197. Introducing your Milestone Project 1 challenge: build a model to beat DeepFood (00:07:48)
198. Introduction to Natural Language Processing (NLP) and Sequence Problems (00:12:52)
199. Example NLP inputs and outputs (00:07:23)
200. The typical architecture of a Recurrent Neural Network (RNN) (00:09:04)
201. Preparing a notebook for our first NLP with TensorFlow project (00:08:53)
202. Becoming one with the data and visualizing a text dataset (00:16:42)
203. Splitting data into training and validation sets (00:06:27)
204. Converting text data to numbers using tokenisation and embeddings (overview) (00:09:23)
205. Setting up a TensorFlow TextVectorization layer to convert text to numbers (00:17:11)
206. Mapping the TextVectorization layer to text data and turning it into numbers (00:11:03)
207. Creating an Embedding layer to turn tokenised text into embedding vectors (00:12:28)
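A rough sketch of the TextVectorization plus Embedding pipeline behind lessons 205-207, assuming TensorFlow 2.6+; the vocabulary size, sequence length and sample sentences are invented:

```python
import tensorflow as tf

sentences = ["the weather today is fantastic",
             "there is a flood in my street"]  # toy examples

# Map raw text to integer token ids, padding/truncating to a fixed length
text_vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=10000,           # cap the vocabulary size
    output_sequence_length=15,  # pad/trim every sequence to 15 tokens
)
text_vectorizer.adapt(sentences)  # build the vocabulary from the text

# Turn each token id into a trainable dense vector
embedding = tf.keras.layers.Embedding(input_dim=10000, output_dim=128)

token_ids = text_vectorizer(sentences)  # shape (2, 15)
vectors = embedding(token_ids)          # shape (2, 15, 128)
print(token_ids.shape, vectors.shape)
```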
208. Discussing the various modelling experiments we're going to run (00:08:58)
209. Model 0: Building a baseline model to try and improve upon (00:09:26)
210. Creating a function to track and evaluate our model's results (00:12:15)
211. Model 1: Building, fitting and evaluating our first deep model on text data (00:20:52)
212. Visualizing our model's learned word embeddings with TensorFlow's projector tool (00:20:44)
213. High-level overview of Recurrent Neural Networks (RNNs) + where to learn more (00:09:35)
214. Model 2: Building, fitting and evaluating our first TensorFlow RNN model (LSTM) (00:18:17)
215. Model 3: Building, fitting and evaluating a GRU-cell powered RNN (00:16:57)
216. Model 4: Building, fitting and evaluating a bidirectional RNN model (00:19:35)
217. Discussing the intuition behind Conv1D neural networks for text and sequences (00:19:32)
218. Model 5: Building, fitting and evaluating a 1D CNN for text (00:09:58)
219. Using TensorFlow Hub for pretrained word embeddings (transfer learning for NLP) (00:13:46)
220. Model 6: Building, training and evaluating a transfer learning model for NLP (00:10:46)
221. Preparing subsets of data for model 7 (same as model 6 but 10% of data) (00:10:53)
222. Model 7: Building, training and evaluating a transfer learning model on 10% data (00:10:05)
223. Fixing our data leakage issue with model 7 and retraining it (00:13:43)
224. Comparing all our modelling experiments' evaluation metrics (00:13:15)
225. Uploading our model's training logs to TensorBoard and comparing them (00:11:15)
226. Saving and loading in a trained NLP model with TensorFlow (00:10:26)
227. Downloading a pretrained model and preparing data to investigate predictions (00:13:25)
228. Visualizing our model's most wrong predictions (00:08:29)
229. Making and visualizing predictions on the test dataset (00:08:28)
230. Understanding the concept of the speed/score tradeoff (00:15:02)
231. Introduction to Milestone Project 2: SkimLit (00:14:21)
232. What we're going to cover in Milestone Project 2 (NLP for medical abstracts) (00:07:23)
233. SkimLit inputs and outputs (00:11:03)
234. Setting up our notebook for Milestone Project 2 (getting the data) (00:14:59)
235. Visualizing examples from the dataset (becoming one with the data) (00:13:19)
236. Writing a preprocessing function to structure our data for modelling (00:19:51)
237. Performing visual data analysis on our preprocessed text (00:07:56)
238. Turning our target labels into numbers (ML models require numbers) (00:13:16)
239. Model 0: Creating, fitting and evaluating a baseline model for SkimLit (00:09:26)
240. Preparing our data for deep sequence models (00:09:56)
241. Creating a text vectoriser to map our tokens (text) to numbers (00:14:08)
242. Creating a custom token embedding layer with TensorFlow (00:09:15)
243. Creating fast-loading datasets with the TensorFlow tf.data API (00:09:50)
244. Model 1: Building, fitting and evaluating a Conv1D with token embeddings (00:17:22)
245. Preparing a pretrained embedding layer from TensorFlow Hub for Model 2 (00:10:54)
246. Model 2: Building, fitting and evaluating a Conv1D model with token embeddings (00:11:31)
247. Creating a character-level tokeniser with TensorFlow's TextVectorization layer (00:23:25)
248. Creating a character-level embedding layer with tf.keras.layers.Embedding (00:07:45)
249. Model 3: Building, fitting and evaluating a Conv1D model on character embeddings (00:13:46)
250. Discussing how we're going to build Model 4 (character + token embeddings) (00:06:05)
251. Model 4: Building a multi-input model (hybrid token + character embeddings) (00:15:37)
252. Model 4: Plotting and visually exploring different data inputs (00:07:33)
253. Crafting multi-input fast-loading tf.data datasets for Model 4 (00:08:42)
254. Model 4: Building, fitting and evaluating a hybrid embedding model (00:13:19)
255. Model 5: Adding positional embeddings via feature engineering (overview) (00:07:19)
256. Encoding the line number feature to be used with Model 5 (00:12:26)
257. Encoding the total lines feature to be used with Model 5 (00:07:57)
258. Model 5: Building the foundations of a tribrid embedding model (00:09:20)
259. Model 5: Completing the build of a tribrid embedding model for sequences (00:14:09)
260. Visually inspecting the architecture of our tribrid embedding model (00:10:26)
261. Creating multi-level data input pipelines for Model 5 with the tf.data API (00:09:01)
262. Bringing SkimLit to life!!! (fitting and evaluating Model 5) (00:10:36)
263. Comparing the performance of all of our modelling experiments (00:09:37)
264. Saving, loading & testing our best performing model (00:07:49)
265. Congratulations and your challenge before heading to the next module (00:12:34)
266. Introduction to Milestone Project 3 (BitPredict) & where you can get help (00:03:54)
267. What is a time series problem and example forecasting problems at Uber (00:07:47)
268. Example forecasting problems in daily life (00:04:53)
269. What can be forecast? (00:07:58)
270. What we're going to cover (broadly) (00:02:36)
271. Time series forecasting inputs and outputs (00:08:56)
272. Downloading and inspecting our Bitcoin historical dataset (00:14:59)
273. Different kinds of time series patterns & different amounts of feature variables (00:07:40)
274. Visualizing our Bitcoin historical data with pandas (00:04:53)
275. Reading in our Bitcoin data with Python's CSV module (00:10:59)
276. Creating train and test splits for time series (the wrong way) (00:08:38)
277. Creating train and test splits for time series (the right way) (00:07:13)
278. Creating a plotting function to visualize our time series data (00:07:58)
279. Discussing the various modelling experiments we're going to be running (00:09:12)
280. Model 0: Making and visualizing a naive forecast model (00:12:17)
281. Discussing some of the most common time series evaluation metrics (00:11:12)
282. Implementing MASE with TensorFlow (00:09:39)
283. Creating a function to evaluate our model's forecasts with various metrics (00:10:12)
284. Discussing other non-TensorFlow kinds of time series forecasting models (00:05:07)
285. Formatting data Part 2: Creating a function to label our windowed time series (00:13:02)
286. Discussing the use of windows and horizons in time series data (00:07:51)
287. Writing a preprocessing function to turn time series data into windows & labels (00:23:36)
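One plain-NumPy way to window a series into (window, horizon) pairs, in the spirit of lessons 286 and 287; the course's own helper may be structured differently:

```python
import numpy as np

def make_windows(values, window_size=7, horizon=1):
    """Split a 1-D series into (window, label) pairs, e.g. 7 steps in -> 1 step out."""
    windows, labels = [], []
    for i in range(len(values) - window_size - horizon + 1):
        windows.append(values[i:i + window_size])
        labels.append(values[i + window_size:i + window_size + horizon])
    return np.array(windows), np.array(labels)

prices = np.arange(10, dtype=np.float32)  # stand-in for a Bitcoin price series
windows, labels = make_windows(prices, window_size=7, horizon=1)
print(windows.shape, labels.shape)        # (3, 7) (3, 1)
```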
288. Turning our windowed time series data into training and test sets (00:10:02)
289. Creating a modelling checkpoint callback to save our best performing model (00:07:26)
290. Model 1: Building, compiling and fitting a deep learning model on Bitcoin data (00:16:59)
291. Creating a function to make predictions with our trained models (00:14:04)
292. Model 2: Building, fitting and evaluating a deep model with a larger window size (00:17:44)
293. Model 3: Building, fitting and evaluating a model with a larger horizon size (00:13:16)
294. Adjusting the evaluation function to work for predictions with larger horizons (00:08:35)
295. Model 3: Visualizing the results (00:08:45)
296. Comparing our modelling experiments so far and discussing autocorrelation (00:09:45)
297. Preparing data for building a Conv1D model (00:13:22)
298. Model 4: Building, fitting and evaluating a Conv1D model on our Bitcoin data (00:14:52)
299. Model 5: Building, fitting and evaluating an LSTM (RNN) model on our Bitcoin data (00:16:06)
300. Investigating how to turn our univariate time series into multivariate (00:13:53)
301. Creating and plotting a multivariate time series with BTC price and block reward (00:12:13)
302. Preparing our multivariate time series for a model (00:13:38)
303. Model 6: Building, fitting and evaluating a multivariate time series model (00:09:26)
304. Model 7: Discussing what we're going to be doing with the N-BEATS algorithm (00:09:40)
305. Model 7: Replicating the N-BEATS basic block with TensorFlow layer subclassing (00:18:39)
306. Model 7: Testing our N-BEATS block implementation with dummy data inputs (00:15:03)
307. Model 7: Setting up hyperparameters for the N-BEATS algorithm (00:08:51)
308. Model 7: Getting ready for residual connections (00:12:56)
309. Model 7: Outlining the steps we're going to take to build the N-BEATS model (00:10:06)
310. Model 7: Putting together the pieces of the puzzle of the N-BEATS model (00:22:23)
311. Model 7: Plotting the N-BEATS algorithm we've created and admiring its beauty (00:06:47)
312. Model 8: Ensemble model overview (00:04:44)
313. Model 8: Building, compiling and fitting an ensemble of models (00:20:05)
314. Model 8: Making and evaluating predictions with our ensemble model (00:16:10)
315. Discussing the importance of prediction intervals in forecasting (00:12:57)
316. Getting the upper and lower bounds of our prediction intervals (00:07:58)
317. Plotting the prediction intervals of our ensemble model predictions (00:13:03)
318. (Optional) Discussing the types of uncertainty in machine learning (00:13:42)
319. Model 9: Preparing data to create a model capable of predicting into the future (00:08:25)
320. Model 9: Building, compiling and fitting a future predictions model (00:05:02)
321. Model 9: Discussing what's required for our model to make future predictions (00:08:31)
322. Model 9: Creating a function to make forecasts into the future (00:12:09)
323. Model 9: Plotting our model's future forecasts (00:13:10)
324. Model 10: Introducing the turkey problem and making data for it (00:14:16)
325. Model 10: Building a model to predict on turkey data (why forecasting is BS) (00:13:39)
326. Comparing the results of all of our models and discussing where to go next (00:13:00)
327. What is the TensorFlow Developer Certification? (00:05:29)
328. Why the TensorFlow Developer Certification? (00:06:58)
329. How to prepare (your brain) for the TensorFlow Developer Certification (00:08:15)
330. How to prepare (your computer) for the TensorFlow Developer Certification (00:12:44)
331. What to do after the TensorFlow Developer Certification exam (00:02:14)
332. What is Machine Learning? (00:04:52)
333. AI/Machine Learning/Data Science (00:06:53)
334. Exercise: Machine Learning Playground (00:06:17)
335. How Did We Get Here? (00:06:04)
336. Exercise: YouTube Recommendation Engine (00:04:25)
337. Types of Machine Learning (00:04:42)
338. What Is Machine Learning? Round 2 (00:04:45)
339. Section Review (00:01:49)
340. Section Overview (00:02:39)
341. Introducing Our Framework (00:03:09)
342. 6 Step Machine Learning Framework (00:05:00)
343. Types of Machine Learning Problems (00:10:33)
344. Types of Data (00:04:51)
345. Types of Evaluation (00:03:32)
346. Features In Data (00:05:59)
347. Modelling - Splitting Data (00:05:23)
348. Modelling - Picking the Model (00:04:36)
349. Modelling - Tuning (00:03:18)
350. Modelling - Comparison (00:03:36)
351. Experimentation (00:09:33)
352. Tools We Will Use (00:04:01)
353. Section Overview (00:02:28)
354. Pandas Introduction (00:04:30)
355. Series, Data Frames and CSVs (00:13:22)
356. Describing Data with Pandas (00:09:49)
357. Selecting and Viewing Data with Pandas (00:11:09)
358. Selecting and Viewing Data with Pandas Part 2 (00:13:07)
359. Manipulating Data (00:13:57)
360. Manipulating Data 2 (00:09:57)
361. Manipulating Data 3 (00:10:13)
362. How To Download The Course Assignments (00:02:41)
363. Section Overview (00:07:44)
364. NumPy Introduction (00:05:18)
365. NumPy DataTypes and Attributes (00:14:06)
366. Creating NumPy Arrays (00:09:23)
367. NumPy Random Seed (00:07:18)
368. Viewing Arrays and Matrices (00:09:36)
369. Manipulating Arrays (00:11:32)
370. Manipulating Arrays 2 (00:09:45)
371. Standard Deviation and Variance (00:07:11)
372. Reshape and Transpose (00:07:27)
373. Dot Product vs Element Wise (00:11:46)
374. Exercise: Nut Butter Store Sales (00:03:34)
375. Comparison Operators (00:13:05)
376. Sorting Arrays (00:06:20)
377. Turn Images Into NumPy Arrays (00:07:38)