• Lesson 1. 00:03:34
PyTorch for Deep Learning
• Lesson 2. 00:05:54
Course Welcome and What Is Deep Learning
• Lesson 3. 00:03:34
Why Use Machine Learning or Deep Learning
• Lesson 4. 00:05:40
The Number 1 Rule of Machine Learning and What Is Deep Learning Good For
• Lesson 5. 00:06:07
Machine Learning vs. Deep Learning
• Lesson 6. 00:09:22
Anatomy of Neural Networks
• Lesson 7. 00:04:31
• Lesson 8. 00:06:22
What Can Deep Learning Be Used For
• Lesson 9. 00:10:13
What Is and Why PyTorch
• Lesson 10. 00:04:16
What Are Tensors
• Lesson 11. 00:06:06
What We Are Going To Cover With PyTorch
• Lesson 12. 00:05:10
How To and How Not To Approach This Course
• Lesson 13. 00:05:22
Important Resources For This Course
• Lesson 14. 00:07:40
Getting Setup to Write PyTorch Code
• Lesson 15. 00:13:26
Introduction to PyTorch Tensors
• Lesson 16. 00:09:59
Creating Random Tensors in PyTorch
• Lesson 17. 00:03:09
Creating Tensors With Zeros and Ones in PyTorch
• Lesson 18. 00:05:18
Creating a Tensor Range and Tensors Like Other Tensors
• Lesson 19. 00:09:25
Dealing With Tensor Data Types
• Lesson 20. 00:08:23
Getting Tensor Attributes
• Lesson 21. 00:06:00
Manipulating Tensors (Tensor Operations)
• Lesson 22. 00:09:35
Matrix Multiplication (Part 1)
• Lesson 23. 00:07:52
Matrix Multiplication (Part 2): The Two Main Rules of Matrix Multiplication
• Lesson 24. 00:12:58
Matrix Multiplication (Part 3): Dealing With Tensor Shape Errors
• Lesson 25. 00:06:10
Finding the Min, Max, Mean and Sum of Tensors (Tensor Aggregation)
• Lesson 26. 00:03:17
Finding The Positional Min and Max of Tensors
• Lesson 27. 00:13:41
Reshaping, Viewing and Stacking Tensors
• Lesson 28. 00:11:56
Squeezing, Unsqueezing and Permuting Tensors
• Lesson 29. 00:09:32
Selecting Data From Tensors (Indexing)
• Lesson 30. 00:09:09
PyTorch Tensors and NumPy
• Lesson 31. 00:10:47
PyTorch Reproducibility (Taking the Random Out of Random)
• Lesson 32. 00:11:51
Different Ways of Accessing a GPU in PyTorch
• Lesson 33. 00:07:44
Setting Up Device-Agnostic Code and Putting Tensors On and Off the GPU
• Lesson 34. 00:04:50
PyTorch Fundamentals: Exercises and Extra-Curriculum
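The fundamentals module above (creating tensors, attributes, matrix multiplication, aggregation, reshaping and stacking) can be sketched in a few lines. This is an illustrative summary, not code from the course itself:

```python
import torch

torch.manual_seed(42)  # reproducibility, as covered in the reproducibility lesson

# Creating tensors: random, zeros, ones, a range
random_tensor = torch.rand(3, 4)
zeros = torch.zeros(3, 4)
ones = torch.ones(3, 4)
tensor_range = torch.arange(0, 10)

# Tensor attributes: shape, dtype, device
print(random_tensor.shape, random_tensor.dtype, random_tensor.device)

# Matrix multiplication: inner dimensions must match, (3, 4) @ (4, 2) -> (3, 2)
matmul_result = random_tensor @ torch.rand(4, 2)

# Aggregation: min, max, mean, sum
aggregates = (random_tensor.min(), random_tensor.max(),
              random_tensor.mean(), random_tensor.sum())

# Reshaping and stacking
reshaped = tensor_range.reshape(2, 5)
stacked = torch.stack([tensor_range, tensor_range])
```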
• Lesson 35. 00:02:46
Introduction and Where You Can Get Help
• Lesson 36. 00:07:15
Getting Setup and What We Are Covering
• Lesson 37. 00:09:42
Creating a Simple Dataset Using the Linear Regression Formula
• Lesson 38. 00:08:21
Splitting Our Data Into Training and Test Sets
• Lesson 39. 00:07:46
Building a Function to Visualize Our Data
• Lesson 40. 00:14:10
Creating Our First PyTorch Model for Linear Regression
• Lesson 41. 00:06:11
Breaking Down What's Happening in Our PyTorch Linear Regression Model
• Lesson 42. 00:06:27
Discussing Some of the Most Important PyTorch Model Building Classes
• Lesson 43. 00:09:51
Checking Out the Internals of Our PyTorch Model
• Lesson 44. 00:11:13
Making Predictions With Our Random Model Using Inference Mode
• Lesson 45. 00:08:15
Training a Model Intuition (The Things We Need)
• Lesson 46. 00:12:52
Setting Up an Optimizer and a Loss Function
• Lesson 47. 00:13:54
PyTorch Training Loop Steps and Intuition
• Lesson 48. 00:08:47
Writing Code for a PyTorch Training Loop
• Lesson 49. 00:14:58
Reviewing the Steps in a Training Loop Step by Step
• Lesson 50. 00:09:26
Running Our Training Loop Epoch by Epoch and Seeing What Happens
• Lesson 51. 00:11:38
Writing Testing Loop Code and Discussing What's Happening Step by Step
• Lesson 52. 00:14:43
Reviewing What Happens in a Testing Loop Step by Step
• Lesson 53. 00:13:46
Writing Code to Save a PyTorch Model
• Lesson 54. 00:08:45
Writing Code to Load a PyTorch Model
• Lesson 55. 00:06:03
Setting Up to Practice Everything We Have Done Using Device-Agnostic Code
• Lesson 56. 00:06:09
Putting Everything Together (Part 1): Data
• Lesson 57. 00:10:08
Putting Everything Together (Part 2): Building a Model
• Lesson 58. 00:12:41
Putting Everything Together (Part 3): Training a Model
• Lesson 59. 00:05:18
Putting Everything Together (Part 4): Making Predictions With a Trained Model
• Lesson 60. 00:09:11
• Lesson 61. 00:02:57
Exercise: Imposter Syndrome
• Lesson 62. 00:03:58
PyTorch Workflow: Exercises and Extra-Curriculum
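The workflow module above walks through data, model, loss/optimizer, training and testing loops, and saving/loading. A minimal end-to-end sketch of those steps (illustrative sizes and hyperparameters, not the course's exact code):

```python
import torch
from torch import nn

# Toy data from the linear regression formula y = weight * X + bias
weight, bias = 0.7, 0.3
X = torch.arange(0, 1, 0.02).unsqueeze(dim=1)
y = weight * X + bias

# Train/test split (80/20)
split = int(0.8 * len(X))
X_train, y_train, X_test, y_test = X[:split], y[:split], X[split:], y[split:]

class LinearRegressionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(in_features=1, out_features=1)

    def forward(self, x):
        return self.linear(x)

torch.manual_seed(42)
model = LinearRegressionModel()
loss_fn = nn.L1Loss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training loop: forward -> loss -> zero grad -> backward -> step
for epoch in range(200):
    model.train()
    loss = loss_fn(model(X_train), y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Testing loop with inference mode
model.eval()
with torch.inference_mode():
    test_loss = loss_fn(model(X_test), y_test)

# Saving and loading the model's state_dict
torch.save(model.state_dict(), "model.pth")
model.load_state_dict(torch.load("model.pth"))
```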
• Lesson 63. 00:09:42
Introduction to Machine Learning Classification With PyTorch
• Lesson 64. 00:09:08
Classification Problem Example: Input and Output Shapes
• Lesson 65. 00:06:32
Typical Architecture of a Classification Neural Network (Overview)
• Lesson 66. 00:12:19
Making a Toy Classification Dataset
• Lesson 67. 00:11:56
Turning Our Data into Tensors and Making a Training and Test Split
• Lesson 68. 00:04:20
Laying Out Steps for Modelling and Setting Up Device-Agnostic Code
• Lesson 69. 00:10:58
Coding a Small Neural Network to Handle Our Classification Data
• Lesson 70. 00:06:58
Making Our Neural Network Visual
• Lesson 71. 00:13:18
Recreating and Exploring the Insides of Our Model Using nn.Sequential
• Lesson 72. 00:14:51
Setting Up a Loss Function, Optimizer and Evaluation Function for Our Classification Network
• Lesson 73. 00:16:08
Going from Model Logits to Prediction Probabilities to Prediction Labels
• Lesson 74. 00:15:27
Coding a Training and Testing Optimization Loop for Our Classification Model
• Lesson 75. 00:14:14
Writing Code to Download a Helper Function to Visualize Our Model's Predictions
• Lesson 76. 00:08:03
Discussing Options to Improve a Model
• Lesson 77. 00:09:07
Creating a New Model with More Layers and Hidden Units
• Lesson 78. 00:12:46
Writing Training and Testing Code to See if Our New and Upgraded Model Performs Better
• Lesson 79. 00:08:08
Creating a Straight Line Dataset to See if Our Model is Learning Anything
• Lesson 80. 00:10:02
Building and Training a Model to Fit on Straight Line Data
• Lesson 81. 00:05:24
Evaluating Our Model's Predictions on Straight Line Data
• Lesson 82. 00:10:01
Introducing the Missing Piece for Our Classification Model: Non-Linearity
• Lesson 83. 00:10:26
Building Our First Neural Network with Non-Linearity
• Lesson 84. 00:15:13
Writing Training and Testing Code for Our First Non-Linear Model
• Lesson 85. 00:05:48
Making Predictions with and Evaluating Our First Non-Linear Model
• Lesson 86. 00:09:35
Replicating Non-Linear Activation Functions with Pure PyTorch
• Lesson 87. 00:11:25
Putting It All Together (Part 1): Building a Multiclass Dataset
• Lesson 88. 00:12:28
Creating a Multi-Class Classification Model with PyTorch
• Lesson 89. 00:06:41
Setting Up a Loss Function and Optimizer for Our Multi-Class Model
• Lesson 90. 00:11:03
Going from Logits to Prediction Probabilities to Prediction Labels with a Multi-Class Model
• Lesson 91. 00:16:18
Training a Multi-Class Classification Model and Troubleshooting Code on the Fly
• Lesson 92. 00:08:00
Making Predictions with and Evaluating Our Multi-Class Classification Model
• Lesson 93. 00:09:18
Discussing a Few More Classification Metrics
• Lesson 94. 00:02:59
PyTorch Classification: Exercises and Extra-Curriculum
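A recurring theme of the classification module is going from logits to prediction probabilities to prediction labels. A minimal sketch of that pipeline for both the binary and multi-class cases (illustrative random logits, not course code):

```python
import torch

torch.manual_seed(42)

# Binary classification: logits -> probabilities (sigmoid) -> labels (round)
binary_logits = torch.randn(5)               # raw model outputs
binary_probs = torch.sigmoid(binary_logits)  # squashed into [0, 1]
binary_labels = torch.round(binary_probs)    # threshold at 0.5

# Multi-class classification: logits -> probabilities (softmax) -> labels (argmax)
multi_logits = torch.randn(5, 3)                  # 5 samples, 3 classes
multi_probs = torch.softmax(multi_logits, dim=1)  # each row sums to 1
multi_labels = multi_probs.argmax(dim=1)          # most likely class per sample
```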
• Lesson 95. 00:11:48
What Is a Computer Vision Problem and What We Are Going to Cover
• Lesson 96. 00:10:09
Computer Vision Input and Output Shapes
• Lesson 97. 00:05:03
What Is a Convolutional Neural Network (CNN)
• Lesson 98. 00:09:20
Discussing and Importing the Base Computer Vision Libraries in PyTorch
• Lesson 99. 00:14:31
Getting a Computer Vision Dataset and Checking Out Its Input and Output Shapes
• Lesson 100. 00:09:52
Visualizing Random Samples of Data
• Lesson 101. 00:07:18
• Lesson 102. 00:12:24
• Lesson 103. 00:14:39
Model 0: Creating a Baseline Model with Two Linear Layers
• Lesson 104. 00:10:30
Creating a Loss Function and an Optimizer for Model 0
• Lesson 105. 00:05:35
Creating a Function to Time Our Modelling Code
• Lesson 106. 00:21:26
Writing Training and Testing Loops for Our Batched Data
• Lesson 107. 00:12:59
Writing an Evaluation Function to Get Our Model's Results
• Lesson 108. 00:03:47
Setting Up Device-Agnostic Code for Running Experiments on the GPU
• Lesson 109. 00:09:04
Model 1: Creating a Model with Non-Linear Functions
• Lesson 110. 00:03:05
Model 1: Creating a Loss Function and Optimizer
• Lesson 111. 00:08:29
Turning Our Training Loop into a Function
• Lesson 112. 00:06:36
Turning Our Testing Loop into a Function
• Lesson 113. 00:11:53
Training and Testing Model 1 with Our Training and Testing Functions
• Lesson 114. 00:04:09
Getting a Results Dictionary for Model 1
• Lesson 115. 00:08:25
Model 2: Convolutional Neural Networks High Level Overview
• Lesson 116. 00:19:49
Model 2: Coding Our First Convolutional Neural Network with PyTorch
• Lesson 117. 00:15:00
Model 2: Breaking Down Conv2D Step by Step
• Lesson 118. 00:15:49
Model 2: Breaking Down MaxPool2D Step by Step
• Lesson 119. 00:13:46
Model 2: Using a Trick to Find the Input and Output Shapes of Each of Our Layers
• Lesson 120. 00:02:39
Model 2: Setting Up a Loss Function and Optimizer
• Lesson 121. 00:07:55
Model 2: Training Our First CNN and Evaluating Its Results
• Lesson 122. 00:07:24
Comparing the Results of Our Modelling Experiments
• Lesson 123. 00:11:40
Making Predictions on Random Test Samples with the Best Trained Model
• Lesson 124. 00:08:11
Plotting Our Best Model's Predictions on Random Test Samples and Evaluating Them
• Lesson 125. 00:15:21
Making Predictions Across the Whole Test Dataset and Importing Libraries to Plot a Confusion Matrix
• Lesson 126. 00:06:55
Evaluating Our Best Model's Predictions with a Confusion Matrix
• Lesson 127. 00:11:28
• Lesson 128. 00:06:02
Recapping What We Have Covered Plus Exercises and Extra-Curriculum
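The "trick" from the computer vision module for finding the input and output shapes of each layer is simply passing a dummy tensor through the model and printing the shape after every step. A small sketch with a TinyVGG-style block (layer sizes are illustrative):

```python
import torch
from torch import nn

# A TinyVGG-style convolutional block; passing a dummy tensor through each
# layer reveals the input/output shapes without computing them by hand.
block = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=10, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(in_channels=10, out_channels=10, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),  # halves height and width
)

dummy = torch.randn(1, 1, 28, 28)  # (batch, channels, height, width)
x = dummy
for layer in block:
    x = layer(x)
    print(layer.__class__.__name__, "->", tuple(x.shape))
```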
• Lesson 129. 00:09:54
What Is a Custom Dataset and What We Are Going to Cover
• Lesson 130. 00:05:55
Importing PyTorch and Setting Up Device-Agnostic Code
• Lesson 131. 00:14:05
• Lesson 132. 00:08:42
Becoming One With the Data (Part 1): Exploring the Data Format
• Lesson 133. 00:11:41
Becoming One With the Data (Part 2): Visualizing a Random Image
• Lesson 134. 00:04:48
Becoming One With the Data (Part 3): Visualizing a Random Image with Matplotlib
• Lesson 135. 00:08:54
Transforming Data (Part 1): Turning Images Into Tensors
• Lesson 136. 00:11:31
Transforming Data (Part 2): Visualizing Transformed Images
• Lesson 137. 00:09:19
Loading All of Our Images and Turning Them Into Tensors With ImageFolder
• Lesson 138. 00:07:19
Visualizing a Loaded Image From the Train Dataset
• Lesson 139. 00:09:04
Turning Our Image Datasets into PyTorch DataLoaders
• Lesson 140. 00:08:01
Creating a Custom Dataset Class in PyTorch: High Level Overview
• Lesson 141. 00:09:07
Creating a Helper Function to Get Class Names From a Directory
• Lesson 142. 00:17:47
Writing a PyTorch Custom Dataset Class from Scratch to Load Our Images
• Lesson 143. 00:07:14
Comparing Our Custom Dataset Class to the Original ImageFolder Class
• Lesson 144. 00:14:19
Writing a Helper Function to Visualize Random Images from Our Custom Dataset
• Lesson 145. 00:07:00
Turning Our Custom Datasets Into DataLoaders
• Lesson 146. 00:14:24
Exploring State of the Art Data Augmentation With Torchvision Transforms
• Lesson 147. 00:08:16
• Lesson 148. 00:11:25
Building a Baseline Model (Part 2): Replicating Tiny VGG from Scratch
• Lesson 149. 00:08:10
Building a Baseline Model (Part 3): Doing a Forward Pass to Test Our Model Shapes
• Lesson 150. 00:06:39
Using the Torchinfo Package to Get a Summary of Our Model
• Lesson 151. 00:13:04
Creating Training and Testing Loop Functions
• Lesson 152. 00:10:15
Creating a Train Function to Train and Evaluate Our Models
• Lesson 153. 00:09:54
Training and Evaluating Model 0 With Our Training Functions
• Lesson 154. 00:09:03
Plotting the Loss Curves of Model 0
• Lesson 155. 00:14:14
Discussing the Balance Between Overfitting and Underfitting and How to Deal With Each
• Lesson 156. 00:11:04
Creating Augmented Training Datasets and DataLoaders for Model 1
• Lesson 157. 00:07:11
Constructing and Training Model 1
• Lesson 158. 00:03:23
Plotting the Loss Curves of Model 1
• Lesson 159. 00:10:56
Plotting the Loss Curves of All of Our Models Against Each Other
• Lesson 160. 00:05:33
• Lesson 161. 00:07:01
Predicting on Custom Data (Part 2): Loading In a Custom Image With PyTorch
• Lesson 162. 00:14:08
Predicting on Custom Data (Part 3): Getting Our Custom Image Into the Right Format
• Lesson 163. 00:04:25
Predicting on Custom Data (Part 4): Turning Our Model's Raw Outputs Into Prediction Labels
• Lesson 164. 00:12:48
Predicting on Custom Data (Part 5): Putting It All Together
• Lesson 165. 00:06:05
Summary of What We Have Covered Plus Exercises and Extra-Curriculum
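The heart of the custom datasets module is writing a `Dataset` subclass with `__len__` and `__getitem__` and wrapping it in a `DataLoader`. The course version loads image files from class-named folders; this in-memory stand-in (with fake "images") keeps the same skeleton so it runs anywhere:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyImageDataset(Dataset):
    """Hypothetical stand-in for a folder-of-images custom dataset."""
    def __init__(self, num_samples: int = 12, num_classes: int = 3):
        self.images = torch.randn(num_samples, 3, 64, 64)   # fake RGB images
        self.labels = torch.arange(num_samples) % num_classes
        self.classes = [f"class_{i}" for i in range(num_classes)]

    def __len__(self) -> int:
        return len(self.images)

    def __getitem__(self, index: int):
        return self.images[index], self.labels[index]

dataset = ToyImageDataset()
dataloader = DataLoader(dataset, batch_size=4, shuffle=True)
images, labels = next(iter(dataloader))
```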
• Lesson 166. 00:11:35
What Is Going Modular and What We Are Going to Cover
• Lesson 167. 00:07:41
Going Modular Notebook (Part 1): Running It End to End
• Lesson 168. 00:04:51
• Lesson 169. 00:13:51
Writing the Outline for Our First Python Script to Set Up the Data
• Lesson 170. 00:10:36
Creating a Python Script to Create Our PyTorch DataLoaders
• Lesson 171. 00:09:19
Turning Our Model Building Code into a Python Script
• Lesson 172. 00:06:17
Turning Our Model Training Code into a Python Script
• Lesson 173. 00:06:08
Turning Our Utility Function to Save a Model into a Python Script
• Lesson 174. 00:15:47
Creating a Training Script to Train Our Model in One Line of Code
• Lesson 175. 00:06:00
Going Modular: Summary, Exercises and Extra-Curriculum
• Lesson 176. 00:10:06
Introduction: What Is Transfer Learning and Why Use It
• Lesson 177. 00:05:13
Where Can You Find Pretrained Models and What We Are Going to Cover
• Lesson 178. 00:08:06
• Lesson 179. 00:06:42
• Lesson 180. 00:08:01
• Lesson 181. 00:14:41
Turning Our Data into DataLoaders with Manually Created Transforms
• Lesson 182. 00:13:07
Turning Our Data into DataLoaders with Automatically Created Transforms
• Lesson 183. 00:12:16
Which Pretrained Model Should You Use
• Lesson 184. 00:10:57
Setting Up a Pretrained Model with Torchvision
• Lesson 185. 00:07:12
Different Kinds of Transfer Learning
• Lesson 186. 00:06:50
Getting a Summary of the Different Layers of Our Model
• Lesson 187. 00:13:27
Freezing the Base Layers of Our Model and Updating the Classifier Head
• Lesson 188. 00:07:55
Training Our First Transfer Learning Feature Extractor Model
• Lesson 189. 00:06:27
Plotting the Loss Curves of Our Transfer Learning Model
• Lesson 190. 00:07:58
Outlining the Steps to Make Predictions on the Test Images
• Lesson 191. 00:10:01
Creating a Function to Predict On and Plot Images
• Lesson 192. 00:07:24
Making and Plotting Predictions on Test Images
• Lesson 193. 00:06:22
Making a Prediction on a Custom Image
• Lesson 194. 00:03:22
Main Takeaways, Exercises and Extra-Curriculum
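The feature-extractor pattern at the core of the transfer learning module is: freeze the base layers, then replace the classifier head to match your own number of classes. The course uses a pretrained torchvision model; this tiny hypothetical stand-in illustrates the same freezing pattern without any download:

```python
import torch
from torch import nn

class FakePretrained(nn.Module):
    """Hypothetical, tiny stand-in for a pretrained torchvision model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(8, 16), nn.ReLU())  # "base" layers
        self.classifier = nn.Linear(16, 10)  # original 10-class head

    def forward(self, x):
        return self.classifier(self.features(x))

model = FakePretrained()

# 1. Freeze every parameter in the base layers
for param in model.features.parameters():
    param.requires_grad = False

# 2. Replace the classifier head with one matching our 3 classes
model.classifier = nn.Linear(in_features=16, out_features=3)

# Only the new head's parameters remain trainable
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
output = model(torch.randn(2, 8))
```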
• Lesson 195. 00:07:07
What Is Experiment Tracking and Why Track Experiments
• Lesson 196. 00:08:14
Getting Setup by Importing Torch Libraries and Going Modular Code
• Lesson 197. 00:10:24
• Lesson 198. 00:08:31
Turning Our Data into DataLoaders Using Manual Transforms
• Lesson 199. 00:07:48
Turning Our Data into DataLoaders Using Automatic Transforms
• Lesson 200. 00:10:29
Preparing a Pretrained Model for Our Own Problem
• Lesson 201. 00:13:36
Setting Up a Way to Track a Single Model Experiment with TensorBoard
• Lesson 202. 00:04:39
Training a Single Model and Saving the Results to TensorBoard
• Lesson 203. 00:10:18
Exploring Our Single Model's Results with TensorBoard
• Lesson 204. 00:10:45
Creating a Function to Create SummaryWriter Instances
• Lesson 205. 00:04:58
Adapting Our Train Function to Be Able to Track Multiple Experiments
• Lesson 206. 00:06:00
What Experiments Should You Try
• Lesson 207. 00:06:02
Discussing the Experiments We Are Going to Try
• Lesson 208. 00:06:32
• Lesson 209. 00:08:29
• Lesson 210. 00:15:55
Creating Functions to Prepare Our Feature Extractor Models
• Lesson 211. 00:14:28
Coding Out the Steps to Run a Series of Modelling Experiments
• Lesson 212. 00:03:51
Running Eight Different Modelling Experiments in 5 Minutes
• Lesson 213. 00:13:39
Viewing Our Modelling Experiments in TensorBoard
• Lesson 214. 00:10:33
Loading In the Best Model and Making Predictions on Random Images from the Test Set
• Lesson 215. 00:03:45
Making a Prediction on Our Own Custom Image with the Best Model
• Lesson 216. 00:03:57
Main Takeaways, Exercises and Extra-Curriculum
• Lesson 217. 00:07:35
What Is a Machine Learning Research Paper?
• Lesson 218. 00:03:14
Why Replicate a Machine Learning Research Paper?
• Lesson 219. 00:08:19
Where Can You Find Machine Learning Research Papers and Code?
• Lesson 220. 00:08:22
What We Are Going to Cover
• Lesson 221. 00:08:22
Getting Setup for Coding in Google Colab
• Lesson 222. 00:04:03
• Lesson 223. 00:09:48
Turning Our Food Vision Mini Images into PyTorch DataLoaders
• Lesson 224. 00:03:46
Visualizing a Single Image
• Lesson 225. 00:09:54
Replicating a Vision Transformer - High Level Overview
• Lesson 226. 00:11:13
Breaking Down Figure 1 of the ViT Paper
• Lesson 227. 00:10:56
Breaking Down the Four Equations Overview and a Trick for Reading Papers
• Lesson 228. 00:08:15
Breaking Down Equation 1
• Lesson 229. 00:10:04
Breaking Down Equations 2 and 3
• Lesson 230. 00:07:28
Breaking Down Equation 4
• Lesson 231. 00:11:06
Breaking Down Table 1
• Lesson 232. 00:15:42
Calculating the Input and Output Shape of the Embedding Layer by Hand
• Lesson 233. 00:15:04
Turning a Single Image into Patches (Part 1: Patching the Top Row)
• Lesson 234. 00:12:34
Turning a Single Image into Patches (Part 2: Patching the Entire Image)
• Lesson 235. 00:13:34
Creating Patch Embeddings with a Convolutional Layer
• Lesson 236. 00:12:55
Exploring the Outputs of Our Convolutional Patch Embedding Layer
• Lesson 237. 00:10:00
Flattening Our Convolutional Feature Maps into a Sequence of Patch Embeddings
• Lesson 238. 00:05:04
Visualizing a Single Sequence Vector of Patch Embeddings
• Lesson 239. 00:17:02
Creating the Patch Embedding Layer with PyTorch
• Lesson 240. 00:13:25
Creating the Class Token Embedding
• Lesson 241. 00:13:25
Creating the Class Token Embedding - Less Birds
• Lesson 242. 00:11:26
Creating the Position Embedding
• Lesson 243. 00:13:26
Equation 1: Putting it All Together
• Lesson 244. 00:14:31
• Lesson 245. 00:09:04
Equation 2: Layernorm Overview
• Lesson 246. 00:14:34
Turning Equation 2 into Code
• Lesson 247. 00:05:41
Checking the Inputs and Outputs of Equation
• Lesson 248. 00:09:12
Equation 3: Replication Overview
• Lesson 249. 00:11:26
Turning Equation 3 into Code
• Lesson 250. 00:08:51
Transformer Encoder Overview
• Lesson 251. 00:09:17
Combining Equations 2 and 3 to Create the Transformer Encoder
• Lesson 252. 00:15:55
Creating a Transformer Encoder Layer with In-Built PyTorch Layers
• Lesson 253. 00:18:20
Bringing Our Own Vision Transformer to Life - Part 1: Gathering the Pieces of the Puzzle
• Lesson 254. 00:10:42
Bringing Our Own Vision Transformer to Life - Part 2: Putting Together the Forward Method
• Lesson 255. 00:07:14
Getting a Visual Summary of Our Custom Vision Transformer
• Lesson 256. 00:11:27
Creating a Loss Function and Optimizer from the ViT Paper
• Lesson 257. 00:04:30
Training Our Custom ViT on Food Vision Mini
• Lesson 258. 00:09:09
Discussing What Our Training Setup Is Missing
• Lesson 259. 00:06:14
Plotting a Loss Curve for Our ViT Model
• Lesson 260. 00:14:38
Getting a Pretrained Vision Transformer from Torchvision and Setting it Up
• Lesson 261. 00:05:54
Preparing Data to Be Used with a Pretrained ViT
• Lesson 262. 00:07:16
Training a Pretrained ViT Feature Extractor Model for Food Vision Mini
• Lesson 263. 00:05:14
Saving Our Pretrained ViT Model to File and Inspecting Its Size
• Lesson 264. 00:03:47
Discussing the Trade-Offs Between Using a Larger Model for Deployments
• Lesson 265. 00:03:31
Making Predictions on a Custom Image with Our Pretrained ViT
• Lesson 266. 00:06:51
PyTorch Paper Replicating: Main Takeaways, Exercises and Extra-Curriculum
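Equation 1 of the ViT paper, which the replication module builds up lesson by lesson (patching with a convolution, flattening into a sequence, class token, position embedding), can be sketched in miniature. Sizes follow the ViT-Base defaults but are purely illustrative here:

```python
import torch
from torch import nn

# Split an image into patches with a Conv2d whose kernel and stride equal the
# patch size, flatten the feature maps into a sequence, then prepend a class
# token and add a learnable position embedding (Equation 1 of the ViT paper).
patch_size, embed_dim, img_size = 16, 768, 224
num_patches = (img_size // patch_size) ** 2  # 14 * 14 = 196

patcher = nn.Conv2d(in_channels=3, out_channels=embed_dim,
                    kernel_size=patch_size, stride=patch_size)

image = torch.randn(1, 3, img_size, img_size)
patches = patcher(image)                                # -> (1, 768, 14, 14)
patch_embeddings = patches.flatten(2).permute(0, 2, 1)  # -> (1, 196, 768)

class_token = nn.Parameter(torch.randn(1, 1, embed_dim))
position_embedding = nn.Parameter(torch.randn(1, num_patches + 1, embed_dim))

sequence = torch.cat([class_token, patch_embeddings], dim=1) + position_embedding
```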
• Lesson 267. 00:09:36
What Is Machine Learning Model Deployment and Why Deploy a Machine Learning Model
• Lesson 268. 00:07:14
Three Questions to Ask for Machine Learning Model Deployment
• Lesson 269. 00:13:35
Where Is My Model Going to Go?
• Lesson 270. 00:08:00
How Is My Model Going to Function?
• Lesson 271. 00:05:50
Some Tools and Places to Deploy Machine Learning Models
• Lesson 272. 00:04:02
What We Are Going to Cover
• Lesson 273. 00:06:16
Getting Setup to Code
• Lesson 274. 00:03:24
• Lesson 275. 00:08:00
Outlining Our Food Vision Mini Deployment Goals and Modelling Experiments
• Lesson 276. 00:09:46
Creating an EffNetB2 Feature Extractor Model
• Lesson 277. 00:06:30
Creating a Function to Make an EffNetB2 Feature Extractor Model and Transforms
• Lesson 278. 00:03:32
• Lesson 279. 00:09:16
Training Our EffNetB2 Feature Extractor and Inspecting the Loss Curves
• Lesson 280. 00:03:25
Saving Our EffNetB2 Model to File
• Lesson 281. 00:05:52
Getting the Size of Our EffNetB2 Model in Megabytes
• Lesson 282. 00:06:35
Collecting Important Statistics and Performance Metrics for Our EffNetB2 Model
• Lesson 283. 00:07:52
Creating a Vision Transformer Feature Extractor Model
• Lesson 284. 00:02:31
Creating DataLoaders for Our ViT Feature Extractor Model
• Lesson 285. 00:06:20
Training Our ViT Feature Extractor Model and Inspecting Its Loss Curves
• Lesson 286. 00:05:09
Saving Our ViT Feature Extractor and Inspecting Its Size
• Lesson 287. 00:05:52
Collecting Stats About Our ViT Feature Extractor
• Lesson 288. 00:11:16
Outlining the Steps for Making and Timing Predictions for Our Models
• Lesson 289. 00:16:21
Creating a Function to Make and Time Predictions with Our Models
• Lesson 290. 00:10:44
Making and Timing Predictions with EffNetB2
• Lesson 291. 00:07:35
Making and Timing Predictions with ViT
• Lesson 292. 00:11:32
Comparing EffNetB2 and ViT Model Statistics
• Lesson 293. 00:15:55
Visualizing the Performance vs Speed Trade-off
• Lesson 294. 00:08:40
• Lesson 295. 00:08:50
• Lesson 296. 00:09:52
Creating a Predict Function to Map Our Food Vision Mini Inputs to Outputs
• Lesson 297. 00:05:27
Creating a List of Examples to Pass to Our Gradio Demo
• Lesson 298. 00:12:13
Bringing Food Vision Mini to Life in a Live Web Application
• Lesson 299. 00:06:27
Getting Ready to Deploy Our App: Hugging Face Spaces Overview
• Lesson 300. 00:08:12
Outlining the File Structure of Our Deployed App
• Lesson 301. 00:04:12
Creating a Food Vision Mini Demo Directory to House Our App Files
• Lesson 302. 00:09:14
Creating an Examples Directory with Example Food Vision Mini Images
• Lesson 303. 00:07:43
Writing Code to Move Our Saved EffNetB2 Model File
• Lesson 304. 00:04:02
Turning Our EffNetB2 Model Creation Function Into a Python Script
• Lesson 305. 00:13:28
Turning Our Food Vision Mini Demo App Into a Python Script
• Lesson 306. 00:04:12
Creating a Requirements File for Our Food Vision Mini App
• Lesson 307. 00:11:31
• Lesson 308. 00:13:37
• Lesson 309. 00:07:45
Running Food Vision Mini on Hugging Face Spaces and Trying it Out
• Lesson 310. 00:04:18
Food Vision Big Project Outline
• Lesson 311. 00:09:39
Preparing an EffNetB2 Feature Extractor Model for Food Vision Big
• Lesson 312. 00:07:46
• Lesson 313. 00:13:37
Creating a Function to Split Our Food 101 Dataset into Smaller Portions
• Lesson 314. 00:07:24
Turning Our Food 101 Datasets into DataLoaders
• Lesson 315. 00:20:16
Training Food Vision Big: Our Biggest Model Yet!
• Lesson 316. 00:05:49
Outlining the File Structure for Our Food Vision Big
• Lesson 317. 00:03:34
• Lesson 318. 00:06:57
Saving Food 101 Class Names to a Text File and Reading Them Back In
• Lesson 319. 00:02:21
Turning Our EffNetB2 Feature Extractor Creation Function into a Python Script
• Lesson 320. 00:10:42
Creating an App Script for Our Food Vision Big Model Gradio Demo
• Lesson 321. 00:03:46
• Lesson 322. 00:13:35
Deploying Food Vision Big to Hugging Face Spaces
• Lesson 323. 00:06:14
PyTorch Model Deployment: Main Takeaways, Extra-Curriculum and Exercises
• Lesson 324. 00:06:02
Introduction to PyTorch 2.0
• Lesson 325. 00:01:22
What We Are Going to Cover and PyTorch 2 Reference Materials
• Lesson 326. 00:04:20
Getting Started with PyTorch 2.0 in Google Colab
• Lesson 327. 00:03:21
PyTorch 2.0 - 30 Second Intro
• Lesson 328. 00:02:23
Getting Setup for PyTorch 2.0
• Lesson 329. 00:06:50
Getting Info from Our GPUs and Seeing if They're Capable of Using PyTorch 2.0
• Lesson 330. 00:09:41
Setting the Default Device in PyTorch 2.0
• Lesson 331. 00:06:43
Discussing the Experiments We Are Going to Run for PyTorch 2.0
• Lesson 332. 00:10:18
Creating a Function to Set Up Our Model and Transforms
• Lesson 333. 00:08:24
Discussing How to Get Better Relative Speedups for Training Models
• Lesson 334. 00:07:16
Setting the Batch Size and Data Size Programmatically
• Lesson 335. 00:09:54
Getting More Speedups with TensorFloat-32
• Lesson 336. 00:07:01
• Lesson 337. 00:07:39
• Lesson 338. 00:04:59
Preparing Training and Testing Loops with Timing Steps
• Lesson 339. 00:08:23
Experiment 1 - Single Run without Torch Compile
• Lesson 340. 00:10:39
Experiment 2 - Single Run with Torch Compile
• Lesson 341. 00:11:20
Comparing the Results of Experiments 1 and 2
• Lesson 342. 00:04:40
Saving the Results of Experiments 1 and 2
• Lesson 343. 00:12:42
Preparing Functions for Experiments 3 and 4
• Lesson 344. 00:12:45
Experiment 3 - Training a Non-Compiled Model for Multiple Runs
• Lesson 345. 00:09:58
Experiment 4 - Training a Compiled Model for Multiple Runs
• Lesson 346. 00:05:24
Comparing the Results of Experiments 3 and 4
• Lesson 347. 00:05:51
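The PyTorch 2.0 experiments above hinge on timing loops with explicit timing steps. A minimal timing sketch for a forward pass: in PyTorch 2.0 the same model could also be wrapped with `torch.compile(model)` and timed again to measure the relative speedup, but here only the eager model is timed so the sketch runs anywhere (model sizes are illustrative):

```python
import time
import torch
from torch import nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
x = torch.randn(32, 128)

def time_forward(model: nn.Module, x: torch.Tensor, runs: int = 10) -> float:
    """Return the average seconds per forward pass over `runs` runs."""
    model.eval()
    with torch.inference_mode():
        model(x)  # warm-up run, excluded from timing
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs

avg_seconds = time_forward(model, x)
print(f"average forward pass: {avg_seconds * 1e3:.3f} ms")
```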