- Lesson 1 (00:05:20): Course Introduction (What We're Building)
- Lesson 2 (00:04:31): Signing in to AWS
- Lesson 3 (00:05:30): Creating an IAM User
- Lesson 4 (00:03:13): Using our new IAM User
- Lesson 5 (00:01:31): What To Do In Case You Get Hacked!
- Lesson 6 (00:02:29): Creating a SageMaker Domain
- Lesson 7 (00:04:54): Logging in to our SageMaker Environment
- Lesson 8 (00:07:38): Introduction to JupyterLab
- Lesson 9 (00:07:51): SageMaker Sessions, Regions, and IAM Roles
- Lesson 10 (00:13:30): Examining our Dataset from HuggingFace
- Lesson 11 (00:09:09): Tokenization and Word Embeddings
- Lesson 12 (00:04:22): HuggingFace Authentication with SageMaker
- Lesson 13 (00:08:44): Applying the Templating Function to our Dataset
- Lesson 14 (00:15:56): Attention Masks and Padding
- Lesson 15 (00:04:04): Star Unpacking with Python
- Lesson 16 (00:10:23): Chain Iterator, List Constructor, and Attention Mask Example with Python
- Lesson 17 (00:08:12): Understanding Batching
- Lesson 18 (00:07:32): Slicing and Chunking our Dataset
- Lesson 19 (00:16:07): Creating our Custom Chunking Function
- Lesson 20 (00:09:31): Tokenizing our Dataset
- Lesson 21 (00:04:31): Running our Chunking Function
- Lesson 22 (00:08:33): Understanding the Entire Chunking Process
- Lesson 23 (00:05:54): Uploading the Training Data to AWS S3
- Lesson 24 (00:06:48): Setting Up Hyperparameters for the Training Job
- Lesson 25 (00:06:46): Creating our HuggingFace Estimator in SageMaker
- Lesson 26 (00:08:12): Introduction to Low-Rank Adaptation (LoRA)
- Lesson 27 (00:10:56): LoRA Numerical Example
- Lesson 28 (00:09:09): LoRA Summarization and Cost-Saving Calculation
- Lesson 29 (00:04:46): (Optional) Matrix Multiplication Refresher
- Lesson 30 (00:12:33): Understanding LoRA Programmatically, Part 1
- Lesson 31 (00:05:49): Understanding LoRA Programmatically, Part 2
- Lesson 32 (00:08:11): Bfloat16 vs Float32
- Lesson 33 (00:06:33): Comparing Bfloat16 vs Float32 Programmatically
- Lesson 34 (00:07:20): Setting up Imports and Libraries for the Train Script
- Lesson 35 (00:07:57): Argument Parsing Function, Part 1
- Lesson 36 (00:10:55): Argument Parsing Function, Part 2
- Lesson 37 (00:14:31): Understanding Trainable Parameter Caveats
- Lesson 38 (00:07:36): Introduction to Quantization
- Lesson 39 (00:07:20): Identifying Trainable Layers for LoRA
- Lesson 40 (00:04:36): Setting up Parameter-Efficient Fine-Tuning
- Lesson 41 (00:10:35): Implementing LoRA Configuration and Mixed-Precision Training
- Lesson 42 (00:04:22): Understanding Double Quantization
- Lesson 43 (00:14:15): Creating the Training Function, Part 1
- Lesson 44 (00:07:17): Creating the Training Function, Part 2
- Lesson 45 (00:02:57): Exercise: Imposter Syndrome
- Lesson 46 (00:05:09): Finishing our SageMaker Script
- Lesson 47 (00:05:11): Gaining Access to Powerful GPUs with AWS Quotas
- Lesson 48 (00:03:55): Final Fixes Before Training
- Lesson 49 (00:07:16): Starting our Training Job
- Lesson 50 (00:11:24): Inspecting the Results of our Training Job and Monitoring with CloudWatch
- Lesson 51 (00:17:58): Deploying our LLM to a SageMaker Endpoint
- Lesson 52 (00:08:19): Testing our LLM in SageMaker Locally
- Lesson 53 (00:08:56): Creating the Lambda Function to Invoke our Endpoint
- Lesson 54 (00:02:37): Creating an API Gateway to Deploy the Model over the Internet
- Lesson 55 (00:05:12): Implementing our Streamlit App
- Lesson 56 (00:03:27): Streamlit App Correction
- Lesson 57 (00:02:39): Congratulations and Cleaning up AWS Resources
- Lesson 58 (00:01:18): Thank You!