This material is available with a paid subscription. Get a Premium subscription to watch or listen to AI Engineering: Customizing LLMs for Business (Fine-Tuning LLMs with QLoRA & AWS), as well as all other courses, right now!
Premium
  1. Lesson 1. 00:05:20
    Course Introduction (What We're Building)
  2. Lesson 2. 00:04:31
    Signing in to AWS
  3. Lesson 3. 00:05:30
    Creating an IAM User
  4. Lesson 4. 00:03:13
    Using our new IAM User
  5. Lesson 5. 00:01:31
    What To Do In Case You Get Hacked!
  6. Lesson 6. 00:02:29
    Creating a SageMaker Domain
  7. Lesson 7. 00:04:54
    Logging in to our SageMaker Environment
  8. Lesson 8. 00:07:38
    Introduction to JupyterLab
  9. Lesson 9. 00:07:51
    SageMaker Sessions, Regions, and IAM Roles
  10. Lesson 10. 00:13:30
    Examining Our Dataset from HuggingFace
  11. Lesson 11. 00:09:09
    Tokenization and Word Embeddings
  12. Lesson 12. 00:04:22
    HuggingFace Authentication with SageMaker
  13. Lesson 13. 00:08:44
    Applying the Templating Function to our Dataset
  14. Lesson 14. 00:15:56
    Attention Masks and Padding
  15. Lesson 15. 00:04:04
    Star Unpacking with Python
  16. Lesson 16. 00:10:23
    Chain Iterator, List Constructor, and Attention Mask Example with Python
  17. Lesson 17. 00:08:12
    Understanding Batching
  18. Lesson 18. 00:07:32
    Slicing and Chunking our Dataset
  19. Lesson 19. 00:16:07
    Creating our Custom Chunking Function
  20. Lesson 20. 00:09:31
    Tokenizing our Dataset
  21. Lesson 21. 00:04:31
    Running our Chunking Function
  22. Lesson 22. 00:08:33
    Understanding the Entire Chunking Process
  23. Lesson 23. 00:05:54
    Uploading the Training Data to AWS S3
  24. Lesson 24. 00:06:48
    Setting Up Hyperparameters for the Training Job
  25. Lesson 25. 00:06:46
    Creating our HuggingFace Estimator in SageMaker
  26. Lesson 26. 00:08:12
    Introduction to Low-Rank Adaptation (LoRA)
  27. Lesson 27. 00:10:56
    LoRA Numerical Example
  28. Lesson 28. 00:09:09
    LoRA Summarization and Cost Saving Calculation
  29. Lesson 29. 00:04:46
    (Optional) Matrix Multiplication Refresher
  30. Lesson 30. 00:12:33
    Understanding LoRA Programmatically Part 1
  31. Lesson 31. 00:05:49
    Understanding LoRA Programmatically Part 2
  32. Lesson 32. 00:08:11
    Bfloat16 vs Float32
  33. Lesson 33. 00:06:33
    Comparing Bfloat16 vs Float32 Programmatically
  34. Lesson 34. 00:07:20
    Setting up Imports and Libraries for the Train Script
  35. Lesson 35. 00:07:57
    Argument Parsing Function Part 1
  36. Lesson 36. 00:10:55
    Argument Parsing Function Part 2
  37. Lesson 37. 00:14:31
    Understanding Trainable Parameters Caveats
  38. Lesson 38. 00:07:36
    Introduction to Quantization
  39. Lesson 39. 00:07:20
    Identifying Trainable Layers for LoRA
  40. Lesson 40. 00:04:36
    Setting up Parameter-Efficient Fine-Tuning
  41. Lesson 41. 00:10:35
    Implementing LoRA Configuration and Mixed-Precision Training
  42. Lesson 42. 00:04:22
    Understanding Double Quantization
  43. Lesson 43. 00:14:15
    Creating the Training Function Part 1
  44. Lesson 44. 00:07:17
    Creating the Training Function Part 2
  45. Lesson 45. 00:02:57
    Exercise: Imposter Syndrome
  46. Lesson 46. 00:05:09
    Finishing our SageMaker Script
  47. Lesson 47. 00:05:11
    Gaining Access to Powerful GPUs with AWS Quotas
  48. Lesson 48. 00:03:55
    Final Fixes Before Training
  49. Lesson 49. 00:07:16
    Starting our Training Job
  50. Lesson 50. 00:11:24
    Inspecting the Results of our Training Job and Monitoring with CloudWatch
  51. Lesson 51. 00:17:58
    Deploying our LLM to a SageMaker Endpoint
  52. Lesson 52. 00:08:19
    Testing our LLM in SageMaker Locally
  53. Lesson 53. 00:08:56
    Creating the Lambda Function to Invoke our Endpoint
  54. Lesson 54. 00:02:37
    Creating an API Gateway to Deploy the Model Over the Internet
  55. Lesson 55. 00:05:12
    Implementing our Streamlit App
  56. Lesson 56. 00:03:27
    Streamlit App Correction
  57. Lesson 57. 00:02:39
    Congratulations and Cleaning up AWS Resources
  58. Lesson 58. 00:01:18
    Thank You!