-
Lesson 1.
00:21:10
1.1. Python Environment Setup Video
-
Lesson 2.
00:06:28
1.2. Foundations to Build a Large Language Model (From Scratch)
-
Lesson 3.
01:07:40
2.1. Prerequisites to Chapter 2
-
Lesson 4.
00:14:10
2.2. Tokenizing text
-
Lesson 5.
00:09:59
2.3. Converting tokens into token IDs
-
Lesson 6.
00:06:36
2.4. Adding special context tokens
-
Lesson 7.
00:13:40
2.5. Byte pair encoding
-
Lesson 8.
00:23:16
2.6. Data sampling with a sliding window
-
Lesson 9.
00:08:37
2.7. Creating token embeddings
-
Lesson 10.
00:12:23
2.8. Encoding word positions
-
Lesson 11.
01:14:17
3.1. Prerequisites to Chapter 3
-
Lesson 12.
00:41:10
3.2. A simple self-attention mechanism without trainable weights | Part 1
-
Lesson 13.
00:11:43
3.3. A simple self-attention mechanism without trainable weights | Part 2
-
Lesson 14.
00:20:00
3.4. Computing the attention weights step by step
-
Lesson 15.
00:08:31
3.5. Implementing a compact self-attention Python class
-
Lesson 16.
00:11:37
3.6. Applying a causal attention mask
-
Lesson 17.
00:05:38
3.7. Masking additional attention weights with dropout
-
Lesson 18.
00:08:53
3.8. Implementing a compact causal self-attention class
-
Lesson 19.
00:12:05
3.9. Stacking multiple single-head attention layers
-
Lesson 20.
00:16:47
3.10. Implementing multi-head attention with weight splits
-
Lesson 21.
01:11:23
4.1. Prerequisites to Chapter 4
-
Lesson 22.
00:14:00
4.2. Coding an LLM architecture
-
Lesson 23.
00:22:14
4.3. Normalizing activations with layer normalization
-
Lesson 24.
00:16:19
4.4. Implementing a feed forward network with GELU activations
-
Lesson 25.
00:10:52
4.5. Adding shortcut connections
-
Lesson 26.
00:12:14
4.6. Connecting attention and linear layers in a transformer block
-
Lesson 27.
00:12:45
4.7. Coding the GPT model
-
Lesson 28.
00:17:47
4.8. Generating text
-
Lesson 29.
00:23:58
5.1. Prerequisites to Chapter 5
-
Lesson 30.
00:17:32
5.2. Using GPT to generate text
-
Lesson 31.
00:27:14
5.3. Calculating the text generation loss: cross entropy and perplexity
-
Lesson 32.
00:24:52
5.4. Calculating the training and validation set losses
-
Lesson 33.
00:27:04
5.5. Training an LLM
-
Lesson 34.
00:03:37
5.6. Decoding strategies to control randomness
-
Lesson 35.
00:13:43
5.7. Temperature scaling
-
Lesson 36.
00:08:20
5.8. Top-k sampling
-
Lesson 37.
00:10:51
5.9. Modifying the text generation function
-
Lesson 38.
00:04:24
5.10. Loading and saving model weights in PyTorch
-
Lesson 39.
00:20:04
5.11. Loading pretrained weights from OpenAI
-
Lesson 40.
00:39:21
6.1. Prerequisites to Chapter 6
-
Lesson 41.
00:26:58
6.2. Preparing the dataset
-
Lesson 42.
00:16:08
6.3. Creating data loaders
-
Lesson 43.
00:10:11
6.4. Initializing a model with pretrained weights
-
Lesson 44.
00:15:38
6.5. Adding a classification head
-
Lesson 45.
00:22:32
6.6. Calculating the classification loss and accuracy
-
Lesson 46.
00:33:36
6.7. Fine-tuning the model on supervised data
-
Lesson 47.
00:11:07
6.8. Using the LLM as a spam classifier
-
Lesson 48.
00:15:48
7.1. Preparing a dataset for supervised instruction fine-tuning
-
Lesson 49.
00:23:45
7.2. Organizing data into training batches
-
Lesson 50.
00:07:31
7.3. Creating data loaders for an instruction dataset
-
Lesson 51.
00:07:48
7.4. Loading a pretrained LLM
-
Lesson 52.
00:20:02
7.5. Fine-tuning the LLM on instruction data
-
Lesson 53.
00:09:40
7.6. Extracting and saving responses
-
Lesson 54.
00:21:57
7.7. Evaluating the fine-tuned LLM