Articles
Chapter 7
[AI from Scratch] Episode 185: Details of Generative Adversarial Networks (GAN)
Recap: Variational Autoencoder (VAE). In the previous episode, we explored Variational Autoencoders (VAE), a probabilistic generative model. VAEs compress data into a latent space and can generate new data based on this compressed representation...
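To make the recapped VAE idea concrete, here is a minimal sketch of the reparameterization step that lets a VAE sample from its latent space in a differentiable way; it assumes PyTorch, and the shapes are illustrative rather than taken from the article.

```python
import torch

def reparameterize(mu, log_var):
    """Draw z ~ N(mu, sigma^2) in a way that gradients can flow through."""
    std = torch.exp(0.5 * log_var)   # standard deviation from the log-variance
    eps = torch.randn_like(std)      # noise from a standard normal
    return mu + eps * std            # latent sample fed to the decoder

# Hypothetical batch of 4 samples with a 16-dimensional latent space.
mu = torch.zeros(4, 16)
log_var = torch.zeros(4, 16)
z = reparameterize(mu, log_var)      # decoding z yields newly generated data
```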
Chapter 7
[AI from Scratch] Episode 184: Details of Variational Autoencoders (VAE)
Recap: Mechanism of Autoencoders. In the previous episode, we explained Autoencoders in detail. Autoencoders compress (encode) data and reconstruct (decode) it from the compressed representation. This process is useful for tasks like dimensionality reduction...
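As a small illustration of the encode/decode cycle described above, here is a minimal autoencoder sketch, assuming PyTorch; the layer sizes (784-dimensional input, 32-dimensional bottleneck) are illustrative assumptions, not the article's values.

```python
import torch.nn as nn

class Autoencoder(nn.Module):
    """Compress the input to a small bottleneck, then reconstruct it."""
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)      # compressed representation
        return self.decoder(z)   # reconstruction of the original input
```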
Chapter 7
[AI from Scratch] Episode 183: Details of Autoencoders — Understanding Encoding and Decoding Data
Recap and Introduction. Hello! In the previous episode, we discussed Autoregressive Models, which are a type of generative model used to predict the next value based on time-series data. They generate the next step based on the current data...
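For the autoregressive idea recapped here, a minimal sketch of an AR(p) one-step prediction follows; the coefficients and series are made-up illustrative values.

```python
import numpy as np

def ar_predict(history, coeffs):
    """Predict the next value as a weighted sum of the last p observations."""
    p = len(coeffs)
    recent = history[-p:][::-1]            # most recent observation first
    return float(np.dot(coeffs, recent))

series = [0.10, 0.40, 0.35, 0.50, 0.55]
next_value = ar_predict(series, coeffs=[0.6, 0.3])   # AR(2) with assumed weights
```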
Chapter 7
[AI from Scratch] Episode 181: What Are Generative Models?
Recap: Chapter 6 Summary. In the previous episode, we reviewed our understanding of model interpretability, highlighting the importance of interpreting models using methods such as SHAP values and LIME. These techniques make it easier to ...
Chapter 7
[AI from Scratch] Episode 182: Autoregressive Models
Recap: Generative Models. In the previous episode, we covered the fundamental concepts of generative models. Generative models create new data based on training data and have diverse applications, such as image generation and text generation...
Chapter 6
[AI from Scratch] Episode 180: Chapter 6 Summary and Comprehension Check
Recap: Enhancing Model Interpretability. In the previous episode, we explained how to interpret model predictions using SHAP values (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations). These techniques...
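As a rough illustration of the SHAP side of this recap, the sketch below explains a tree ensemble with the shap library; the RandomForest model and iris data are stand-ins rather than the article's own example, and it assumes shap and scikit-learn are installed.

```python
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # explainer specialized for tree models
shap_values = explainer.shap_values(X[:5])   # per-feature contributions for 5 samples
```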
Chapter 6
[AI from Scratch] Episode 179: Enhancing Model Interpretability
Recap: Knowledge Distillation. In the previous episode, we explained Knowledge Distillation, a technique that allows the transfer of knowledge from a large model to a smaller one. This method helps reduce model size while maintaining performance...
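A common way to express the recapped idea in code is a combined distillation loss, where the student matches the teacher's softened outputs as well as the true labels; this is a minimal sketch assuming PyTorch, with the temperature and weighting chosen arbitrarily.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-scaled output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale gradients for the temperature
    # Hard targets: the usual supervised loss on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```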
Chapter 6
[AI from Scratch] Episode 178: Knowledge Distillation
Recap: Model Optimization for Lightweight and Fast Inference. In the previous article, we discussed techniques for optimizing models to enhance their inference speed and reduce their size. Specifically, we focused on methods such as model...
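One concrete example of the compression techniques mentioned here is post-training dynamic quantization; the sketch below assumes PyTorch, and the toy model stands in for whatever network is being compressed.

```python
import torch
import torch.nn as nn

# Toy stand-in for the model to be compressed.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Convert Linear weights to int8 after training to shrink the model and speed up inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```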
Chapter 6
Lesson 176: Stacking
Recap: Improving Performance with Ensemble Learning. In the previous lesson, we discussed Ensemble Learning, a method that combines multiple models to achieve higher accuracy than individual models alone. We introduced three major techniques...
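To make the recap concrete, here is a minimal voting-ensemble sketch with scikit-learn; the base models and dataset are illustrative choices, not the lesson's own.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    voting="soft",            # average the predicted probabilities of the base models
).fit(X, y)
```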
Chapter 6
Lesson 175: Improving Performance with Ensemble Learning
Recap: Batch Normalization. In the previous lesson, we discussed Batch Normalization, a technique that stabilizes the data distribution across layers in a neural network, improving learning stability and accelerating convergence. Batch Normalization...
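As a small illustration of where batch normalization sits in a network, here is a sketch assuming PyTorch; the layer sizes and batch size are arbitrary.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(64, 32),
    nn.BatchNorm1d(32),   # normalize each feature over the batch, then scale and shift
    nn.ReLU(),
    nn.Linear(32, 10),
)
out = net(torch.randn(16, 64))   # a batch of 16 samples flows through the normalized stack
```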
Chapter 6
Lesson 174: Revisiting Batch Normalization
Recap: Dropout. In the previous lesson, we detailed Dropout, a technique that deactivates some neurons randomly during training to prevent overfitting in neural networks. By ensuring that the model does not rely on specific neurons, Dropout...
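The recapped behavior is easy to see directly; this minimal sketch, assuming PyTorch, shows dropout zeroing random units during training and becoming a no-op at evaluation time.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()
print(drop(x))   # roughly half the entries are zeroed, survivors scaled by 1/(1 - p)

drop.eval()
print(drop(x))   # unchanged at inference time
```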
Chapter 6
Lesson 173: Details of Dropout
Recap: Regularization. In the previous lesson, we discussed the importance of Regularization techniques, such as L1 and L2 Regularization, which control model complexity and prevent overfitting. These methods help models avoid fitting the...
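A minimal sketch of the recapped idea, assuming PyTorch tensors for the parameters: L1 and L2 penalties are added to the task loss, with the penalty weights chosen arbitrarily.

```python
def regularized_loss(base_loss, params, l1=1e-4, l2=1e-4):
    """Add L1 and L2 penalties on the model parameters to the task loss."""
    l1_term = sum(p.abs().sum() for p in params)     # encourages sparse weights
    l2_term = sum((p ** 2).sum() for p in params)    # discourages large weights
    return base_loss + l1 * l1_term + l2 * l2_term
```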
Chapter 6
Lesson 172: Revisiting Regularization
Recap: Learning Rate Scheduling. In the previous lesson, we discussed Learning Rate Scheduling, a technique that dynamically adjusts the learning rate to facilitate efficient learning. By starting with a larger learning rate for quick initial...
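As an illustration of the recapped schedule (start large, then decay), here is a minimal step-decay sketch assuming PyTorch; the parameters, step size, and decay factor are placeholders.

```python
import torch

params = [torch.nn.Parameter(torch.randn(2, 2))]   # stand-in for model parameters
optimizer = torch.optim.SGD(params, lr=0.1)         # larger initial learning rate
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):
    optimizer.step()        # (the actual training step would go here)
    scheduler.step()        # halve the learning rate every 10 epochs
```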
Chapter 6
Lesson 171: Learning Rate Scheduling
Recap: Early Stopping. In the previous lesson, we discussed Early Stopping, a technique to prevent overfitting by stopping training when validation error starts to increase. This method allows for improved generalization performance while...
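The recapped rule is simple to write out; this sketch stops training once the validation loss has not improved for a few epochs. The train_one_epoch and validate callables are hypothetical placeholders for the real training code.

```python
def fit_with_early_stopping(train_one_epoch, validate, max_epochs=100, patience=5):
    """Stop when the validation loss fails to improve for `patience` epochs."""
    best, waited = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch()                 # hypothetical training step
        val_loss = validate()             # hypothetical validation pass
        if val_loss < best:
            best, waited = val_loss, 0    # improvement: reset the counter
        else:
            waited += 1
            if waited >= patience:
                break                     # validation error stopped improving
    return best
```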
Chapter 6
Lesson 177: Model Compression and Acceleration
Recap: Stacking. In the previous lesson, we explored Stacking, an ensemble learning technique that combines different types of models using a meta-model to achieve optimal predictions. This method allows for greater accuracy and improved ...
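To round off the recap, here is a minimal stacking sketch with scikit-learn, where the base models' predictions feed a meta-model; the specific estimators and dataset are illustrative, not the lesson's own choices.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svc", SVC(probability=True))],
    final_estimator=LogisticRegression(max_iter=1000),   # the meta-model
).fit(X, y)
```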
