Model Evaluation and Tuning (151–180) – Learn techniques for accurately evaluating a model's performance and optimizing it. –
-
Chapter 6
Lesson 165: What Are Hyperparameters?
Recap: Details of Cross-Validation In the previous lesson, we discussed Cross-Validation, a technique used to accurately evaluate a model’s generalization performance. We explored various methods, including K-Fold Cross-Validation, which... -
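For readers who want to see the K-Fold idea from this recap in code, here is a minimal sketch assuming scikit-learn; the dataset and model below are placeholders, not part of the lesson text.

# 5-fold cross-validation: train on 4 folds, validate on the held-out fold, repeat 5 times.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)           # placeholder dataset
model = LogisticRegression(max_iter=1000)   # placeholder model
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean(), scores.std())          # average score and spread across folds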
Chapter 6
Lesson 164: Details of Cross-Validation
Recap: Using Validation Sets In the previous lesson, we discussed using Validation Sets to evaluate a model’s generalization performance. Validation sets play a critical role in adjusting models to prevent overfitting and selecting the b... -
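As a rough illustration of carving out a validation set, a minimal sketch assuming scikit-learn's train_test_split; the toy arrays and split ratios are illustrative only.

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)   # toy feature matrix
y = np.arange(50) % 2               # toy binary labels

# Hold out a test set first, then split the remainder into train/validation.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)
# Result: roughly 60% train / 20% validation / 20% test.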
Chapter 6
Lesson 163: Using Validation Sets
Recap: Analyzing Learning Curves In the previous lesson, we explored how to use Learning Curves to visually evaluate a model’s training process. Learning curves, which plot training and validation errors, help identify signs of overfitti... -
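A minimal sketch of computing learning-curve data, assuming scikit-learn's learning_curve helper; the dataset and estimator are placeholders.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_iris(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y, cv=5,
    train_sizes=[0.2, 0.4, 0.6, 0.8, 1.0])
# A persistent gap between training and validation scores suggests overfitting;
# both staying low suggests underfitting.
print(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1))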
Chapter 6
Lesson 162: Analyzing Learning Curves
Recap: Coefficient of Determination (R²) In the previous lesson, we covered the Coefficient of Determination (R²), a metric that measures how much of the variance in the data a regression model can explain. R² ranges from 0 to 1, with va... -
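To make the R² recap concrete, a small sketch computing it both from its definition and with scikit-learn; the numbers are made up for illustration.

import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.3, 8.7])

# R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
print(1 - ss_res / ss_tot, r2_score(y_true, y_pred))  # both print the same value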
Chapter 6
Lesson 161: Coefficient of Determination (R²)
Recap: Mean Absolute Error (MAE) In the previous lesson, we discussed Mean Absolute Error (MAE), a metric that calculates the average absolute difference between predicted and actual values. MAE is useful when the impact of outliers need... -
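A tiny worked example of the MAE definition in this recap, assuming scikit-learn; the values are invented.

import numpy as np
from sklearn.metrics import mean_absolute_error

y_true = np.array([100.0, 150.0, 200.0])
y_pred = np.array([110.0, 140.0, 230.0])

# MAE = mean(|y_true - y_pred|) = (10 + 10 + 30) / 3
print(np.mean(np.abs(y_true - y_pred)))      # 16.66...
print(mean_absolute_error(y_true, y_pred))   # same value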
Chapter 6
Lesson 156: F1 Score
Recap: Recall In the previous lesson, we discussed Recall, a metric that measures how well a model identifies actual positive instances within the dataset. Recall is crucial when minimizing False Negatives (FN) is vital, such as in medic... -
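A small sketch of the recall definition from this recap, assuming scikit-learn; the labels are toy values.

from sklearn.metrics import recall_score

y_true = [1, 1, 1, 0, 0, 1]   # 4 actual positives
y_pred = [1, 0, 1, 0, 1, 1]   # 3 of them are found (TP=3, FN=1)

# Recall = TP / (TP + FN)
print(recall_score(y_true, y_pred))  # 0.75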
Chapter 6
Lesson 155: Recall
Recap: Precision In the previous lesson, we discussed Precision, which measures the proportion of instances that the model correctly classified as positive out of all instances predicted as positive. Precision is especially important whe... -
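A small sketch of the precision definition from this recap, assuming scikit-learn; the labels are toy values.

from sklearn.metrics import precision_score

y_true = [1, 0, 1, 0, 0, 1]
y_pred = [1, 1, 1, 0, 1, 0]   # 4 predicted positives, 2 of them correct (TP=2, FP=2)

# Precision = TP / (TP + FP)
print(precision_score(y_true, y_pred))  # 0.5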
Chapter 6
Lesson 154: Precision
Recap: Accuracy In the previous lesson, we discussed Accuracy, a metric that shows how accurately a model predicts overall. Specifically, accuracy indicates the proportion of correctly predicted data out of the total dataset and serves a... -
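A small sketch of the accuracy definition from this recap, assuming scikit-learn; the labels are toy values.

from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]   # 4 of 5 predictions match

# Accuracy = correct predictions / total predictions
print(accuracy_score(y_true, y_pred))  # 0.8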
Chapter 6
Lesson 153: Accuracy
Recap: What is a Confusion Matrix? In the previous lesson, we discussed the Confusion Matrix, a table that visually organizes how a classification model makes predictions and whether those predictions are correct. The confusion matrix sh... -
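A minimal sketch of the confusion matrix described in this recap, assuming scikit-learn; the labels are toy values.

from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]

# For binary labels [0, 1], rows are actual classes and columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))  # [[2, 1], [1, 2]] for this toy data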
Chapter 6
Lesson 152: What is a Confusion Matrix?
Recap: Basic Concepts of Model Evaluation In the previous lesson, we learned why model evaluation is essential in machine learning and which metrics are used for evaluation. By understanding various metrics like accuracy, precision, reca... -
Chapter 6
Lesson 151: Basic Concepts of Model Evaluation
Recap: Summary and Knowledge Check of Chapter 5 In the previous lesson, we reviewed the entirety of Chapter 5, covering essential topics such as data preprocessing, model selection, and feature engineering. Today, we will focus on the ba... -
Chapter 6
Lesson 159: Mean Squared Error (MSE)
Recap: Precision-Recall Curve (PR Curve) In the previous lesson, we discussed the PR Curve (Precision-Recall Curve), a graph that illustrates the relationship between Precision and Recall. The PR curve is particularly useful for evaluati... -
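A minimal sketch of computing PR-curve points from predicted scores, assuming scikit-learn; the scores are invented.

from sklearn.metrics import precision_recall_curve, average_precision_score

y_true  = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]   # predicted probabilities for the positive class

# Precision/recall pairs across all decision thresholds; average precision summarizes the curve.
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print(average_precision_score(y_true, y_score))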
Chapter 6
Lesson 158: Precision-Recall Curve (PR Curve)
Recap: ROC Curve and AUC In the previous lesson, we discussed the ROC Curve (Receiver Operating Characteristic curve) and AUC (Area Under the Curve). The ROC curve visually evaluates the performance of binary classification models by ill... -
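A minimal sketch of the ROC curve and AUC from this recap, assuming scikit-learn; the scores are invented.

from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]   # predicted probabilities for the positive class

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points along the ROC curve
print(roc_auc_score(y_true, y_score))               # area under it, here 0.75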
Chapter 6
Lesson 157: ROC Curve and AUC
Recap: F1 Score In the previous lesson, we covered the F1 Score, which combines Precision and Recall through their harmonic mean. The F1 Score is essential for evaluating the balance between precision and recall, especially when there is... -
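A small sketch of the harmonic-mean definition of the F1 Score from this recap, assuming scikit-learn; the labels are toy values.

from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]

p = precision_score(y_true, y_pred)   # 0.75
r = recall_score(y_true, y_pred)      # 0.75
# F1 = 2 * p * r / (p + r)  (harmonic mean of precision and recall)
print(2 * p * r / (p + r), f1_score(y_true, y_pred))  # both 0.75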
Chapter 6
Lesson 160: Mean Absolute Error (MAE)
Recap: Mean Squared Error (MSE) In the previous lesson, we covered Mean Squared Error (MSE), which calculates the average of the squared differences between predicted and actual values. MSE emphasizes larger errors, making it a useful me...
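A tiny worked example of the MSE definition in this recap, assuming scikit-learn; the values are invented.

import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([100.0, 150.0, 200.0])
y_pred = np.array([110.0, 140.0, 230.0])

# MSE = mean((y_true - y_pred)^2) = (100 + 100 + 900) / 3
print(np.mean((y_true - y_pred) ** 2))     # 366.66...
print(mean_squared_error(y_true, y_pred))  # same value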
