What is “regularization” in machine learning?
a) A technique to prevent overfitting by adding a penalty to the model complexity
b) A technique to reduce the size of the training dataset
c) A method to boost the performance of decision trees
d) A process of validating models using test data
Answer:
a) A technique to prevent overfitting by adding a penalty to the model complexity
Explanation:
Regularization is a technique used in machine learning to prevent overfitting by adding a penalty term to the model's loss function that grows with model complexity, typically with the size of the model's weights. The goal is to keep the model simple and generalizable by discouraging it from fitting noise in the training data.
The two most common types of regularization are L1 (Lasso) and L2 (Ridge). L1 regularization adds a penalty proportional to the sum of the absolute values of the coefficients, which can drive some coefficients exactly to zero and so acts as a form of feature selection. L2 regularization adds a penalty proportional to the sum of the squared coefficients, which shrinks all weights toward zero without eliminating them. Both methods reduce the magnitude of the model's weights, making the model less likely to overfit.
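The L2 case can be seen directly in ridge regression, which has a closed-form solution. A minimal sketch with NumPy (the data and the penalty strength alpha here are illustrative, not from the original): comparing the fitted weights with and without the penalty shows the shrinkage effect.

```python
import numpy as np

# Ridge regression (L2 regularization) has the closed-form solution:
#   w = (X^T X + alpha * I)^{-1} X^T y
# alpha > 0 penalizes large weights; alpha = 0 recovers ordinary least squares.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.3, size=100)

def ridge_fit(X, y, alpha):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

w_ols = ridge_fit(X, y, alpha=0.0)    # no penalty: plain least squares
w_reg = ridge_fit(X, y, alpha=50.0)   # strong penalty: weights shrink toward zero

print("OLS weight norm:  ", round(float(np.linalg.norm(w_ols)), 3))
print("Ridge weight norm:", round(float(np.linalg.norm(w_reg)), 3))
```

L1 has no closed form because the absolute value is not differentiable at zero; Lasso is typically fit with iterative methods such as coordinate descent.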
Regularization is essential when working with complex models such as neural networks, where the L2 penalty is commonly applied as weight decay during gradient descent. It helps close the gap between performance on the training data and on unseen test data.
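In gradient-based training, the L2 penalty λ‖w‖² simply adds a 2λw term to the gradient, nudging weights toward zero at every step; this is what "weight decay" means. A minimal sketch for a linear model (the data, learning rate, and λ values are illustrative assumptions); the same per-parameter update applies to each layer of a neural network.

```python
import numpy as np

# Gradient descent with L2 weight decay: the penalty lam * ||w||^2 contributes
# 2 * lam * w to the gradient, which shrinks the weights at every update.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.2, size=200)

def train(X, y, lam, lr=0.05, steps=500):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * (grad + 2 * lam * w)         # extra term = weight decay
    return w

w_plain = train(X, y, lam=0.0)   # unregularized fit
w_decay = train(X, y, lam=1.0)   # regularized fit: smaller weight norm

print("norm without decay:", round(float(np.linalg.norm(w_plain)), 3))
print("norm with decay:   ", round(float(np.linalg.norm(w_decay)), 3))
```

The regularized run trades a slightly worse fit on the training data for smaller, more stable weights, which is exactly the bias toward simplicity described above.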