What is “Bagging” in machine learning?
Answer:
Explanation:
Bagging, short for Bootstrap Aggregating, is an ensemble learning technique in which multiple instances of a model are trained on random subsets of the data and their predictions are combined to produce a final output. The goal is to reduce variance and improve accuracy.
Each model is trained on a bootstrap sample: a random subset of the original data, typically the same size as the original, drawn with replacement. The individual predictions are then combined, usually by majority vote for classification or by averaging for regression, to make the final prediction. Because the combined ensemble is less sensitive to the quirks of any single training sample, bagging stabilizes high-variance models and reduces overfitting.
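The procedure above can be sketched from scratch. This is a minimal illustrative example, not a production implementation: the base learner is a toy one-dimensional decision stump that simply thresholds at the mean of its bootstrap sample, and the ensemble predicts by majority vote.

```python
import random
from collections import Counter

def bootstrap_sample(data):
    # Draw len(data) points from data with replacement
    return [random.choice(data) for _ in range(len(data))]

def fit_stump(sample):
    # Toy base learner: a 1-D stump that splits at the mean x of the
    # sample and predicts the majority label on each side
    xs = [x for x, _ in sample]
    threshold = sum(xs) / len(xs)
    left = [y for x, y in sample if x <= threshold] or [0]
    right = [y for x, y in sample if x > threshold] or [1]
    left_label = Counter(left).most_common(1)[0][0]
    right_label = Counter(right).most_common(1)[0][0]
    return lambda x: left_label if x <= threshold else right_label

def bag(data, n_models=25):
    # Train each stump on its own bootstrap sample
    models = [fit_stump(bootstrap_sample(data)) for _ in range(n_models)]
    def predict(x):
        # Aggregate: majority vote across the ensemble
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]
    return predict

random.seed(0)
# Toy 1-D data: class 0 clusters near x=1.0, class 1 near x=3.0
data = ([(1.0 + 0.3 * random.random(), 0) for _ in range(20)]
        + [(3.0 + 0.3 * random.random(), 1) for _ in range(20)])
predict = bag(data)
print(predict(0.9), predict(3.2))  # expect 0 and 1
```

Each stump sees a slightly different resampled view of the data, so their individual errors tend to cancel when the votes are aggregated.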
Bagging is most commonly used with decision trees, and it underlies algorithms like Random Forests, which combine bagging with random feature selection at each split so that a collection of decorrelated trees works together to improve performance on classification and regression tasks.
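In practice, both plain bagging and Random Forests are available off the shelf. The sketch below assumes scikit-learn is installed and uses a synthetic dataset purely for illustration; the exact scores depend on the data and random seed.

```python
# Assumes scikit-learn is installed; dataset is synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Plain bagging: 50 decision trees, each on its own bootstrap sample
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                        random_state=0).fit(X_tr, y_tr)

# Random forest: bagging plus per-split feature subsampling
rf = RandomForestClassifier(n_estimators=50,
                            random_state=0).fit(X_tr, y_tr)

print(bag.score(X_te, y_te), rf.score(X_te, y_te))
```

Either ensemble typically beats a single unpruned decision tree on held-out data, because averaging over many bootstrap-trained trees cuts the variance of the prediction.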