What is “feature scaling” in machine learning?
Answer:
Feature scaling is a preprocessing technique in machine learning that brings the independent variables (features) of a dataset onto a comparable range. This matters because features with very different scales can degrade the performance of many algorithms, especially those based on distance measures, such as K-nearest neighbors (KNN) and support vector machines (SVMs), where a feature with a large numeric range can dominate the distance computation.
Two common methods for feature scaling are normalization and standardization. Normalization (min-max scaling) maps each feature to the range [0, 1] via x' = (x - min) / (max - min), while standardization (z-score scaling) rescales each feature to have mean 0 and standard deviation 1 via z = (x - mean) / std.
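A minimal sketch of both methods using NumPy, on a small made-up feature vector (the values are purely illustrative):

```python
import numpy as np

# Illustrative 1-D feature with values on an arbitrary scale
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

# Normalization (min-max scaling): maps values into [0, 1]
x_norm = (x - x.min()) / (x.max() - x.min())
print(x_norm)  # [0.   0.25 0.5  0.75 1.  ]

# Standardization (z-score): resulting mean is 0, std is 1
x_std = (x - x.mean()) / x.std()
print(round(x_std.mean(), 10), x_std.std())  # 0.0 1.0
```

In practice the same transformations are available as `MinMaxScaler` and `StandardScaler` in scikit-learn, which also remember the fitted min/max or mean/std so they can be applied consistently to new data.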
Feature scaling helps all features contribute comparably to the model’s predictions, which often leads to faster convergence during training (particularly for gradient-based optimizers) and better model performance.