A regularization method that adds the squared magnitude of the weights as a penalty term to the loss function, discouraging large weight values.
Detailed Explanation
L2 regularization, often called weight decay and closely related to Ridge Regression (linear regression with an L2 penalty), adds the sum of squared weights, scaled by a regularization strength λ, to the loss function. Penalizing large weights shrinks them toward zero, encouraging simpler models that balance fit against complexity. This helps prevent overfitting and typically improves performance on unseen data.
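A minimal sketch of the idea: gradient descent on a linear regression loss with an L2 penalty λ·Σw² added. The data, the regularization strength `lam`, and the learning rate are illustrative choices, not values from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -3.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

def fit(X, y, lam=0.0, lr=0.1, n_steps=500):
    """Gradient descent on MSE + lam * sum(w**2)."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(n_steps):
        # Gradient of the data loss plus the L2 penalty term 2*lam*w
        grad = (2 / n) * X.T @ (X @ w - y) + 2 * lam * w
        w -= lr * grad
    return w

w_plain = fit(X, y, lam=0.0)   # no regularization
w_l2 = fit(X, y, lam=1.0)      # with L2 penalty

# The penalty shrinks the weight vector toward zero
assert np.linalg.norm(w_l2) < np.linalg.norm(w_plain)
```

Note that the penalty contributes `2 * lam * w` to the gradient, which is why each update step also decays the weights toward zero, hence the name weight decay.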
Use Cases
• Reducing overfitting in neural networks by penalizing large weights during training to improve generalization on new data.