Expectation-Maximization (EM) is an iterative algorithm used in machine learning and statistics to find maximum likelihood estimates when models involve hidden or unobserved (latent) variables. It alternates between computing the expected values of the latent variables under the current parameters (E-step) and re-estimating the parameters to maximize the resulting expected log-likelihood (M-step); each iteration is guaranteed not to decrease the data likelihood, so the procedure converges to a local optimum. EM is widely used for clustering, mixture models, and incomplete-data problems.
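As a concrete illustration, here is a minimal sketch of EM for a one-dimensional two-component Gaussian mixture; the function name `em_gmm_1d`, the percentile-based initialization, and the synthetic data are illustrative choices, not part of any standard API:

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=50):
    """EM for a 1D Gaussian mixture with k components."""
    n = len(x)
    # Initialize: equal mixing weights, means spread across the data,
    # and a shared variance estimate (a simple deterministic start).
    pi = np.full(k, 1.0 / k)
    mu = np.percentile(x, np.linspace(25, 75, k))
    var = np.full(k, np.var(x))
    log_liks = []
    for _ in range(n_iter):
        # E-step: posterior "responsibility" of each component for each point.
        dens = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))           # shape (n, k)
        weighted = dens * pi
        total = weighted.sum(axis=1, keepdims=True)   # shape (n, 1)
        resp = weighted / total
        log_liks.append(np.log(total).sum())
        # M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var, log_liks

# Synthetic incomplete-data problem: samples from two Gaussians,
# with the component labels hidden from the algorithm.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
pi, mu, var, ll = em_gmm_1d(x)
```

Running this on the synthetic data, the recorded log-likelihoods form a non-decreasing sequence, which is the key theoretical property of EM.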