Model stealing (also called model extraction) is an attack in which adversaries repeatedly query a proprietary AI model and analyze its outputs to replicate its functionality. The goal is to train a surrogate model that approximates the original, potentially infringing on intellectual property and exposing the original model to further attacks. Defenses against such theft include query rate limiting (sketched below), model watermarking, and monitoring for suspicious query patterns.
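As a concrete illustration of the first defense, here is a minimal sketch of a sliding-window rate limiter keyed by API client ID. All names and thresholds (`WINDOW_SECONDS`, `MAX_QUERIES_PER_WINDOW`, `client-42`) are hypothetical choices for this example, not values from any particular system:

```python
import time
from collections import defaultdict, deque

# Illustrative per-client query budget; real deployments would tune these.
WINDOW_SECONDS = 60           # length of the sliding window
MAX_QUERIES_PER_WINDOW = 100  # queries allowed per window before throttling

class QueryRateLimiter:
    """Sliding-window rate limiter keyed by API client ID."""

    def __init__(self, window=WINDOW_SECONDS, limit=MAX_QUERIES_PER_WINDOW):
        self.window = window
        self.limit = limit
        # Maps client_id -> timestamps of that client's recent queries.
        self.history = defaultdict(deque)

    def allow(self, client_id: str) -> bool:
        """Record a query; return False if the client exceeded its budget."""
        now = time.monotonic()
        q = self.history[client_id]
        # Evict timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # throttle: unusually high query volume
        q.append(now)
        return True

limiter = QueryRateLimiter()
if not limiter.allow("client-42"):
    print("429 Too Many Requests: query budget exhausted")
```

A budget like this raises the cost of extraction, since surrogate training typically requires a large number of queries; in practice it is combined with the other defenses above, because a patient attacker can stay under any fixed rate limit.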