AI Guardrails are safety mechanisms that restrict and monitor an AI system's actions so that its behavior stays aligned with ethical standards and societal norms. By constraining decision-making and keeping the system within predefined boundaries, they prevent unintended or harmful outcomes and help maintain trust, which matters most in sensitive applications such as healthcare, finance, and autonomous systems.
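To make the idea concrete, below is a minimal sketch of an output guardrail in Python. Every name in it (`check_output`, `guarded_generate`, the topic blocklist, and the PII pattern) is hypothetical and invented for illustration; this is one possible shape of a guardrail, not the API of any particular library.

```python
import re
from dataclasses import dataclass

# Hypothetical predefined boundaries: a topic blocklist and a simple
# pattern for sensitive data (here, US-SSN-like strings).
BLOCKED_TOPICS = ("weapon synthesis", "self-harm instructions")
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str = ""

def check_output(text: str) -> GuardrailResult:
    """Check a model response against predefined boundaries before release."""
    lowered = text.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return GuardrailResult(False, f"blocked topic: {topic}")
    if PII_PATTERN.search(text):
        return GuardrailResult(False, "possible PII detected")
    return GuardrailResult(True)

def guarded_generate(model_fn, prompt: str) -> str:
    """Wrap any text-generation callable with an output guardrail."""
    response = model_fn(prompt)
    verdict = check_output(response)
    if not verdict.allowed:
        # Monitoring hook: a real deployment would also log or alert here.
        return f"[response withheld: {verdict.reason}]"
    return response

if __name__ == "__main__":
    # Stand-in for a real model call, used only to exercise the guardrail.
    fake_model = lambda p: "Sure, my SSN is 123-45-6789."
    print(guarded_generate(fake_model, "Tell me about yourself"))
```

Production guardrails are typically richer (input filtering, policy classifiers, human escalation), but the core pattern is the same: every response passes through an explicit check against the system's allowed boundaries before it reaches the user.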