Data poisoning is an adversarial attack in which malicious actors intentionally alter or inject false data into the training dataset. This manipulation aims to corrupt the learning process, causing the AI model to behave unpredictably, produce incorrect outputs, or make biased decisions. Ensuring data integrity and implementing robust defenses are critical to safeguarding AI systems from such threats.
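To make the idea concrete, here is a toy sketch of one common poisoning vector, label flipping via injected points. It uses a deliberately simple nearest-centroid classifier on synthetic one-dimensional data; the helper names (`train_centroids`, `predict`) and all numeric values are illustrative assumptions, not a reference to any real system or dataset.

```python
import random

def train_centroids(data):
    # Toy nearest-centroid "model": the mean feature value per class.
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    # Classify by whichever class centroid is closest.
    return min(centroids, key=lambda y: abs(x - centroids[y]))

random.seed(0)
# Clean training set: class 0 clusters near 0.0, class 1 near 5.0.
clean = [(random.gauss(0.0, 0.5), 0) for _ in range(100)] + \
        [(random.gauss(5.0, 0.5), 1) for _ in range(100)]

# Poisoning: the attacker injects points that sit in class 0's region
# but carry class 1's label, dragging class 1's centroid toward class 0.
poison = [(random.gauss(0.0, 0.5), 1) for _ in range(60)]

clean_model = train_centroids(clean)
poisoned_model = train_centroids(clean + poison)

# A point well inside class 0's region flips to class 1 after poisoning:
# the clean decision boundary sits near 2.5, the poisoned one near 1.6.
print(predict(clean_model, 1.8))     # 0 with the clean model
print(predict(poisoned_model, 1.8))  # 1 with the poisoned model
```

Real attacks target far larger models and subtler objectives (e.g. backdoor triggers rather than gross misclassification), but the mechanism is the same: the training set, not the model code, is the attack surface, which is why the data-integrity defenses mentioned above matter.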