Bias in AI refers to systematic errors that cause AI systems to produce unfair, prejudiced, or discriminatory outcomes against specific groups or individuals. These errors often stem from biased training data, algorithms, or design choices, leading to ethical concerns about fairness, accountability, and social impact. Addressing AI bias is essential for developing equitable and trustworthy artificial intelligence systems.
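One common way such unfairness is quantified is the demographic parity difference: the gap in positive-outcome rates between groups. The sketch below is a minimal illustration using synthetic decision data; the group labels, outcomes, and function name are invented for this example, not taken from any particular library.

```python
# Minimal sketch (synthetic data): measuring one simple form of bias,
# the demographic parity difference, on toy model decisions.

def demographic_parity_difference(outcomes, groups, positive=1):
    """Absolute gap in positive-outcome rates between two groups."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in selected if o == positive) / len(selected)
    low, high = sorted(rates.values())
    return high - low

# Synthetic loan-approval decisions (1 = approved) for groups "A" and "B".
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(round(gap, 2))  # 0.4 — group A approved at 80% vs group B at 40%
```

A gap near zero suggests the two groups receive positive outcomes at similar rates under this one metric; real audits combine several such metrics, since no single number captures fairness.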