This survey, published by MDPI, examines AI bias across healthcare, employment, criminal justice, and credit scoring, identifying data bias, algorithmic bias, and user bias as the primary sources of discriminatory outcomes. It shows how machine learning models absorb and replicate societal biases present in their training data, producing unfair treatment of marginalized groups in high-stakes decisions. The peer-reviewed analysis surveys bias mitigation strategies and argues for fairness considerations in AI system design, particularly as generative models increasingly shape representation in synthetic media and automated decisions.
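As a concrete illustration of the kind of fairness check that bias mitigation strategies build on (an illustrative sketch, not code from the survey), one common metric is the demographic parity gap: the difference in positive-prediction rates between groups defined by a protected attribute. A minimal Python version, with made-up toy data:

```python
from collections import defaultdict

def demographic_parity_gap(y_pred, groups):
    """Absolute difference in positive-prediction rates across groups.

    y_pred: iterable of 0/1 model predictions
    groups: iterable of protected-attribute labels, aligned with y_pred
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, g in zip(y_pred, groups):
        totals[g] += 1
        positives[g] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy example: a model that approves group "a" at rate 0.75
# and group "b" at rate 0.25 shows a 0.5 demographic parity gap.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(y_pred, groups))  # 0.5
```

A gap near zero suggests the model assigns positive outcomes at similar rates across groups; demographic parity is only one of several competing fairness definitions (equalized odds, calibration), and which one is appropriate depends on the decision context.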