Systemic Ethical Dilemmas In Machine Learning: From Predictive Accuracy To Collective Fairness
DOI: https://doi.org/10.64252/srd60613

Keywords: Algorithmic Fairness, AI Ethics, Computational Intelligence, Data Protection, Explainability, Transparency.

Abstract
Machine learning (ML) systems generate significant societal benefits but also pose ethical challenges when fundamental values conflict. This paper investigates ethical tensions in ML processes, focusing on dilemmas of Artificial Intelligence (AI) such as accuracy versus fairness, privacy versus transparency, and personalization versus solidarity. Drawing on international guidelines, including the EU Ethics Guidelines for Trustworthy AI and the EU AI Act, we analyze how these conflicts manifest across domains such as healthcare, finance, and generative AI. A qualitative study among 18 ML experts highlights practitioners’ views on fairness, bias, and accountability, revealing that ethical concerns are perceived as systemic rather than isolated issues. Findings suggest that some dilemmas represent unavoidable trade-offs, while others may be mitigated through innovation or governance. We argue that embedding ethical reflection into ML development, supported by regulatory frameworks and participatory deliberation, is essential for ensuring trustworthy AI. By combining conceptual analysis with empirical evidence, the paper contributes to ongoing debates in computational intelligence, emphasizing the importance of aligning ML systems with human values and societal goals.