From efficiency to exclusion: rethinking AI governance in public institutions
DOI: https://doi.org/10.5281/zenodo.15958621

Keywords: algorithmic discrimination, public administration, human rights, artificial intelligence, governance

Abstract
The use of artificial intelligence (AI) in public administration has been promoted as a means to increase efficiency and reduce human bias. However, recent studies reveal that these systems can reproduce and even amplify structural inequalities, thereby undermining fundamental human rights. This article offers a critical analysis of how algorithmic decision-making affects equality, privacy, and human dignity, based on an interdisciplinary documentary review of normative frameworks, empirical studies, and real-world cases such as the COMPAS algorithm in the United States and the childcare benefits fraud detection system in the Netherlands. Three critical dimensions are identified: the algorithmic reproduction of historical prejudice, the prevalence of automation bias and selective adherence among public officials, and the inadequacy of current regulatory frameworks such as the European Union’s Artificial Intelligence Act (AI Act) and the General Data Protection Regulation (GDPR). Drawing on the concept of “slow violence,” the study argues that these technologies can imperceptibly erode fundamental rights, particularly among vulnerable populations. The findings support the need to move toward a model of algorithmic governance centered on human rights, incorporating principles of transparency, accountability, public oversight, and access to effective redress mechanisms. Only through comprehensive and enforceable regulation can the risk of a new form of algorithmic exclusion in the public sector be effectively mitigated.
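The COMPAS finding that anchors the first of these dimensions is, at its core, a disparity in error rates: among people who did not reoffend, one group was flagged as high risk far more often than another. The following is a minimal sketch of how such a group-wise audit can be computed; the records, group labels, and the false_positive_rate helper are hypothetical illustrations, not ProPublica's actual code or the method of any study cited here.

```python
# Hypothetical sketch of a group-wise error-rate audit, in the spirit of the
# ProPublica COMPAS analysis (Angwin et al., 2016). All data are invented.

def false_positive_rate(records, group):
    """Share of a group's non-reoffenders that the tool flagged as high risk."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not negatives:
        return 0.0
    flagged = sum(1 for r in negatives if r["flagged_high_risk"])
    return flagged / len(negatives)

# Toy records: each row is one person scored by the hypothetical tool.
records = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": True},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": True},
]

for g in ("A", "B"):
    print(f"Group {g}: false positive rate = {false_positive_rate(records, g):.2f}")
# A tool can look accurate overall while distributing its mistakes unevenly:
# here group A's non-reoffenders are flagged at 0.50, group B's at 0.00.
```

In the ProPublica analysis, a gap of exactly this kind between Black and white defendants was the central evidence that the system reproduced historical prejudice.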
References
Alon-Barkat, S., & Busuioc, M. (2023). Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory, 33(1), 153–169. https://doi.org/10.1093/jopart/muac007
Alon-Barkat, S., & Busuioc, M. (2024). Public administration meets artificial intelligence: Towards a meaningful behavioral research agenda on algorithmic decision-making in government. Journal of Behavioral Public Administration, 7(1), 1–19. https://doi.org/10.30636/jbpa.71.261
Amnesty International. (2021, October 14). Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal. https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Asamblea Nacional del Ecuador. (2024). Proyecto de Ley Orgánica de Regulación y Promoción de la Inteligencia Artificial en Ecuador [Draft Organic Law for the Regulation and Promotion of Artificial Intelligence in Ecuador] (File No. 450889). https://www.asambleanacional.gob.ec/es/multimedios-legislativos/97303-proyecto-de-ley-organica-de-regulacion
Baraniuk, C., & Wang, J. (2024). Strengthening legal protection against discrimination by algorithms: A critical review of current European norms. Human Rights Law Review, 24(2), 345–372. https://doi.org/10.1080/13642987.2020.1743976
Coitinho, D., & Olivier da Silva, A. L. (2024). Algorithmic injustice and human rights. Unisinos Journal of Philosophy, 25(1), e25109. https://doi.org/10.4013/fsu.2024.251.09
Council of Europe. (2024). Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225). Adopted 17 May 2024 and opened for signature 5 September 2024. https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence
European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
European Parliament & Council of the European Union. (2016). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data (General Data Protection Regulation). https://gdpr-info.eu/
Falletti, E. (2023). Algorithmic discrimination and privacy protection. Journal of Digital Technologies and Law, 1(2), 387–420. https://doi.org/10.21202/jdtl.2023.16
Falletti, T. (2023). Algorithmic governance and regulatory gaps: A legal analysis. Universidad Complutense Journal of Law and Technology, 17(3), 201–220.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
Jones, L., & Dewar, R. (2023). Ethics and discrimination in AI-enabled recruitment: Technical and managerial solutions. Palgrave Communications, 9, Article 112. https://doi.org/10.1057/s41599-023-02079-x
Kim, S., & Lee, H. (2024). Algorithmic discrimination: Examining its types and regulatory challenges. Frontiers in Artificial Intelligence, 7, Article 1320277. https://doi.org/10.3389/frai.2024.1320277
Lendvai, G. F., & Gosztonyi, G. (2025). Algorithmic bias as a core legal dilemma in the age of artificial intelligence: Conceptual basis and the current state of regulation. Laws, 14(3), 41. https://doi.org/10.3390/laws14030041
Müller, F., & Schmidt, T. (2024). Automation bias in public administration: An interdisciplinary perspective from law and psychology. Government Information Quarterly, 41, 101797. https://doi.org/10.1016/j.giq.2022.101797
Nixon, R. (2011). Slow violence and the environmentalism of the poor. Harvard University Press. https://www.hup.harvard.edu/books/9780674072343
Patel, R., & González, M. (2023). Bias and discrimination in machine learning–based administrative decision-making systems. Policy & Internet, 15(4), 789–808. https://doi.org/10.1016/S0267-3649(24)00136-5
Santos, F. A., & Palhares, D. (2023). Artificial intelligence and human rights: Brazilian perspectives on regulation and fairness. Humanities and Social Sciences Communications, 10, Article 112. https://doi.org/10.1057/s41599-023-02079-x
Teo, S. A. (2024). Artificial intelligence and its ‘slow violence’ to human rights. AI and Ethics, 5, 2265–2280. https://doi.org/10.1007/s43681-024-00547-x
Tissera, M. G., & Recalde, M. (2022). La gobernanza de la inteligencia artificial: Desafíos éticos y jurídicos para América Latina [The governance of artificial intelligence: Ethical and legal challenges for Latin America]. Revista Iberoamericana de Ciencia, Tecnología y Sociedad, 17(51), 185–208. https://doi.org/10.22430/22565337.185
UNESCO. (2023). Recomendación sobre la ética de la inteligencia artificial [Recommendation on the ethics of artificial intelligence]. https://www.unesco.org/es/articles/recomendacion-sobre-la-etica-de-la-inteligencia-artificial
Véliz, C. (2023). Algorithmic fairness: Lessons from political philosophy. Frontiers in Artificial Intelligence, 7, Article 1320277. https://doi.org/10.3389/frai.2023.1320277
Data Availability Statement
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
License
Copyright (c) 2025 Vielka M. Párraga (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.