for incorporating public input. Leveraging these tools will strengthen democratic legitimacy, foster trust, and help ensure that AI technologies serve the public interest rather than entrenching new forms of exclusion.
Evidence suggests that the use of artificial intelligence in public administration tends to perpetuate and exacerbate structural inequalities. When decision-making is delegated to algorithmic systems without adequate controls, institutional safeguards and fundamental rights are undermined. To mitigate these impacts, it is essential to strengthen existing regulatory frameworks, promote transparency, ensure meaningful human oversight, establish effective redress mechanisms, and guarantee active citizen participation. Only through these measures can algorithmic governance be steered toward a more democratic, fair, and inclusive model of public administration.
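To make measures such as independent audits and transparency reporting more concrete, the sketch below shows one simple check an auditor might run over a system's decision log: comparing favorable-outcome rates across demographic groups. This is a minimal illustration in Python; the sample data, group labels, the `disparate_impact_ratio` helper, and the four-fifths threshold are assumptions for demonstration, not requirements drawn from this article or from any of the cited frameworks.

```python
# Illustrative sketch only: a minimal disparate-impact check of the kind an
# independent algorithmic audit might run. The data, group labels, and the
# 0.8 ("four-fifths") threshold are assumptions for demonstration, not a
# method prescribed by this article or the cited legal frameworks.

from collections import defaultdict

def selection_rates(decisions):
    """Favorable-outcome rate per group.

    decisions: iterable of (group, outcome) pairs, where outcome 1 = favorable.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.

    Ratios well below ~0.8 are one common (assumed) red flag that would
    trigger human review under the oversight measures discussed above.
    """
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical audit sample: (group, benefit_granted)
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(disparate_impact_ratio(sample, reference_group="A"))
# {'A': 1.0, 'B': 0.5}  -> group B falls below 0.8 and would be flagged
```

Such a metric is deliberately coarse: it cannot establish discrimination on its own, but it illustrates how the transparency, oversight, and redress obligations discussed above can be operationalized as auditable, repeatable checks rather than left as abstract principles.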
Conclusions
This study demonstrates that the application of artificial intelligence in public administration does not guarantee efficiency, objectivity, or neutrality; without robust legal and ethical safeguards, it can exacerbate inequalities affecting vulnerable groups. A review of empirical, regulatory, and academic sources reveals that algorithmic systems may reproduce racial, ethnic, socioeconomic, and gender biases, undermining equality before the law, transparency, and due process, particularly when automation bias and overreliance on perceived infallibility occur. The lack of robust national and international regulation limits the prevention and redress of harms, making it necessary to combine technical rules with citizen participation mechanisms, independent audits, algorithmic impact assessments, and ethical design controls. Frameworks such as the UNESCO Recommendation on the Ethics of AI, the Council of Europe's AI Convention, the EU AI Act, and the GDPR provide guidance, but they require adaptation to ensure governance models rooted in human rights, inclusion, transparency, accountability, and public oversight.
References
Alon-Barkat, S., & Busuioc, M. (2023). Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory, 33(1), 153–169. https://doi.org/10.1093/jopart/muac007
Alon-Barkat, S., & Busuioc, M. (2024). Public administration meets artificial intelligence: Towards a meaningful behavioral research agenda on algorithmic decision-making in government. Journal of Behavioral Public Administration, 7(1), 1–19. https://doi.org/10.30636/jbpa.71.261
Amnesty International. (2021, October 14). Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal. https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Asamblea Nacional del Ecuador. (2024). Proyecto de Ley Orgánica de Regulación y Promoción de la Inteligencia Artificial en Ecuador (Expediente N.º 450889). https://www.asambleanacional.gob.ec/es/multimedios-legislativos/97303-proyecto-de-ley-organica-de-regulacion
Baraniuk, C., & Wang, J. (2024). Strengthening legal protection against discrimination by algorithms: A critical review of current European norms. Human Rights Law Review, 24(2), 345–372. https://doi.org/10.1080/13642987.2020.1743976
Coitinho, D., & Olivier da Silva, A. L. (2024). Algorithmic injustice and human rights. Unisinos Journal of Philosophy, 25(1), e25109. https://doi.org/10.4013/fsu.2024.251.09
Council of Europe. (2024). Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225). Adopted 17 May 2024; opened for signature 5 September 2024. https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence
European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
European Data Protection Board. (2016). General Data Protection Regulation (GDPR). https://gdpr-info.eu/
Falletti, E. (2023). Algorithmic discrimination and privacy protection. Journal of Digital Technologies and Law, 1(2), 387–420. https://doi.org/10.21202/jdtl.2023.16
Falletti, T. (2023). Algorithmic governance and regulatory gaps: A legal analysis. Universidad Complutense Journal of Law and Technology, 17(3), 201–220.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
Jones, L., & Dewar, R. (2023). Ethics and discrimination in AI-enabled recruitment: Technical and managerial solutions. Palgrave Communications, 9, Article 112. https://doi.org/10.1057/s41599-023-02079-x