De la eficiencia a la exclusión: repensando la gobernanza de la IA en las instituciones públicas

J. Law Epistemic Stud. (July - December 2025) 3(2): 19-25
https://doi.org/10.5281/zenodo.15958621
ISSN 3091-1575

ORIGINAL ARTICLE

From efficiency to exclusion: rethinking AI governance in public institutions

Vielka M. Párraga
vmparraga@sangregorio.edu.ec
Universidad San Gregorio de Portoviejo, Manabí, Ecuador.

Received: 25 February 2025 / Accepted: 13 May 2025 / Published online: 31 July 2025
© The Author(s) 2025

Abstract  The use of artificial intelligence (AI) in public administration has been promoted as a means to increase efficiency and reduce human bias. However, recent studies reveal that these systems can reproduce and even amplify structural inequalities, thereby undermining fundamental human rights. This article offers a critical analysis of how algorithmic decision-making impacts equality, privacy, and human dignity, based on an interdisciplinary documentary review of normative frameworks, empirical studies, and real-world cases such as the COMPAS algorithm in the United States and the child welfare fraud detection system in the Netherlands. Three critical dimensions are identified: the algorithmic reproduction of historical prejudice, the prevalence of automation bias and selective adherence by public officials, and the inadequacy of current regulatory frameworks such as the European Union's Artificial Intelligence Act (AI Act) and the General Data Protection Regulation (GDPR). Drawing on the concept of "slow violence," the study argues that these technologies can imperceptibly erode fundamental rights, particularly among vulnerable populations. The findings support the need to move toward a model of algorithmic governance centered on human rights, incorporating principles of transparency, accountability, public oversight, and access to effective redress mechanisms. Only through comprehensive and enforceable regulation can the risk of a new form of algorithmic exclusion in the public sector be effectively mitigated.

Keywords  algorithmic discrimination, public administration, human rights, artificial intelligence, governance.

Resumen  El uso de inteligencia artificial (IA) en la administración pública ha sido promovido como una vía para aumentar la eficiencia y reducir el sesgo humano. Sin embargo, investigaciones recientes revelan que estos sistemas pueden reproducir y amplificar desigualdades estructurales, vulnerando principios fundamentales de derechos humanos. Este artículo analiza críticamente cómo la automatización decisional afecta la igualdad, la privacidad y la dignidad, a partir de una revisión documental interdisciplinaria de marcos normativos, estudios empíricos y casos reales como el algoritmo COMPAS en Estados Unidos y el sistema de detección de fraude en subsidios en Países Bajos. Se identifican tres dimensiones críticas: la reproducción algorítmica de prejuicios históricos, el automatismo decisional y la adherencia selectiva por parte de funcionarios públicos, y la insuficiencia de los marcos regulatorios actuales, como el Reglamento Europeo de IA (AI Act) y el RGPD. A partir del concepto de "violencia lenta", se argumenta que estas tecnologías erosionan de manera imperceptible los derechos fundamentales, especialmente entre poblaciones vulnerables.
El estudio concluye que es imprescindible avanzar hacia un modelo de gobernanza algorítmica centrado en los derechos humanos, que incluya principios de transparencia, auditabilidad, participación y acceso efectivo a mecanismos de impugnación. Solo a través de una regulación integral será posible evitar que la IA consolide nuevas formas de exclusión tecnificada en el ámbito estatal.

Palabras clave  discriminación algorítmica, administración pública, derechos humanos, inteligencia artificial, gobernanza.

How to cite  Párraga, V. M. (2025). From efficiency to exclusion: rethinking AI governance in public institutions. Journal of Law and Epistemic Studies, 3(2), 19-25. https://doi.org/10.5281/zenodo.15958621
Introduction

The increasing integration of artificial intelligence (AI) systems into public administration is restructuring conventional paradigms of decision-making, oversight, and policy execution at various institutional levels. The discourse of enhanced efficiency, objectivity, and cost-effectiveness drives this technological shift. However, it simultaneously generates complex ethical, legal, and sociopolitical challenges, particularly concerning the potential adverse impacts of algorithmic systems on the practical realization and safeguarding of human rights, most notably with regard to discriminatory practices and institutional accountability mechanisms.

Contrary to the premise of algorithmic neutrality, empirical evidence consistently demonstrates that AI systems frequently replicate and exacerbate entrenched structural biases embedded within the historical datasets used for training. This results in disproportionate harm to vulnerable and marginalized populations. Such phenomena are encapsulated under the concept of algorithmic discrimination, which refers to the deployment of automated decision-making processes that produce systematically inequitable outcomes based on sensitive attributes such as race, gender, age, religion, or socioeconomic status (Coitinho & Olivier da Silva, 2024; Falletti, 2023).

Within the context of public governance, this issue acquires critical relevance insofar as algorithmically mediated decisions increasingly determine access to fundamental rights and public goods—such as healthcare, justice, education, social welfare, and public safety—through mechanisms that often lack transparency, explainability, and external auditability. As highlighted by Alon-Barkat and Busuioc (2023), algorithms in public administration predominantly function as decision-support tools, yet they remain susceptible to introducing bias when civil servants adopt algorithmic outputs uncritically or selectively. These dynamics, conceptualized as automation bias and selective adherence, intensify the risk of discriminatory administrative outcomes, particularly when algorithmic recommendations reinforce preexisting sociocultural stereotypes and prejudices.

These types of biases have been extensively documented. The COMPAS algorithm, used in the United States to predict criminal recidivism, has been identified as exhibiting a systematic racial bias against African American individuals. ProPublica revealed that the system assigned higher risk scores to Black defendants, even when they had fewer prior offenses than White individuals under comparable conditions (Lendvai & Gosztonyi, 2025). Similarly, in the Netherlands, the scandal involving the discriminatory use of algorithms in the allocation of childcare subsidies led to the resignation of an entire cabinet. This case underscored how automation, far from being neutral, can institutionalize structural inequalities (Alon-Barkat & Busuioc, 2023).

From an ethical perspective, scholars such as Teo (2024) have proposed interpreting these harms through the concept of "slow violence", defined as a gradual, cumulative, and often invisible process that erodes the foundational pillars of the human rights framework—undermining core principles such as human dignity, privacy, equality, and freedom of expression.

At the normative level, the international regulatory response remains fragmented.
Despite initiatives such as the European Union's proposed Artificial Intelligence Act, most existing legal frameworks remain limited in their ability to oversee opaque machine learning systems (black-box models) and to ensure effective redress mechanisms for affected individuals (Lendvai & Gosztonyi, 2025; Falletti, 2023). As Lendvai and Gosztonyi (2025, p. 9) observe, "the absence of robust technical and ethical standards in the implementation of regulatory frameworks significantly constrains their effectiveness in protecting against systemic algorithmic risks."

In Ecuador, for example, three legislative bills were introduced in 2024—including the most ambitious, the Organic Law on the Regulation and Promotion of Artificial Intelligence—which proposes to classify AI systems according to risk level, establish mandatory audits, and create a national regulatory authority for artificial intelligence (Asamblea Nacional del Ecuador, 2024).

Accordingly, the objective of this article is to critically examine how algorithmic systems implemented in public administration may give rise to discriminatory practices that are incompatible with respect for human rights. This is approached through an interdisciplinary framework that integrates legal analysis, behavioral public administration, and the ethics of technology. Specifically, the study aims to identify the mechanisms through which automated decision-making processes can undermine the principles of equality and non-discrimination. It also seeks to assess the existing regulatory responses and their limitations. Furthermore, the article proposes a set of guidelines for algorithmic governance that prioritizes transparency, justice, and institutional accountability within the public sector.

Methodology

This study employed a qualitative, exploratory, and critical approach, grounded in specialized and interdisciplinary documentary analysis. Given the emerging and cross-disciplinary nature of the research object—algorithmic discrimination within the domain of public administration—a methodological design was selected that allows for the integration of conceptual frameworks from law, behavioral public administration, technology ethics, and the behavioral sciences.
The methodological strategy was structured into three complementary phases. The first phase involved a systematic review of scientific literature using recognized academic databases, including Scopus, Web of Science, SpringerLink, Oxford Academic, and Google Scholar. The inclusion criteria comprised articles published between 2018 and 2025, both empirical and theoretical, that addressed the intersection of artificial intelligence, algorithmic discrimination, human rights, and public administration, selected for their relevance and citation within the scientific community. The analysis also incorporated regulatory documents from international organizations, including the European Union's proposed Artificial Intelligence Act (AI Act), the General Data Protection Regulation (GDPR), reports from the Council of Europe, and emblematic case studies such as the COMPAS algorithm in the United States and the childcare subsidy fraud scandal in the Netherlands. This study also examined Ecuador's 2024 legislative initiatives, which propose risk-based AI classification systems, transparency requirements, and the creation of a national AI regulatory authority.

In the second phase, using qualitative thematic content analysis, the documentary corpus was analyzed through an axial coding strategy to identify patterns across three core categories: manifestations of algorithmic discrimination and systemic bias; impacts on fundamental rights such as equality, privacy, due process, and human dignity; and regulatory gaps alongside emerging models of algorithmic governance. The analytical framework was based on the theory of "slow violence" developed by Nixon (2011), as adapted to the field of AI ethics by Teo (2024), along with the concepts of automation bias and selective adherence formulated by Alon-Barkat and Busuioc (2023, 2024).

In the third phase, the extracted data were organized into a comparative analysis matrix, which enabled a systematic comparison between empirical findings and both existing and proposed legal frameworks. This phase aimed to produce a critical and constructive synthesis, focusing on identifying regulatory challenges, ethical dilemmas, and potential courses of action for a model of algorithmic governance grounded in human rights principles.

This methodological design enables a rigorous examination of the phenomenon in its multidimensional complexity, providing an in-depth analysis of the current risks posed by artificial intelligence in the public sector from a perspective centered on protecting the most vulnerable populations.

Table 1 synthesizes key findings from diverse sources—ranging from investigative reports and academic studies to legal and regulatory frameworks—highlighting how they address algorithmic discrimination, impacts on fundamental rights, and existing gaps in governance models. The sources referenced in the matrix—including investigative reports, regulatory frameworks, and academic articles—are supported by publicly accessible and widely recognized documentation. Both the COMPAS case and the Dutch childcare benefits scandal serve as emblematic examples of how algorithms deployed in the public sector can replicate systemic biases and produce severe consequences for vulnerable populations.
Results and discussion

The documentary analysis conducted reveals that the use of artificial intelligence (AI) in public administration has significant implications for human rights, particularly regarding the principles of equality, non-discrimination, administrative transparency, and adequate judicial protection. The findings, drawn from more than twenty academic, regulatory, and empirical sources, were organized around three interrelated core dimensions: (1) the algorithmic reproduction of structural inequalities; (2) uncritical automation and its impact on institutional guarantees; and (3) regulatory gaps in addressing the complexity of algorithmic discrimination. Each of these dimensions is discussed below in terms of the conceptual and empirical frameworks identified in the reviewed literature.

One of the study's most significant findings is that AI systems implemented in public administration do not eliminate human biases; instead, in many instances, they amplify them. Various empirical studies have substantiated this observation. In the case of the COMPAS algorithm in the United States, which is used to assess the risk of criminal recidivism, it was found that African American individuals were classified as high risk at a disproportionately higher rate than White individuals, even when their criminal histories were similar (Angwin et al., 2016; Lendvai & Gosztonyi, 2025). This practice undermines not only the principle of formal equality before the law but also the expectation of fair and objective treatment by the justice system.

Similarly, the Dutch childcare benefits scandal, extensively analyzed by Alon-Barkat and Busuioc (2023), serves as a paradigmatic case of how an algorithm, by relying on proxy variables such as nationality or place of residence, ultimately flagged thousands of migrant families as suspected fraudsters. The extent of the harm was so severe that it triggered ministerial resignations and a parliamentary inquiry.

These cases exemplify what Coitinho and Olivier da Silva (2024) refer to as algorithmic injustice, defined as "the production of structural harm to vulnerable groups through automated decisions that reinforce historical inequalities under the guise of objectivity" (p. 3). Far from being isolated anomalies, these patterns reflect a systemic mode of exclusion driven by digitalization in the absence of clear ethical and legal safeguards.
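The mechanism behind such proxy-driven disparity can be made concrete with a small, purely illustrative simulation. The sketch below uses synthetic data and hypothetical variable names and is not a reconstruction of COMPAS or the Dutch system: a protected group is concentrated in one district, the historical "fraud" labels reflect past enforcement intensity rather than behavior, and a decision rule that never sees the protected attribute still flags that group at several times the rate of the other.

```python
# Illustrative sketch only: synthetic data and hypothetical variables.
# It shows how a score that never uses a protected attribute can still
# reproduce group disparity through a correlated proxy (a district code).
import random

random.seed(42)

def make_person():
    group = random.choice(["A", "B"])  # protected attribute (never shown to the model)
    # Proxy: group B is concentrated in district 1 (e.g., residential segregation).
    district = 1 if random.random() < (0.8 if group == "B" else 0.2) else 0
    # Historical labels: past investigations targeted district 1 more heavily,
    # so the label encodes enforcement bias, not underlying behavior.
    labelled_fraud = random.random() < (0.30 if district == 1 else 0.10)
    return group, district, labelled_fraud

population = [make_person() for _ in range(20_000)]

# "Model": flag residents of the district with the higher historical label rate.
# A classifier trained on (district, label) would learn essentially this rule.
def flagged(district):
    return district == 1

rates = {}
for g in ("A", "B"):
    members = [p for p in population if p[0] == g]
    rates[g] = sum(flagged(d) for _, d, _ in members) / len(members)

print(f"flag rate, group A: {rates['A']:.1%}")
print(f"flag rate, group B: {rates['B']:.1%}")
print(f"disparity ratio B/A: {rates['B'] / rates['A']:.1f}x")
```

The point mirrors the cases above: removing the protected attribute from a system's inputs offers no guarantee of non-discrimination, so disparity has to be measured at the level of outcomes.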
Moreover, the risk does not lie solely in the data but also in the design choices embedded within algorithmic systems. As Falletti (2023) observes, algorithms "follow human instructions and reflect the interests, prejudices, or priorities of their designers and of the historical data that feeds them" (p. 395). This observation is crucial to understanding that bias is not merely a technical malfunction, but rather the result of the broader social and institutional contexts in which such systems are constructed.

The second axis of analysis reveals a concerning trend among public officials to delegate their decision-making responsibilities to algorithms without critical scrutiny, placing undue trust in their outputs even when faced with indications of error or injustice. This phenomenon has been described by Alon-Barkat and Busuioc (2023, 2024) as automation bias, a cognitive bias whereby civil servants relinquish their deliberative role in favor of systems perceived as objective and impartial. Through three experimental studies conducted in the Netherlands, the authors demonstrate that even in contexts previously marked by bias-related scandals, public officials exhibit a strong tendency to trust algorithmic recommendations.

Furthermore, they identify the existence of selective adherence, a more subtle form of bias in which officials adopt algorithmic recommendations when they align with their prejudices or stereotypes about specific social groups. This reinforces the structural dimension of discrimination and delegitimizes the algorithm's supposed role as a neutralizing mechanism for human bias. As the authors point out: "The algorithm does not replace the public decision-maker; it interacts with their biases and social conditioning" (Alon-Barkat & Busuioc, 2023, p. 164).

Table 1. Comparative analysis of sources addressing algorithmic discrimination, rights impacts, and governance gaps

Document/Source | Type of source | Algorithmic discrimination and systemic biases | Impact on fundamental rights | Regulatory gaps and governance models
ProPublica (2016) – Machine Bias (COMPAS) | Investigative report | Racial bias in risk scores disproportionately labels Black individuals as "high risk" | Violation of the principles of equality and presumption of innocence | Absence of effective external auditing mechanisms
AI Act – EU (2021) | Regulatory proposal | Recognizes risk of bias; classifies systems as "high-risk" | Establishes transparency and human oversight requirements | Preventive model focused on ex ante risk assessment
Teo (2024) – AI Ethics | Academic article | Introduces the concept of "slow violence" in technological contexts | Cumulative harm to dignity, privacy, and freedom of expression | Proposes governance frameworks with citizen participation
Alon-Barkat & Busuioc (2023) | Empirical study | Automation bias and selective adherence reinforcing systemic bias | Uncritical reliance on algorithms undermines due process | Lack of clear standards for responsible use by public officials
Netherlands scandal – childcare subsidies | Institutional case | Discriminatory profiling based on ethnic origin in automated decisions | Massive violations of social rights and reputational harm | Cabinet resignation illustrates regulatory failure
GDPR – EU (2016) | Legal instrument | Does not explicitly address AI-related bias | Ensures rights to information, rectification, and objection | Requires meaningful human control in automated decisions
Falletti (2023) – Legal analysis | Theoretical article | Classifies types of bias in public sector AI systems | Links bias to erosion of legal certainty and the rule of law | Criticizes the lack of enforcement mechanisms
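As a complement to Table 1, the risk-based logic shared by the AI Act and the Ecuadorian bill can be pictured as a screening step run before a system is procured or deployed. The following sketch is only a simplified, hypothetical illustration of that logic: the tier names follow the Act's general structure (prohibited, high-risk, limited, minimal), but the practice and domain labels are placeholder examples rather than the Act's annexes, and any real classification requires legal analysis.

```python
# Hypothetical pre-deployment screening sketch; tier labels and domain lists
# are simplified placeholders, not the legal text of the AI Act.
from dataclasses import dataclass

PROHIBITED_PRACTICES = {"social_scoring_by_public_authorities"}
HIGH_RISK_DOMAINS = {
    "social_benefits_eligibility",
    "criminal_risk_assessment",
    "access_to_education",
    "migration_and_border_control",
}

@dataclass
class SystemProfile:
    name: str
    practice: str            # what the system does (hypothetical label)
    domain: str              # where it is deployed (hypothetical label)
    affects_individuals: bool

def screen(profile: SystemProfile) -> str:
    """Map a system profile to an indicative obligation tier."""
    if profile.practice in PROHIBITED_PRACTICES:
        return "prohibited: do not deploy"
    if profile.domain in HIGH_RISK_DOMAINS and profile.affects_individuals:
        return ("high-risk: conformity assessment, human oversight, "
                "logging/audit trail, registration")
    if profile.affects_individuals:
        return "limited risk: transparency obligations toward affected persons"
    return "minimal risk: voluntary codes of conduct"

if __name__ == "__main__":
    fraud_scoring = SystemProfile(
        name="benefit-fraud risk scoring",
        practice="risk_scoring",
        domain="social_benefits_eligibility",
        affects_individuals=True,
    )
    print(f"{fraud_scoring.name}: {screen(fraud_scoring)}")
```

Such a screen only determines which obligations attach; it does not by itself detect the kind of proxy-driven disparity illustrated earlier, which is why the recommendations discussed below pair risk classification with independent audits, impact assessments, and redress mechanisms.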
This phenomenon also generates a fundamental tension with core principles such as due process, the right to a reasoned decision, and effective access to appeal mechanisms. As Teo (2024) argues, the inherent opacity of many AI models—particularly those based on deep learning—prevents individuals affected by a decision from understanding how it was made. This constitutes a form of what he terms "slow violence": a gradual, cumulative, and often invisible harm that erodes citizens' capacity to challenge algorithmic authority.

Finally, the analysis reveals that most existing legal frameworks, including some of the most advanced instruments such as the European Union's AI Act, are not adequately equipped to address the complex challenges posed by algorithmic discrimination. Although regulations like the General Data Protection Regulation (GDPR) incorporate principles such as explainability and data minimization, their implementation in automated environments remains limited (Falletti, 2023; Lendvai & Gosztonyi, 2025).

The AI Act, for its part, introduces a risk-based classification of AI systems and prohibits certain applications deemed to pose unacceptable risk. However, it leaves significant gaps concerning the oversight of algorithms already deployed in the public sector, the involvement of affected individuals, and the availability of accessible redress mechanisms. As Lendvai and Gosztonyi (2025, p. 11) warn, "dominant regulatory approaches remain anchored in a technocratic and formalistic vision, failing to account for the structural conditions of social exclusion that algorithms may reinforce."

At the national level, Ecuador has made progress through the introduction of relevant legislative proposals. The most prominent, introduced on June 20, 2024, draws upon European models by classifying AI systems according to risk, prohibiting specific high-risk applications, and proposing the creation of a National AI Regulatory Authority. However, the draft legislation still lacks clarity regarding the definition of institutional mandates and coordination mechanisms with entities such as the Ombudsman's Office and the Data Protection Superintendency.

In the absence of explicit norms regarding explainability, institutional accountability, and citizen oversight, public administration runs the risk of becoming a space of automated decision-making devoid of democratic control, thereby undermining the principles of legality, accountability, and equal access to rights.

In response to the identified challenges related to algorithmic discrimination, opacity, and weak accountability in public administration, we propose a series of integrated and actionable measures designed to enhance transparency, strengthen institutional oversight, and facilitate access to effective remedies.

First, we recommend implementing independent and continuous algorithmic audits. These should be based on clearly defined technical and social standards that evaluate algorithmic bias, human rights impact, and system explainability. Such audits should be conducted both ex ante and ex post, and their results must be publicly accessible, allowing for citizen observation and feedback mechanisms that promote accountability.

Second, it is essential to institutionalize algorithmic impact assessments (AIA), modeled on international best practices such as those proposed in the European Union's AI Act.
These assessments should include mandatory public consultation prior to the deployment of high-risk systems, as well as mechanisms for continuous monitoring and regular review cycles to ensure compliance with evolving ethical and legal standards.

Third, to mitigate automation bias and selective adherence, public institutions must strengthen meaningful human oversight. This entails providing specialized training that enables public officials to critically evaluate algorithmic recommendations and make informed, autonomous decisions. Additionally, internal protocols should be established to ensure that all automated decisions undergo a mandatory human review before implementation.

Fourth, promoting accessible and understandable transparency is vital. Algorithmic models must be designed with explainability in mind, incorporating user interfaces that translate complex decisions into plain language accessible to the individuals affected. Periodic transparency reports should also be published, detailing system performance, detected biases, and corrective measures applied.

Fifth, we emphasize the creation of effective redress mechanisms. Specific administrative and judicial pathways should be established to allow individuals to contest algorithmic decisions. Furthermore, governments should create independent digital ombuds offices or appoint algorithmic commissioners responsible for investigating complaints and ensuring remedies are delivered in a timely and fair manner.

Sixth, active citizen participation should be embedded into AI governance. This includes the formation of citizen oversight committees and the organization of public hearings to monitor the use of AI in state institutions. Civil society organizations must also be actively involved in both the design and evaluation phases of algorithmic governance frameworks to ensure that pluralistic and inclusive perspectives are represented.

Finally, in the specific case of Ecuador, it is crucial to institutionalize public consultation and participatory mechanisms within the legislative process for AI regulation. The 2008 Constitution, the Law on Citizen Participation, and the mandate of the Council for Citizen Participation and Social Control (CPCCS) provide a robust legal framework
for incorporating public input. Leveraging these tools will strengthen democratic legitimacy, foster trust, and help ensure that AI technologies serve the public interest rather than entrenching new forms of exclusion.

Evidence suggests that the use of artificial intelligence in public administration tends to perpetuate and exacerbate structural inequalities. When decision-making is delegated to algorithmic systems without adequate controls, institutional safeguards and fundamental rights are undermined. To mitigate these impacts, it is essential to strengthen existing regulatory frameworks, promote transparency, ensure meaningful human oversight, establish effective redress mechanisms, and guarantee active citizen participation. Only through these measures can algorithmic governance be steered toward a more democratic, fair, and inclusive model of public administration.

Conclusions

This study demonstrates that the application of artificial intelligence in public administration does not guarantee efficiency, objectivity, or neutrality, and that without robust legal and ethical safeguards it can exacerbate inequalities affecting vulnerable groups. A review of empirical, regulatory, and academic sources reveals that algorithmic systems may reproduce racial, ethnic, socioeconomic, and gender biases, undermining equality before the law, transparency, and due process, particularly when automation bias and overreliance on perceived infallibility occur. The lack of robust national and international regulation limits the prevention and redress of harms, making it necessary to combine technical rules with citizen participation mechanisms, independent audits, algorithmic impact assessments, and ethical design controls. Frameworks such as the UNESCO Recommendation on the Ethics of AI, the Council of Europe's AI Convention, the EU AI Act, and the GDPR provide guidance, but they require adaptation to ensure governance models that are rooted in human rights, inclusion, transparency, accountability, and public oversight.

References

Alon-Barkat, S., & Busuioc, M. (2023). Human–AI interactions in public sector decision making: "Automation bias" and "selective adherence" to algorithmic advice. Journal of Public Administration Research and Theory, 33(1), 153–169. https://doi.org/10.1093/jopart/muac007

Alon-Barkat, S., & Busuioc, M. (2024). Public administration meets artificial intelligence: Towards a meaningful behavioral research agenda on algorithmic decision-making in government. Journal of Behavioral Public Administration, 7(1), 1–19. https://doi.org/10.30636/jbpa.71.261

Amnesty International. (2021, October 14). Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal. https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks [Investigative report]. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Asamblea Nacional del Ecuador. (2024). Proyecto de Ley Orgánica de Regulación y Promoción de la Inteligencia Artificial en Ecuador (Expediente N.º 450889). https://www.asambleanacional.gob.ec/es/multimedios-legislativos/97303-proyecto-de-ley-organica-de-regulacion

Baraniuk, C., & Wang, J. (2024).
Strengthening legal protection against discrimination by algorithms: A critical review of current European norms. Human Rights Law Review, 24(2), 345–372. https://doi.org/10.1080/13642987.2020.1743976

Coitinho, D., & Olivier da Silva, A. L. (2024). Algorithmic injustice and human rights. Unisinos Journal of Philosophy, 25(1), e25109. https://doi.org/10.4013/fsu.2024.251.09

Council of Europe. (2024). Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225). Adopted on 17 May 2024 and opened for signature on 5 September 2024. https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence

European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

European Data Protection Board. (2016). General Data Protection Regulation (GDPR). https://gdpr-info.eu/

Falletti, E. (2023). Algorithmic discrimination and privacy protection. Journal of Digital Technologies and Law, 1(2), 387–420. https://doi.org/10.21202/jdtl.2023.16

Falletti, T. (2023). Algorithmic governance and regulatory gaps: A legal analysis. Universidad Complutense Journal of Law and Technology, 17(3), 201–220.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5

Jones, L., & Dewar, R. (2023). Ethics and discrimination in AI enabled recruitment: Technical and managerial solutions. Palgrave Communications, 9, Article 112. https://doi.org/10.1057/s41599-023-02079-x
Kim, S., & Lee, H. (2024). Algorithmic discrimination: Examining its types and regulatory challenges. Frontiers in Artificial Intelligence, 7, Article 1320277. https://doi.org/10.3389/frai.2024.1320277

Lendvai, G. F., & Gosztonyi, G. (2025). Algorithmic bias as a core legal dilemma in the age of artificial intelligence: Conceptual basis and the current state of regulation. Laws, 14(3), 41. https://doi.org/10.3390/laws14030041

Müller, F., & Schmidt, T. (2024). Automation bias in public administration: An interdisciplinary perspective from law and psychology. Government Information Quarterly, 41, 101797. https://doi.org/10.1016/j.giq.2022.101797

Nixon, R. (2011). Slow violence and the environmentalism of the poor. Harvard University Press. https://www.hup.harvard.edu/books/9780674072343

Patel, R., & González, M. (2023). Bias and discrimination in machine learning–based administrative decision making systems. Policy & Internet, 15(4), 789–808. https://doi.org/10.1016/S0267-3649(24)00136-5

Santos, F. A., & Palhares, D. (2023). Artificial intelligence and human rights: Brazilian perspectives on regulation and fairness. Humanities and Social Sciences Communications, 10, Article 112. https://doi.org/10.1057/s41599-023-02079-x

Teo, S. A. (2024). Artificial intelligence and its 'slow violence' to human rights. AI and Ethics, 5, 2265–2280. https://doi.org/10.1007/s43681-024-00547-x

Tissera, M. G., & Recalde, M. (2022). La gobernanza de la inteligencia artificial: Desafíos éticos y jurídicos para América Latina. Revista Iberoamericana de Ciencia, Tecnología y Sociedad, 17(51), 185–208. https://doi.org/10.22430/22565337.185

UNESCO. (2023). Recomendación sobre la ética de la inteligencia artificial. https://www.unesco.org/es/articles/recomendacion-sobre-la-etica-de-la-inteligencia-artificial

Véliz, C. (2023). Algorithmic fairness: Lessons from political philosophy. Frontiers in Artificial Intelligence, 7, Article 1320277. https://doi.org/10.3389/frai.2023.1320277

Conflicts of interest

The author declares that she has no conflicts of interest.

Author contributions

Vielka M. Párraga: Conceptualization, data curation, formal analysis, investigation, methodology, supervision, validation, visualization, drafting the original manuscript and writing, review, and editing.

Data availability statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Statement on the use of AI

The author acknowledges the use of generative AI and AI-assisted technologies to improve the readability and clarity of the article.

Disclaimer/Editor's note

The statements, opinions, and data contained in all publications are solely those of the individual authors and contributors and not of Journal of Law and Epistemic Studies. Journal of Law and Epistemic Studies and/or the editors disclaim any responsibility for any injury to people or property resulting from any ideas, methods, instructions, or products mentioned in the content.