Criminal liability for the use of Deep Learning in crimes committed against minors
Abstract
Introduction: the development of generative Deep Learning has introduced new forms of risk into the digital environment, particularly the production and dissemination of synthetic sexual content affecting children and adolescents. These technologies make it possible to create highly realistic representations without any underlying real event, posing significant challenges for criminalization, the attribution of responsibility, and evidentiary assessment. Objectives: this article analyzes the criminal liability arising from the use of Deep Learning in crimes against minors, adopting a legal-analytical approach with a socio-technical basis to evaluate whether the Ecuadorian criminal framework is sufficient to address emerging technological risks. Methodology: through normative-comparative analysis and a critical examination of the evidentiary challenges associated with synthetic digital evidence, the study identifies gaps in criminal typification and tensions with the principles of legality and culpability. Results: the study finds a regulatory vacuum in Ecuador and shows that the traditional penal model is ineffective against AI-enabled crimes. Conclusions: finally, normative and technical guidelines are proposed to strengthen an effective, protective criminal response focused on the comprehensive protection of children against the illicit uses of artificial intelligence. General area of study: Social Sciences. Specific area of study: Jurisprudence. Type of article: original.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.