BETWEEN TRUST AND UNCERTAINTY: HOW STUDENTS CONSTRUCT ETHICAL BOUNDARIES IN AI-DRIVEN LEARNING

Authors

  • Omar Alobud, Assistant Professor, College of Science and Health Professions, King Saud bin Abdulaziz University for Health Sciences; King Abdullah International Medical Research Center (KAIMRC); Ministry of National Guard - Health Affairs. https://orcid.org/0009-0004-7541-5873

DOI:

https://doi.org/10.18623/rvd.v22.n4.3741

Keywords:

AI in Education, Academic Trust, Student Ethics, Authorship, Emotional Burden

Abstract

As artificial intelligence becomes more embedded in academic settings, questions of trust, authorship, and ethical responsibility become increasingly urgent—especially in contexts where institutional policy is vague or absent. This qualitative study explores how undergraduate students interpret and navigate the ethical use of AI tools in learning environments lacking clear guidelines. Twelve students participated in semi-structured interviews focused on their perceptions of fairness, authorship, and moral boundaries when engaging with AI technologies. Thematic analysis revealed three key patterns shaping students’ trust and use of AI: reading institutional signals to interpret what is implicitly allowed, managing emotional risks such as guilt or anxiety in the absence of policy clarity, and maintaining personal authority over their academic work despite AI involvement. Rather than relying solely on rules, students constructed their own frameworks for responsible use—often guided by emotional cues, peer discussion, and personal values. These findings suggest that ethical AI literacy requires more than technical competence; it demands shared dialogue, emotional safety, and participatory policy-making. The study calls for institutions to move beyond compliance models and engage students as co-authors of ethical practice in AI-augmented education.

References

Adams, C., Pente, P., Lemermeyer, G., Turville, J., & Rockwell, G. (2022). Artificial intelligence and teachers’ new ethical obligations. The International Review of Information Ethics, 31(1). https://doi.org/10.29173/irie483

Ahuja, A. S., Polascik, B. W., Doddapaneni, D., Byrnes, E. S., & Sridhar, J. (2023). The digital metaverse: Applications in artificial intelligence, medical education, and integrative health. Integrative Medicine Research, 12(1), 100917. https://doi.org/10.1016/j.imr.2022.100917

Berendt, B. (2019). AI for the common good?! Pitfalls, challenges, and ethics pen-testing. Paladyn, Journal of Behavioral Robotics, 10(1), 44–65. https://doi.org/10.1515/pjbr-2019-0004

Bullock, J. B., Pauketat, J. V. T., Huang, H., Wang, Y.-F., & Anthis, J. R. (2025). Public opinion and the rise of digital minds: Perceived risk, trust, and regulation support. arXiv preprint arXiv:2504.21849. https://doi.org/10.48550/arXiv.2504.21849

Choung, H., David, P., & Ross, A. (2022). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human-Computer Interaction, 38(6), 1–15. https://doi.org/10.1080/10447318.2022.2050543

Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148

Das, B. C., Amini, M. H., & Wu, Y. (2024). Security and privacy challenges of large language models: A survey. arXiv preprint arXiv:2402.00888. https://doi.org/10.48550/arXiv.2402.00888

Ding, J., Li, B., Xu, C., Qiao, Y., & Zhang, L. (2023). Diagnosing crop diseases based on domain-adaptive pre-training BERT of electronic medical records. Applied Intelligence, 53(12), 15979–15992. https://doi.org/10.1007/s10489-022-04346-x

European Parliament. (2021). European Parliament resolution of 19 May 2021 on artificial intelligence in education, culture and the audiovisual sector (2020/2017(INI)) [Resolution]. European Parliament. https://www.europarl.europa.eu/doceo/document/TA-9-2021-0238_EN.html

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Kieslich, K., Keller, B., & Starke, C. (2022). Artificial intelligence ethics by design: Evaluating public perception on the importance of ethical design principles of artificial intelligence. Big Data & Society, 9(1), 20539517221092956. https://doi.org/10.1177/20539517221092956

Lai, T., Xie, C., Ruan, M., Wang, Z., Lu, H., & Fu, S. (2023). Influence of artificial intelligence in education on adolescents’ social adaptability: The mediatory role of social support. PLoS ONE, 18(3), e0283170. https://doi.org/10.1371/journal.pone.0283170

Lazarus, R. S. (1991). Emotion and adaptation. Oxford University Press.

Li, Y., Zhu, Y., & Fan, X. (2024). Exploration and enlightenment of adolescent artificial intelligence ethics education: A case study of MIT. Modern Distance Education, 1–13. https://chn.oversea.cnki.net/kcms/detail/detail.aspx?filename=YUAN202401001&dbcode=CJFQ&dbname=CJFDLAST2024

Liu, Y., Han, T., Ma, S., Zhang, J., Yang, Y., Tian, J., … Ge, B. (2023). Summary of ChatGPT-related research and perspective towards the future of large language models. Meta-Radiology, 100017. https://doi.org/10.1016/j.metrad.2023.100017

Mutimukwe, C., Viberg, O., Oberg, L. M., & Cerratto-Pargman, T. (2022). Students’ privacy concerns in learning analytics: Model development. British Journal of Educational Technology, 53(4), 932–951. https://doi.org/10.1111/bjet.13234

NSW Government. (2023). Australian framework for generative artificial intelligence in schools: Consultation paper. NSW Government. https://education.nsw.gov.au/content/dam/main-education/about-us/strategies-and-reports/consultation-items/AI_Consultation_Paper.pdf

Pavlik, J. V. (2023). Collaborating with ChatGPT: Considering the implications of generative artificial intelligence for journalism and media education. Journalism & Mass Communication Educator, 78(1), 84–93. https://doi.org/10.1177/10776958221149577

Pratama, M. P., Sampelolo, R., & Lura, H. (2023). Revolutionizing education: Harnessing the power of artificial intelligence for personalized learning. Klasikal: Journal of Education, Language Teaching and Science, 5(2), 350–357. https://doi.org/10.52208/klasikal.v5i2.877

Qin, A., Jingmei, Y., Xiaoshu, X., Yunfeng, Z., & Huanhuan, Z. (2024). Decoding AI ethics from users’ lens in education: A systematic review. Heliyon, 10(20), e39357. https://doi.org/10.1016/j.heliyon.2024.e39357

Redecker, C. (2017). European framework for the digital competence of educators: DigCompEdu (No. JRC107466). Joint Research Centre (Seville site). https://ideas.repec.org/p/ipt/iptwpa/jrc107466.html

Smith, V., Shamsabadi, A. S., Ashurst, C., & Weller, A. (2023). Identifying and mitigating privacy risks stemming from language models: A survey. arXiv preprint arXiv:2310.01424. https://doi.org/10.48550/arXiv.2310.01424

UNESCO. (2019). Beijing consensus on artificial intelligence and education. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000368303

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000381137

Wang, N., Wang, X., & Su, Y. S. (2024). Critical analysis of the technological affordances, challenges and future directions of generative AI in education: A systematic review. Asia Pacific Journal of Education, 1–17. https://doi.org/10.1080/02188791.2024.2305156

Published

2025-11-21

How to Cite

Alobud, O. (2025). BETWEEN TRUST AND UNCERTAINTY: HOW STUDENTS CONSTRUCT ETHICAL BOUNDARIES IN AI-DRIVEN LEARNING. Veredas Do Direito, 22, e223741. https://doi.org/10.18623/rvd.v22.n4.3741