RESPONSIBLE USE OF AI-GENERATED CONTENT IN VIETNAMESE SCHOLARLY PUBLISHING: EVIDENCE FROM JOURNAL POLICIES AND EDITORIAL PRACTICES

Authors

  • Tran Huu Tuyen Lac Hong University

DOI:

https://doi.org/10.18623/rvd.v23.n1.4029

Keywords:

Generative AI, AIGC, Scholarly Publishing, Journal Policy, Peer Review, Research Integrity, Vietnam, Open Science Governance

Abstract

The rapid diffusion of generative artificial intelligence (GenAI) tools—especially large language models (LLMs)—is reshaping scholarly publishing worldwide. While these tools can support language editing, translation, and workflow efficiency, they also raise integrity risks, including fabricated citations, unverifiable claims, undisclosed ghostwriting, confidentiality breaches in peer review, and contested ownership of AI-assisted outputs. Vietnam’s journal ecosystem is currently navigating internationalization pressures (e.g., indexing and visibility goals) alongside uneven editorial capacity and fragmented policy infrastructure, making it a critical setting for examining responsible governance of AI-generated content (AIGC). This study reports an exploratory policy-and-practice mapping across five Vietnam-affiliated publishing contexts (university-based open access journals, an internationally co-published journal, a defense-related journal, and law/social-science publishing). Using structured qualitative content analysis, we identify shared norms (e.g., “AI cannot be an author,” accountability remains human) but also substantial variation in disclosure requirements, treatment of AI-generated images and references, restrictions on reviewer use of AI tools, and clarity of enforcement mechanisms. Building on these findings and international literature, we propose a Vietnam-tailored governance framework that combines (i) risk-tiered allowable uses, (ii) mandatory disclosure and provenance documentation, (iii) human-in-the-loop editorial controls, and (iv) capacity-building measures aligned with open science principles. The paper contributes practical templates (disclosure language, policy clauses, and a workflow-integrated checklist) to support journals, editors, and research institutions seeking credible, implementable AI governance.

References

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), e35179. https://doi.org/10.7759/cureus.35179

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 610–623. https://doi.org/10.1145/3442188.3445922

Bornmann, L. (2014). Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics. Journal of Informetrics, 8(4), 895–903. https://doi.org/10.1016/j.joi.2014.09.005

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M., Al-Busaidi, K. A., Balakrishnan, J., Barlette, Y., Bresciani, S., Chatterjee, S., ... Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Fang, F. C., Steen, R. G., & Casadevall, A. (2012). Misconduct accounts for the majority of retracted scientific publications. Proceedings of the National Academy of Sciences, 109(42), 17028–17033. https://doi.org/10.1073/pnas.1212247109

Farber, S., Schulte, P., & Neumann, S. (2025). Comparing human and AI expertise in the academic peer review process. Higher Education Research & Development. https://doi.org/10.1080/07294360.2024.2445575

Ganjavi, C., et al. (2024). Publishers’ and journals’ instructions to authors on use of generative AI. BMJ, 384, e077192. https://doi.org/10.1136/bmj-2023-077192

Garcia, M. B. (2024). Using AI tools in writing peer review reports: Should academic journals embrace the use of ChatGPT? Annals of Biomedical Engineering, 52(2), 139–140. https://doi.org/10.1007/s10439-023-03299-7

He, R., Cao, J., & Tan, T. (2025). Generative artificial intelligence: A historical perspective. National Science Review, 12(5), nwaf050. https://doi.org/10.1093/nsr/nwaf050

Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431. https://doi.org/10.1038/520429a

Hosseini, M., & Horbach, S. P. J. M. (2023). Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Research Integrity and Peer Review, 8, 4. https://doi.org/10.1186/s41073-023-00133-5

Kaebnick, G. E., Magnus, D. C., Allen, A. L., Buchanan, A., Caplan, A., Check Hayden, E., ... Zhang, S. (2023). Editors’ statement on the responsible use of generative AI in scientific communication. Medicine, Health Care and Philosophy, 26(4), 499–503. https://doi.org/10.1007/s11019-023-10176-6

Kousha, K., et al. (2024). Artificial intelligence to support publishing and peer review: Opportunities, limitations, and governance needs. Learned Publishing. https://doi.org/10.1002/leap.1570

Laakso, M., Welling, P., Bukvova, H., Nyman, L., Björk, B.-C., & Hedlund, T. (2011). The development of open access journal publishing from 1993 to 2009. PLOS ONE, 6(6), e20961. https://doi.org/10.1371/journal.pone.0020961

Larivière, V., Haustein, S., & Mongeon, P. (2015). The oligopoly of academic publishers in the digital era. PLOS ONE, 10(6), e0127502. https://doi.org/10.1371/journal.pone.0127502

Leung, T. I., Sharma, T., Dash, S., Hall, R., & Tikka, T. (2023). Best practices for using AI tools as an author, peer reviewer, or editor. Journal of Medical Internet Research, 25, e51584. https://doi.org/10.2196/51584

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779

Mugaanyi, J., et al. (2024). Evaluation of large language model performance and citation reliability in academic referencing. Journal of Medical Internet Research, 26, e52935. https://doi.org/10.2196/52935

Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., ... Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374

Piwowar, H. A., & Vision, T. J. (2013). Data reuse and the open data citation advantage. PeerJ, 1, e175. https://doi.org/10.7717/peerj.175

Piwowar, H., Priem, J., Larivière, V., Alperin, J. P., Matthias, L., Norlander, B., Farley, A., West, J., & Haustein, S. (2018). The state of OA: A large-scale analysis of the prevalence and impact of open access articles. PeerJ, 6, e4375. https://doi.org/10.7717/peerj.4375

Resnik, D. B. (2025). Disclosing artificial intelligence use in scientific research and writing. Accountability in Research. https://doi.org/10.1080/08989621.2025.2481949

Rojas, A. J., et al. (2024). An investigation into ChatGPT’s application for scientific learning and writing support. Journal of Chemical Education. https://doi.org/10.1021/acs.jchemed.4c00034

Ross-Hellauer, T. (2017). What is open peer review? F1000Research, 6, 588. https://doi.org/10.12688/f1000research.11369.2

Santos, S. (2024). The revisional legislative process in urban policy and its foundations: Democracy and sustainability. Veredas do Direito, 21, e212459. https://doi.org/10.18623/rvd.v21.2459

Shamseer, L., Moher, D., Maduekwe, O., Turner, L., Barbour, V., Burch, R., ... Shea, B. J. (2017). Potential predatory and legitimate biomedical journals: Can you tell the difference? A cross-sectional comparison. BMC Medicine, 15, 28. https://doi.org/10.1186/s12916-017-0785-9

Shen, C., & Björk, B.-C. (2015). “Predatory” open access: A longitudinal study of article volumes and market characteristics. BMC Medicine, 13, 230. https://doi.org/10.1186/s12916-015-0469-2

Spadotto, A. J. (2023). Analysis and fundamentation of normative interfaces between halal and organic in sustainable food production. Veredas do Direito, 20, e202528. https://doi.org/10.18623/rvd.v20.2528

Storey, V. C., et al. (2025). Generative artificial intelligence: Evolving technology and emerging research agenda. Information Systems Frontiers. https://doi.org/10.1007/s10796-025-10581-7

Sun, Y., Ren, Y., & Yuan, J. (2025). Policies, challenges, and countermeasures of using AIGC in academic journal publishing. Chinese Journal of Scientific and Technical Periodicals, 36(2), 144–152. https://doi.org/10.11946/cjstp.202409291084

Tang, A. (2024). The importance of transparency: Declaring the use of generative AI tools. Journal of Nursing Scholarship. https://doi.org/10.1111/jnu.12938

Tennant, J. P., Crane, H., Crick, T., Davila, J., Enkhbayar, A., Havemann, J., ... Vanholsbeeck, M. (2017). A multi-disciplinary perspective on emergent and future innovations in peer review. F1000Research, 6, 1151. https://doi.org/10.12688/f1000research.12037.3

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879

Vargas, M., et al. (2025). Understanding generative AI output with embedding models. Science Advances. https://doi.org/10.1126/sciadv.adx4082

Varella, M. D. (2022). Why do states protect the environment? Cultural diversity in international environmental lawmaking. Veredas do Direito, 19(44). https://doi.org/10.18623/rvd.v19i45.2153

Vuong, Q.-H. (2018). The (ir)rational consideration of the cost of science in transition economies. Nature Human Behaviour, 2, 5–8. https://doi.org/10.1038/s41562-017-0281-4

Vuong, Q.-H. (2020). Reform retractions to make them more transparent. Nature 582 (World View), 149. https://doi.org/10.1038/d41586-020-01694-x

Walters, W. H., & Wilder, E. I. (2023). Fabrication and errors in the bibliographic citations generated by ChatGPT. Scientific Reports, 13, 14045. https://doi.org/10.1038/s41598-023-41032-5

Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., ... Mons, B. (2016). The FAIR guiding principles for scientific data management and stewardship. Scientific Data, 3, 160018. https://doi.org/10.1038/sdata.2016.18

Published

2026-01-05

How to Cite

Tuyen, T. H. (2026). RESPONSIBLE USE OF AI-GENERATED CONTENT IN VIETNAMESE SCHOLARLY PUBLISHING: EVIDENCE FROM JOURNAL POLICIES AND EDITORIAL PRACTICES. Veredas do Direito, 23, e234029. https://doi.org/10.18623/rvd.v23.n1.4029