Editorial Policies on the Use of AI

Editorial Policies on AI Use Adopted by RATIO JURIS

Ratio Juris adheres to international ethical principles regarding the use of AI in scientific publishing. Authors must disclose any use of AI tools and assume full responsibility for the content produced with their assistance. Given the rapid evolution of this technology, these policies will be reviewed periodically to ensure their continued relevance.

Guidelines for Disclosure and Use of AI Tools

Authors must explicitly declare the use of AI tools, specifying:

  • The name of the tool.
  • How it was used.
  • An acknowledgment that AI systems cannot be listed as authors under any circumstances.

These guidelines align with the policies of major publishers and organizations, including Elsevier, Springer, Wiley, Cambridge, ACS, Emerald, AIP, IEEE, AAAS, COPE, WAME, JAMA, Taylor & Francis, and Science:

AI Authorship
  • Universal prohibition: AI tools (LLMs, chatbots) cannot be listed as authors or co-authors (Elsevier, Springer, Wiley, Cambridge, ACS, Emerald, AIP, IEEE, AAAS, COPE).
  • Authorship entails legal and ethical responsibility, which is exclusive to humans (COPE, WAME, JAMA).

Disclosure of Use
  • Mandatory transparency: AI use must be declared in the Methods/Materials section (Springer, COPE, JAMA) or in the Acknowledgments (ACS, IEEE, Taylor & Francis).
  • Required details: name, version, manufacturer, and prompts used (JAMA, WAME).

AI-Generated Content
  • Images: prohibited unless they form part of the research methodology (Elsevier, Springer, AAAS).
  • Text: permitted only with explicit disclosure; prohibited in Science/AAAS except under special circumstances.

Accountability
  • Authors are responsible for the accuracy, originality, and ethics of all content, including AI contributions (all publishers).
  • Authors must ensure the absence of plagiarism and validate the data (Cambridge, ACS).

Exceptions
  • AI use is permitted when AI itself is the research subject (e.g., methodological studies in Elsevier/JAMA).
  • Basic image adjustments (brightness/contrast) do not require disclosure (Elsevier).

Adapted and summarized from information published by Elsevier, Springer, Wiley, Cambridge, ACS, Emerald, AIP, IEEE, AAAS, COPE, WAME, JAMA, Taylor & Francis, and Science.

Authorship Criteria Based on the CRediT Taxonomy

Only human contributors may be listed as authors, in accordance with their role in the research and writing process, as defined by the Contributor Roles Taxonomy (CRediT):

  • Conceptualization: Development of core ideas; formulation of legal research questions; hypothesis/thesis proposal.
  • Methodology: Design of the legal methodology (dogmatic, hermeneutic, comparative, socio-legal, etc.); analytical framework.
  • Software: Development of legal software, normative analysis systems, legislative databases, or legal simulators.
  • Formal Analysis: Application of analytical techniques (jurisprudential review, doctrinal analysis, legal statistics, normative mapping).
  • Investigation (Experimentation): Execution of case studies, file reviews, empirical analysis, interviews, or legal fieldwork.
  • Investigation (Data Collection): Gathering of legal documents, comparative legislation, court rulings, doctrines, or testimonies.
  • Resources: Provision of legal materials, normative databases, specialized literature, or access to archives.
  • Data Curation: Organization, systematization, and annotation of legal documents for analysis.
  • Writing – Original Draft: Drafting of the manuscript (introduction, theoretical framework, argumentation, conclusions).
  • Writing – Review & Editing: Critical revision; contributions to legal precision, citations, and argumentative structure.
  • Visualization: Creation of comparative tables, normative maps, legal infographics, etc.
  • Supervision: Academic oversight; validation of the theoretical-methodological approach.
  • Project Administration: Research coordination, task organization, and editorial compliance.
  • Funding Acquisition: Securing institutional or external funding for the research.
Adapted from the Contributor Roles Taxonomy (CRediT, 2025).
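For reference, each CRediT role also has a machine-readable identifier published by NISO (CRediT, 2025). The Python sketch below is purely illustrative and is not part of the journal's editorial workflow: it maps a few role names to identifier URIs under the URL pattern used on credit.niso.org. The role slugs and contributor names shown are assumptions for illustration and should be verified against the NISO CRediT site.

```python
# Illustrative sketch only: recording CRediT roles for human contributors as
# machine-readable identifiers. The URI pattern and role slugs are assumptions
# based on the NISO CRediT site (https://credit.niso.org/) and should be
# verified there; contributor names below are placeholders.

CREDIT_BASE = "https://credit.niso.org/contributor-roles/"

# Subset of CRediT roles with their assumed URL slugs.
CREDIT_ROLE_SLUGS = {
    "Conceptualization": "conceptualization",
    "Methodology": "methodology",
    "Formal Analysis": "formal-analysis",
    "Writing - Original Draft": "writing-original-draft",
    "Writing - Review & Editing": "writing-review-editing",
    "Supervision": "supervision",
}


def credit_role_uri(role: str) -> str:
    """Return the assumed CRediT identifier URI for a named role."""
    return f"{CREDIT_BASE}{CREDIT_ROLE_SLUGS[role]}/"


# Hypothetical contributor records (names and role assignments are examples).
contributors = [
    {"name": "Author One", "roles": ["Conceptualization", "Writing - Original Draft"]},
    {"name": "Author Two", "roles": ["Methodology", "Writing - Review & Editing"]},
]

for person in contributors:
    for role in person["roles"]:
        print(f'{person["name"]}: {role} -> {credit_role_uri(role)}')
```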

Editorial adaptation by José Fernando Valencia Grajales, based on the references cited below.

References

  1. AIP Publishing. (2024). Policy on AI-generated content. https://publishing.aip.org/resources/researchers/policies-and-ethics/authors/
  2. American Chemical Society (ACS). (2024). AI in publishing: The ghost writer in the machine. https://axial.acs.org/publishing/ai-in-publishing-the-ghost-writer-in-the-machine
  3. Cambridge University Press. (2024). AI contributions to research content. https://www.cambridge.org/core/services/authors/publishing-ethics/research-publishing-ethics-guidelines-for-journals/authorship-and-contributorship/fai-contributions-to-research-content
  4. COPE, Committee on Publication Ethics. (2023). Authorship and AI tools: COPE position statement. https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
  5. COPE, Committee on Publication Ethics. (2023). Artificial intelligence and authorship. https://publicationethics.org/news-opinion/artificial-intelligence-and-authorship
  6. NISO, National Information Standards Organization. (2025). CRediT role descriptors. https://credit.niso.org/contributor-roles-defined/
  7. Elsevier. (2024). The use of AI and AI-assisted technologies in writing for Elsevier. https://www.elsevier.com/about/policies/publishing-ethics-books/the-use-of-ai-and-ai-assisted-technologies-in-writing-for-elsevier
  8. Emerald Publishing. (2024). Emerald Publishing's stance on AI tools and authorship. https://www.emeraldgrouppublishing.com/news-and-press-releases/emerald-publishings-stance-ai-tools-and-authorship
  9. Flanagin, A., Bibbins-Domingo, K., Berkwits, M., & Christiansen, S. L. (2023). Nonhuman "Authors" and implications for the integrity of scientific publication and medical knowledge. JAMA, 329(8), 637–639. doi:10.1001/jama.2023.1344. https://jamanetwork.com/journals/jama/fullarticle/2801170
  10. IEEE. (2025). Guidelines for artificial intelligence (AI)-generated text. https://conferences.ieeeauthorcenter.ieee.org/author-ethics/guidelines-and-policies/submission-policies/
  11. AAAS, American Association for the Advancement of Science. (2024). About the (AI)² Initiative. https://www.aaas.org/ai2/about
  12. ScienceAdviser. (2024). Editorial policies: AI-generated content. https://www.science.org/content/page/science-journals-editorial-policies#:~:text=ln%20addition%2C%20a
  13. Springer Nature. (2024). Artificial intelligence (AI) editorial policy. https://www.springer.com/de/editorial-policies/artificial-intelligence--ai-/25428500
  14. Taylor & Francis. (2024). Responsible use of AI tools in academic content creation. https://authorservices.taylorandfrancis.com/editorial-policies/#use-of-ai
  15. WAME, World Association of Medical Editors. (2023). Chatbots, Generative AI, and Scholarly Manuscripts. https://wame.org/page3.php?id=106
  16. Wiley. (2025). Ethics guidelines: Use of generative AI. https://authorservices.wiley.com/ethics-guidelines/index.html#use-of-ai