AI and Automated Tools Usage

The Journal has policies on the responsible use and disclosure of GenAI tools in research and publishing, emphasising transparency, author accountability, and human responsibility for approving content.
 
Artificial intelligence (AI) is the use of digital technologies, such as algorithms, machine learning, and large language models, to create systems or tools capable of performing tasks that typically require human intelligence, including data analysis, language generation, pattern recognition, and decision making.
Generative AI (GenAI) refers to artificial intelligence systems that use models trained on large datasets, including large language models, to create new content such as text, images, code, audio, video, and other media.

From Authors:

  • Automated tools cannot be credited as Authors. AI tools cannot meet the requirements for authorship because they cannot take responsibility for the submitted work. As non-legal entities, they can neither assert the presence or absence of conflicts of interest nor manage copyright and licence agreements. This means that the Author, not the AI, must conceive the scientific research, develop its concept, outline the structure of the research, formulate the conclusions, and write the manuscript according to the IMRaD structure (Introduction, Methods, Results, and Discussion).

  • Authors are responsible for verifying the validity of the output of any automated tools used in their research and in preparing their manuscript.

  • Authors should disclose any use of generative AI in preparing the paper beyond straightforward language correction, editing, and formatting.

  • Generative AI cannot be cited as a source. Authors must indicate the extent of AI usage in the paper after the Conclusions, in the Declarations section. Authors who use AI tools in the writing of a manuscript, in the production of images or graphical elements, or in the collection and analysis of data must transparently disclose, in both the Declarations section and the Materials and Methods section, which tool was used and how it was used.

  • Authors are fully responsible for the content of their manuscript, including any parts produced by an AI tool, and are thus liable for any breach of publication ethics.

From the Journal:

  • Peer reviewers and editors should not use generative AI to create their assessments, due to risks such as breaches of confidentiality, superficial and non-specific feedback, bias, hidden prompts, and false information such as fake references; editing and rewriting may be acceptable if disclosed.

  • Any routine use of automated tools by the journal or publisher should be disclosed. The tool should have been appropriately tested.

  • Use of automated tools should be overseen by humans (human in the loop). The journal ensures that an editor or other staff member will verify automated detection of integrity issues, such as text similarity, figure manipulation or duplication, or undeclared generative AI use, as well as any automated peer reviewer suggestions.