Policy on the Use of Artificial Intelligence and Supporting Technologies

General provisions

This policy sets out the principles for the responsible use of generative artificial intelligence in the preparation, submission, review and editorial processing of scientific manuscripts. The document is consistent with international standards of academic integrity recommended by the Committee on Publication Ethics (COPE). The policy aims to ensure transparency, accuracy and trust in scientific results obtained with the participation of digital technologies.

Authors’ Use of Generative AI

Authors may use generative AI tools in an auxiliary capacity, for example for language correction, stylistic editing, technical translation, clarification of wording or text structuring. Generative AI may also be used for code analysis, calculations and statistical operations, provided that the results are verified by the author. The creation of auxiliary illustrative materials is permitted, but authors bear full responsibility for their accuracy.

At the same time, generative AI may not be used to fabricate data, quotations, bibliographic sources or artificially generated “research results”. It is prohibited to use generative models to produce content that is presented as an original scientific contribution without the actual participation of the author. Authors also may not upload confidential, unpublished or protected data to AI tools.

Disclosure of the Use of Generative Artificial Intelligence

AI tools cannot meet the requirements for authorship because they cannot take responsibility for the submitted work. Because they are not legal entities, they cannot declare the presence or absence of conflicts of interest, nor can they hold copyright or enter into licensing agreements.

Authors who use AI tools in writing the manuscript, producing images or graphical elements of the manuscript, or in collecting and analyzing data should be transparent by disclosing, in the Materials and Methods (or similar section) of the manuscript or in the Declaration of Use of Generative Artificial Intelligence section, which AI tool was used and how it was used. Authors are fully responsible for the content of their manuscript, including any parts generated by an AI tool, and are therefore liable for any violation of publication ethics.

Use of Generative Artificial Intelligence by Reviewers

Reviewers are prohibited from passing manuscript content to third-party generative artificial intelligence models, as this violates the confidentiality principle set out by COPE. Reviews should not be generated entirely or substantially by artificial intelligence tools. Such tools may be used only technically, to improve the language or structure of the reviewer’s own text, without disclosing the content of the manuscript.

Use of Generative AI by the Editorial Team

The Editorial Team may use generative AI tools for technical and administrative tasks, such as improving the language quality of supporting communications or automating organizational processes. The Editorial Team shall not use such tools to make editorial decisions, alter the content of manuscripts, or process confidential materials without the Editor’s oversight.

Editors and reviewers should disclose, both to authors and to each other, any use of chatbots in evaluating manuscripts, preparing reviews, or conducting correspondence. Editors and reviewers are responsible for any content and quotations generated by a chatbot. They should be aware that chatbots store the prompts sent to them, including the content of the manuscript, and that providing an author’s manuscript to a chatbot violates the confidentiality of the manuscript submitted for publication.

Editors should, where possible, use appropriate tools to help them detect content created or modified by AI, applying such tools for the benefit of science and the public.

Authors’ Responsibility

Authors are fully responsible for all scientific results, data, statements, citations and interpretations presented in the manuscript, regardless of whether generative artificial intelligence was used. If artificial intelligence tools have introduced inaccuracies, errors or violations, the responsibility for correcting them and for any consequences remains with the authors. In cases of policy violations, the editorial board reserves the right to request revisions, reject the manuscript or apply ethical procedures in accordance with the COPE recommendations.

Policy Updates

This policy may be reviewed and updated in response to developments in generative artificial intelligence technologies, improvements in international standards and changes in scientific communication practices.