The "Technologies and Engineering" maintains strict rules regarding the use of generative AI, based on recommendations by WAME and Elsevier. The primary aim of the policy is to ensure academic integrity, prevent falsification and protect trust in published research. Generative language models cannot be listed as authors, as they cannot take legal or ethical responsibility, finalise the manuscript, or respond to reviewer feedback. Their use is limited to editorial assistance such as improving wording, grammar or clarity. Using AI to generate scientific findings, interpretations, statistics, visual materials, fabricated citations or simulated analysis is prohibited. In exceptional cases where AI forms part of an experimental method, such use must be thoroughly described in the “Materials and Methods” section.
All uses of generative AI must be clearly disclosed. Editorial assistance with language should be noted in the “Acknowledgements”, while methodological use must be described in “Materials and Methods” and, where appropriate, in the abstract. Authors must specify the tool used, its version, its purpose, and the method of interaction. Undisclosed use of AI is considered an ethical violation.
Authors bear full responsibility for the accuracy and integrity of their work, regardless of whether AI tools were used. Reviewers are prohibited from uploading manuscripts into AI systems or generating reviews through AI. Editors may use AI-detection tools but cannot upload manuscripts to generative platforms without author consent.
Violations of this policy may result in suspension of the review process, rejection of the manuscript, institutional notification or, if discovered after publication, retraction of the article.