Artificial Intelligence Policy

Generative Artificial Intelligence (AI) tools, such as large language models (LLMs) or multimodal models, continue to develop and evolve, including in their application for businesses and consumers.

Universitas Tadulako welcomes the new opportunities offered by Generative AI tools, particularly in: enhancing idea generation and exploration, supporting authors to express content in a non-native language, and accelerating the research and dissemination process.

Universitas Tadulako offers the following guidance to authors, editors, and reviewers on the use of such tools; this guidance may evolve given the swift development of the AI field. Generative AI tools can produce diverse forms of content, spanning text generation, image synthesis, audio, and synthetic data. Examples include ChatGPT, Copilot, Gemini, Claude, NovelAI, Jasper AI, DALL-E, Midjourney, and Runway.

While Generative AI offers immense potential to enhance creativity for authors, the current generation of Generative AI tools carries certain risks.

Some of the risks associated with the way Generative AI tools work today are:

  1. Inaccuracy and bias: Generative AI tools are statistical in nature (as opposed to factual) and, as such, can introduce inaccuracies, falsities (so-called hallucinations), or bias, which can be hard to detect, verify, and correct. 
  2. Lack of attribution: Generative AI tools often lack the global scholarly community's standard practice of correctly and precisely attributing ideas, quotes, or citations.
  3. Confidentiality and Intellectual Property Risks: At present, Generative AI tools are often used on third-party platforms that may not offer sufficient standards of confidentiality, data security, or copyright protection.
  4. Unintended uses: Generative AI providers may reuse the input or output data from user interactions (e.g., for AI training). This practice could potentially infringe on the rights of authors and publishers, amongst others.

AUTHORS
Authors may use generative AI tools (e.g., ChatGPT, GPT models) for specific tasks, such as enhancing the grammar, language, and readability of their manuscripts. However, authors remain responsible for the originality, validity, and integrity of their submissions. When using generative AI tools, authors must do so responsibly and adhere to our journal's editorial policies on authorship and publication ethics. This responsibility encompasses reviewing outputs from any AI tools and ensuring the accuracy of the content.

Universitas Tadulako endorses the responsible use of generative AI tools, provided that high standards of data security, confidentiality, and copyright protection are maintained, in instances such as the following:

- Idea generation and idea exploration

- Language improvement

- Interactive online search with LLM-enhanced search engines

- Literature classification

- Coding assistance

Authors are responsible for ensuring that the content of their submissions meets the required standards of rigorous scientific and scholarly assessment, research, and validation, and that it is the authors' own creation.

Generative AI tools should not be credited as authors, as they are unable to assume responsibility for the content submitted or to manage copyright and licensing agreements. Authorship necessitates accountability for the content, consent to publication through a publishing agreement, and the provision of contractual assurances regarding the integrity of the work, among other essential principles. Generative AI tools cannot fulfil these uniquely human responsibilities.

Authors must clearly acknowledge any use of generative AI tools in their articles by including a statement that specifies the full name of the tool (along with its version number), how it was used, and the reason for its use. For article submissions, this statement should be placed in either the Methods or Acknowledgements section. This transparency allows editors to assess the responsible use of generative AI tools. Universitas Tadulako will maintain discretion over the publication of the work to ensure that integrity and guidelines are upheld.

If an author intends to use an AI tool, they must ensure that it is suitable and robust for their intended purpose. Additionally, they should verify that the terms associated with such a tool offer adequate safeguards and protections, particularly concerning intellectual property rights, confidentiality, and security.

Authors should avoid submitting manuscripts that use generative AI tools in ways that compromise fundamental researcher and author responsibilities, for example:

- Text or code generation without rigorous revision

- Synthetic data generation to substitute missing data without a robust methodology

- Generation of any content that is inaccurate, including abstracts or supplemental materials

These types of cases may be subject to editorial investigation.
Universitas Tadulako currently prohibits the use of generative AI to create and manipulate images and figures, as well as to generate original research data, for inclusion in our publications. The term “images and figures” encompasses pictures, charts, data tables, medical imagery, image snippets, computer code, and formulas. “Manipulation” refers to augmenting, concealing, moving, removing, or introducing specific features within an image or figure.

Human oversight and transparency must consistently inform the use of generative AI and AI-assisted technologies throughout every stage of the research process. Research ethics guidelines are continually revised to reflect advances in generative AI technologies. Universitas Tadulako will continue to update our editorial guidelines as both technology and ethical standards in research continue to develop.