Artificial Intelligence

Policy on the Use of Artificial Intelligence (AI) in Scholarly Publishing

Purpose
The International Journal of Health and Rehabilitation (IJHR_ORG) acknowledges the emerging role of Artificial Intelligence (AI) tools in the research and publication process. This policy establishes clear ethical principles, disclosure requirements, and guidelines for the responsible use of AI technologies by all parties—authors, reviewers, and editors—to preserve the integrity, transparency, and human accountability that are fundamental to scholarly work.

Scope
This policy applies to the use of AI and Large Language Model (LLM) tools (e.g., ChatGPT, Claude, GPT-4, Bard, and other machine learning-based systems) in the context of manuscript preparation, peer review, editorial evaluation, and published content within IJHR_ORG.

1. Authorship, Attribution, and Accountability

  • AI is Not an Author: AI tools, chatbots, or LLMs cannot fulfill the criteria for authorship as defined by the ICMJE, which require accountability for the work. Therefore, AI shall not be listed as an author or co-author on any submission.

  • Mandatory Disclosure: Authors must transparently disclose the use of AI-assisted technologies in their manuscript within a dedicated section (e.g., "Declaration of Generative AI in Scientific Writing"). This disclosure must specify:

    • The name and version of the AI tool used.

    • The specific aspects of the work where AI was employed (e.g., language editing, literature search support, data analysis, image creation).

    • The extent of human oversight, verification, and editing applied to the AI-generated content.

  • Ultimate Human Responsibility: The corresponding author assumes full responsibility for the entire content of the manuscript, including any portions developed with AI assistance. Authors are accountable for ensuring the accuracy, originality, and ethical compliance of all content, verifying cited sources, and correcting any factual inaccuracies or biases introduced by AI.

2. Preparation of Manuscripts and Originality

  • Intellectual Input: AI should be used as an assistive tool to enhance human work, not to replace critical intellectual tasks such as hypothesis generation, experimental design, data interpretation, and the formulation of original conclusions.

  • Plagiarism and Integrity: Text, ideas, or data generated by AI must be rigorously reviewed and edited by the authors. AI-generated content is not inherently original and may reproduce biased or unverified information from its training data. All submissions are subject to plagiarism screening, and authors are responsible for ensuring the final manuscript represents their own scholarly contribution.

3. Peer Review and Editorial Process

  • Human-Centric Review: The journal's double-blind peer-review process is fundamentally a human scholarly exercise. Reviewers must not use AI tools to generate or substantially draft their assessment reports; they must provide their own expert judgment and critique.

  • Confidentiality: Reviewers and editors are strictly prohibited from uploading confidential manuscript content to, or sharing it with, third-party or unsecured AI platforms, as doing so constitutes a breach of peer-review confidentiality.

  • Editorial Decisions: Final editorial decisions must be made by human editors based on human peer-review recommendations. AI may be used administratively to screen for technical compliance, but not to make substantive judgments on a manuscript's scientific merit.

4. Research Involving AI as a Methodological Tool
For manuscripts where the development or application of an AI/ML algorithm is the primary subject of the research (e.g., a novel diagnostic algorithm), the following must be provided to ensure reproducibility and transparency:

  • A detailed description of the AI methodology, including model architecture, training data sources, and validation procedures.

  • A statement on ethical review for the data used to train and test the model.

  • Where possible, public availability of code and, subject to ethical constraints, access to the underlying datasets or a representative sample.

5. Addressing Bias and Ethical Risks
Authors, reviewers, and editors must be aware that AI tools can perpetuate or amplify societal biases present in their training data. When using AI, all parties should remain alert to potential biases in content generation, data analysis, or language, and take steps to identify and mitigate them.

6. Enforcement and Non-Compliance
Failure to appropriately disclose the use of AI, or misuse of AI that compromises the integrity of the submission or review process (e.g., breach of confidentiality, submission of wholly AI-generated manuscripts), will be treated as a serious ethical violation. Consequences may include immediate rejection of the manuscript, retraction of a published article, and, in cases of misconduct, notification of the authors' institutions.

7. Policy Review
This policy will be reviewed and updated regularly by the editorial board to reflect the rapid evolution of AI technologies and emerging best practices in academic publishing.

By adhering to this policy, IJHR_ORG aims to harness the potential benefits of AI as an assistive technology while steadfastly upholding the core values of authorial responsibility, scholarly originality, and transparent communication that define credible scientific discourse.