Editorial
17(2); 45-46
doi: 10.25259/BJKines_22_2025

Unfair practice of using AI in writing manuscripts - A new challenge for the editorial team!

Nidhi Bhatnagar
Department of Immunohematology & Blood Transfusion, B.J. Medical College and Civil Hospital, Ahmedabad, Gujarat, India.

*Corresponding author: Dr Nidhi Bhatnagar, Department of Immunohematology & Blood Transfusion, B.J. Medical College and Civil Hospital, Ahmedabad, Gujarat, India. bhatnagarnidhi@ymail.com

Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.

How to cite this article: Bhatnagar N. Unfair practice of using AI in writing manuscripts - a new challenge for the editorial team! BJKines - Natl J Basic Appl Sci. 2025;17:45-6. doi: 10.25259/BJKines_22_2025

Generative artificial intelligence (AI) has rapidly emerged as a transformative force in academic writing, offering tools that can assist with language refinement, structure, and efficiency in manuscript preparation. The rapid adoption of large language models (LLMs) and generative AI tools (e.g., ChatGPT) for drafting and revising research papers has raised urgent ethical concerns in medical publishing, including issues of academic integrity, copyright, and accountability.1 Major editorial organizations, such as the Committee on Publication Ethics (COPE), the International Committee of Medical Journal Editors (ICMJE), and the World Association of Medical Editors (WAME), as well as major journals, have begun issuing directives to help journals navigate this evolving environment.1,2 This editorial is a cautionary call against the unfair practice of submitting AI-generated or heavily AI-rewritten manuscripts without full disclosure.

Authorship and accountability in the age of AI

The foundation of scholarly publishing rests on a simple premise: authors must be answerable for what appears under their name. AI tools cannot meet this standard. Both COPE and the ICMJE explicitly declare that AI tools are not authors and cannot be listed as such. COPE notes that “AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work.”3 Likewise, the ICMJE now instructs journals to “require authors to disclose” any use of AI in the manuscript and expressly states that “Generative AI Tools (such as ChatGPT) should not be listed as authors because they cannot be responsible for the accuracy, integrity, and originality of the work.”4

Thus, any text produced by AI must be attributed to its human user, not to the AI itself. When an individual submits an article largely produced by an AI system without acknowledging its contribution, it becomes a form of intellectual misrepresentation. Editors and reviewers would be misled about who conceived the ideas, analysis, and arguments. In effect, such undisclosed AI use mimics ghost authorship or plagiarism.5 AI-enabled writing has been termed “AIgiarism,” a new form of plagiarism whereby authors present AI-generated text as their own.

Threats to scientific integrity and misrepresentation

Beyond authorship, undisclosed AI writing threatens the integrity of scientific content. Generative AI can fabricate data, cite nonexistent papers, or recycle stock phrases, all of which undermine validity. For example, one experiment had ChatGPT peer-review a manuscript: it produced a plausible summary but then supplied detailed feedback that was irrelevant or incorrect, and even returned references to articles that do not exist.2 If such AI hallucinations find their way into submitted manuscripts unchecked, readers and editors may be misled.

Worse, AI may enable a flood of superficially similar or trivial papers. A recent analysis of COVID-related health studies found a surge of papers using the same public dataset, the National Health and Nutrition Examination Survey (NHANES), in almost identical ways, exhibiting remarkably similar structures and analytical approaches, a pattern described as a “paper mill.”1 This explosion coincided with the emergence of generative AI tools, suggesting AI may be facilitating mass-produced science articles.

Editorial challenges and detection limitations

Distinguishing AI-written from human-written manuscripts is extremely challenging. Current detection tools are still crude and error-prone. For instance, one review noted that AI detectors often struggle with accuracy and bias, with known tendencies to misclassify human-written content as AI-generated. Another authority warns that AI detection software often lacks the reliability needed to flag AI use definitively.5 Indeed, in a recent case, the same manuscript was evaluated by two detectors: one flagged it as AI-written while the other did not.6 False positives (human text flagged as AI) and false negatives (AI text passing as human) are common. This means editorial teams cannot rely solely on automated tools to police AI misuse. Instead, experienced editors and peer reviewers must maintain vigilance, looking for unusually polished prose, generic phrasing, suspiciously compact literature reviews, and erroneous references, all hallmarks of LLM output.

In practice, this raises the bar for editorial scrutiny. Journals may need new workflows: requiring authors to declare AI use explicitly, adopting specialized editorial checklists, and training editors to spot red flags. However, even the best human eyes can miss clever AI paraphrasing. The result is a cat-and-mouse game: as AI tools improve, detection grows harder.

Policies, vigilance, and the path forward

The publishing community is responding. Many journals now mandate that authors report any AI use in their cover letters and manuscripts. For example, the Journal of the American Medical Association (JAMA) and its sister journals have inserted submission queries such as “Did you use AI, a language model, machine learning, or similar technologies to create or assist with creation or editing of any of the content in this submission?”2 Authors who answer “yes” must describe precisely what content was AI-generated and identify the tool.

Journals are advised to update “Instructions for Authors” to forbid undisclosed AI use and require statements of AI contribution. Some recommend publishing AI policies as editorials to raise community awareness.2 Simultaneously, publishers and technologists are developing better detection algorithms and cross-checks. However, given current limitations, the ultimate safeguard is a culture of ethics: authors and co-authors must value originality and honesty above convenience.

CONCLUSION

The use of generative AI in scientific writing is a double-edged sword: it can improve clarity and efficiency, but it also creates novel avenues for misconduct. Undisclosed AI authorship and rewriting can distort authorship credit, facilitate plagiarism, and dilute the trustworthiness of medical literature. Editorial teams face a new challenge in policing these practices. Clear policies, improved tools, and, above all, individual integrity are needed to ensure AI remains a helpful assistant rather than an unfair shortcut. Journals should reinforce “human accountability” and transparent disclosure to guide the ethical use of AI and protect the integrity of science.

We therefore call on researchers and medical authors to exercise personal responsibility. Every author should verify that any AI-assisted text truly reflects their ideas and data. If AI is used, even for editing or translation, it should be acknowledged. Authors must ensure proper citation of any content (text or images) generated with AI and double-check all facts and references. Peer reviewers, likewise, should be alert to signs of AI-crafted text and flag any suspected misuse.

References

  1. Defining the boundaries of AI use in scientific writing: A comparative review of editorial policies. J Korean Med Sci [Internet]. 2025;40:e187. Available from: https://jkms.org/DOIx.php?id=10.3346/jkms.2025.40.e187 [Last accessed 5 December 2025].
  2. Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA [Internet]. 2023;330:702-3. Available from: https://jamanetwork.com/ [Last accessed 5 December 2025].
  3. Responsibility is not required for authorship. J Med Ethics [Internet]. 2025;51:230-2. Available from: https://jme.bmj.com [Last accessed 5 December 2025].
  4. International Committee of Medical Journal Editors. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals [Internet]. Available from: https://www.icmje.org/icmjerecommendations.pdf [Last accessed 5 December 2025].
  5. Undeclared AI-assisted academic writing as a form of research misconduct. Science Editor [Internet]. Available from: https://www.csescienceeditor.org/article/undeclared-ai-assisted-academic-writing-as-a-form-of-research-misconduct/ [Last accessed 5 December 2025].
  6. Committee on Publication Ethics. COPE position: Authorship and AI tools [Internet]. [Last accessed 5 December 2025].