Artificial Intelligence (AI) and the literature review process: Reporting

Application of AI tools such as ChatGPT to searching and all aspects of the literature review process

Before you write a literature review report, be aware that using generative AI without permission is considered academic misconduct and could result in penalties. On this page we review the key AI guidelines for BCU students and the evidence of generative AI use in academic publishing.

Overview of BCU's AI Tools Guidelines

The student AI guidelines emphasise that you must submit your own original work and use AI tools only when explicitly permitted. Unauthorised AI use in assessments is considered academic misconduct, with potential penalties (see the academic misconduct procedure).

As a BCU student making decisions about artificial intelligence (AI), it is your responsibility to write responsibly. This means:

  1. Never ‘copy and paste’ directly from any sources, including AI tools (unless using a direct quotation, e.g. from a journal article or book, that is cited appropriately).
  2. Write your own work, using your own paraphrasing skills to represent source material in your own words, with citations.

Generative AI tools that 1) create sentences, 2) predict sentences, or 3) substantively rewrite your sentences (such as Grammarly's Generative AI assistance, Wordtune, Paperpal, etc.) are not permitted. This is because their use can make it difficult for your marker to know how well you have understood the content for yourself. Tools that write or rewrite sentences for you are also likely to be flagged by the AI detection tool and lead to a high AI score.

Therefore, it is much better to write in your own ‘voice’ in order to demonstrate your understanding of the ideas and sources you are presenting. Do not worry about writing in ‘perfect’ academic English. Please use the Centre for Academic Success’s support (workshops, tutorials, online resources) to develop your academic writing skills.


Evidence

Major publishers do not allow content generated by generative AI tools such as ChatGPT to appear in scientific articles. Their author guidance states this explicitly, usually along the lines of: AI tools must not be used to create, alter, or manipulate original research data and results.

Elsevier's policy, for example, states that "where authors use generative AI and AI-assisted technologies in the writing process, these technologies should only be used to improve readability and language of the work and not to replace key authoring tasks such as producing scientific, pedagogic, or medical insights, drawing scientific conclusions, or providing clinical recommendations."

Do check the guidance from any publisher to whom you might be submitting work. The peer review process and the journal submission process should identify any concerns.

Wiley, for example, when incorporating the 200 journals from Hindawi into its portfolio, discontinued 19 of these titles after having to retract over 11,300 articles over a two-year period because of concerns about unethical authorship practices, one of which was the use of AI in manuscript fabrication and image manipulation (Wiley, 2023).

However, there are numerous cases where content clearly generated by LLMs has appeared in the text of academic journals. A search of Google Scholar in September 2024 found 1,362 scientific articles containing text generated by ChatGPT (Strzelecki, 2025). Strzelecki presents a table of 89 articles from peer-reviewed journals published by the most prolific academic publishers, all of which appear in Web of Science and Scopus. Izquierdo-Condoy et al. (2024) give six examples from articles published in 2024, not all of which have since been retracted.