Artificial Intelligence (AI) and the literature review process: Ethics
You will find that "there is relatively little variation regarding the main identified ethical issues" (Prem, 2023: 701). The basic principles that ethical AI should meet can be categorised under five headings. For the use of AI and the searching process, the ethical issues can be considered under two of these headings, where AI must be:
- Robust and secure:
  - Accuracy
  - Privacy and data protection
  - Reliability and reproducibility
  - Quality and integrity of the data
  - Safety and toxicity
  - Social impact
- Fair:
  - Avoidance of unfair bias
  - Copyright
  - Society and democracy
  - Auditability
Robust and secure
Accuracy
Evidence from chats with BCU's own students confirms that AI tools provide plausible but incorrect information. This "hallucination" effect is one of the key issues arising from their use. In Blanco-Gonzalez et al. (2023), only 6% of the references generated by ChatGPT were actually correct. Altmae et al. (2023) confirmed that the references currently provided by ChatGPT are unreliable and need extensive revision: only 2 of the 8 references provided by ChatGPT for their draft of a scientific article for Reproductive BioMedicine Online were correct. Do check the references created with an AI tool by using BCU Library search or Google Scholar to confirm their authenticity. Using plugins, which are available through the premium version of ChatGPT (requires subscription), can reduce, though not eliminate, the hallucination effects seen in the free version.
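One quick, programmatic way to carry out such a check is to test each DOI in an AI-generated reference list against a bibliographic database. The sketch below is illustrative only: it assumes the freely available Crossref REST API and the Python requests library, and the sample DOI is a placeholder.

```python
import requests

def check_doi(doi: str) -> bool:
    """Return True if Crossref holds a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return False  # no record found: the reference may be hallucinated
    record = resp.json()["message"]
    # Show the registered title so it can be compared with the AI-generated citation.
    titles = record.get("title") or ["<no title recorded>"]
    print(titles[0])
    return True

# Placeholder DOI copied from an AI-generated reference list.
print(check_doi("10.1000/xyz123"))
```

A DOI that resolves is not proof on its own: the registered title and authors still need to match the citation the AI tool produced.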
Privacy and data protection
The key privacy issues for a user of ChatGPT are the same as for any database provider, and transparency about the use of personal data is a key principle of the UK GDPR:
- How can I find out how my personal data is used by ChatGPT or any other AI tool?
- How can I exercise my data subject rights under the UK GDPR in connection with how my personal data is used?
- How can ChatGPT and other LLMs comply with the UK GDPR principles around data minimisation, accuracy and retention, when such a huge amount of personal data is used?
The Information Commissioner's Office (ICO) has provided guidance on AI and data protection but has not stated whether ChatGPT sits within the UK GDPR rules.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence, came into EU law on 1st August 2024. The regulations follow the risk-based approach of the 2019 Ethics Guidelines for Trustworthy AI developed by the High-Level Expert Group appointed by the European Commission. They lay down requirements for high-risk AI systems and the obligations of the relevant operators, and they lay down transparency obligations for certain AI systems. By August 2025, chatbots must comply with copyright law and fulfil transparency requirements such as sharing summaries of the data used to train the systems.
The regulations are being phased in gradually, but the detail in these regulations, in the General Data Protection Regulation (GDPR) and in the Digital Markets Act has already caused Meta to state that its new multimodal Llama AI model will not be released in Europe because of the regulatory environment. Meta has also paused training its AI models with posts from Facebook and Instagram users in the EU because of concerns it may violate privacy rules. The company violated Article 46(1) of the GDPR in 2023 by transferring personal data from the EU/EEA to the USA following the delivery of the CJEU's judgment in Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems.
Reliability and reproducibility
Repeating the same prompt will result in different queries being generated, and their effectiveness can differ (Wang et al., 2023). This is a fundamental issue for systematic literature reviews, where searches need to be repeatable, and it would need to be resolved if ChatGPT, for example, were to be used.
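Part of this variability comes from sampling: by default, LLM interfaces choose each word probabilistically, so two identical prompts can follow different paths. As a hedged illustration of how the variation can be reduced (not removed) when calling a model directly, the sketch below assumes the OpenAI Python client; the model name and prompt are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_search_query(topic: str) -> str:
    """Ask the model for a Boolean search string, minimising run-to-run variation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Write a Boolean search string for: {topic}"}],
        temperature=0,  # greedy decoding: suppresses random sampling
        seed=42,        # best-effort determinism; not guaranteed by the provider
    )
    return response.choices[0].message.content

print(generate_search_query("ethics of AI tools in systematic literature reviews"))
```

Even with these settings, providers do not guarantee identical outputs across model updates, so a systematic review would still need to record the exact prompt, model version and date of the search.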
Quality and integrity of the data
One of the criticisms of ChatGPT is that users of the free version are limited to training data that has not been updated since September 2021. Premium users have access to over 700 plugins created for use with ChatGPT. These plugins address the two most serious criticisms of ChatGPT. First, they give ChatGPT access to current data: using a plugin means it can draw on material produced more recently and so give users a better response. Second, because it is using material provided by a third party, it is far less likely to hallucinate its answers, although grounding in external sources does not remove errors entirely.
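The mechanism behind such plugins is retrieval augmentation: text fetched from a live, third-party source is placed into the prompt, so the model summarises supplied material rather than recalling (or inventing) it from training data. Below is a minimal sketch of that prompt construction, with a hypothetical retrieved passage standing in for what a plugin would fetch:

```python
def build_grounded_prompt(question: str, retrieved_passages: list[str]) -> str:
    """Embed retrieved text in the prompt so the answer is tied to named sources."""
    context = "\n\n".join(retrieved_passages)
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical passage a plugin might have fetched from a current web page.
prompt = build_grounded_prompt(
    "What transparency obligations does the EU AI Act place on chatbots?",
    ["Regulation (EU) 2024/1689 lays down transparency obligations for certain AI systems."],
)
print(prompt)
```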
Obtaining consent from participants is a key part of research ethics: you need to obtain consent when collecting data through your research. Informed consent is not possible, however, with AI-generated text.
Safety and toxicity
OpenAI improved the safety properties of GPT-4, reducing the rate of toxic responses from 6.48% with GPT-3.5 to 0.73% with GPT-4 (OpenAI, 2023).
Social impact
With ChatGPT and other AI tools, you are charged for access to premium content. Premium tiers, however, raise questions of access for all and equality of access.
Fair
Avoidance of unfair bias
The LLMs are trained on data, and that data may perpetuate existing biases, stereotypes and discrimination in society. AI tools learn by identifying patterns in existing data; these historical patterns are then used to create the output that you see. There are extensive examples of racist, sexist, homophobic and other discriminatory language making its way into large language models and then being generated as output. Lucy and Bamman (2021) found in their survey of fictional characters that female characters were more likely to be discussed in topics related to family, emotions and body parts, while male characters were more associated with politics, war, sports and crime. Their conclusion was that GPT-3 internally linked stereotypical contexts to gender: multiple gender stereotypes were found in the generated narratives, emerging even when prompts did not contain explicit gender cues. Abid et al. (2021) showed that, despite the prompt asking for anti-stereotype content, GPT-3 still provided an association of Muslims with violence.
Prompt engineering has also been used to mitigate the bias of large-scale LLMs in language generation, by designing additional prompts that guide the model to a fairer output without fine-tuning. For example, in an occupation recommendation task, the authors changed GPT-4's gender choice from a gendered third-person pronoun to "they/their" by adding the phrase "in an inclusive way" to the prompts (Bubeck et al., 2023).
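A minimal sketch of that technique follows; the wrapper function and example prompt are illustrative, not the exact prompts used by Bubeck et al.:

```python
def inclusive_prompt(prompt: str) -> str:
    """Append the mitigation phrase described above to steer the model to fairer output."""
    return prompt.rstrip(".") + ", in an inclusive way."

base = "Suggest a pronoun for the nurse described in this job advert"
print(inclusive_prompt(base))
# -> "Suggest a pronoun for the nurse described in this job advert, in an inclusive way."
```

No fine-tuning is involved: the only change is the extra instruction appended to the prompt itself.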
Copyright
Generative AI tools create text, images, music, video or code based on content whose source is not known but which has already been created by others and is therefore likely to be copyright-protected. If you are using that text, image or video to create new research or new learning material without crediting the source and/or paying for that source, there is a risk of infringing the rights of others.
BCU has several licence agreements which allow for copying of material, details of which are in the copyright guide. The Higher Education licence with the Copyright Licensing Agency (CLA), for example, allows the photocopying and scanning of licensed material and the copying, storage, communication and use of digital copies and digital material. At the same time it prohibits the copying of substantial amounts of licensed content and making licensed content available to others without permission. You need to understand what you are allowed to do under the terms and conditions of those licences. As the Jisc guidance on copyright law in the context of generative AI states "clarifying for those in education how CLA licensed materials, for example, can be used with Generative AI tools would be beneficial" (Kelly, 2024).
There are also copyright issues if you upload articles to AI tools for them to inspect and harvest information from. Publishers often retain rights over the use and distribution of articles and do not allow further reproduction which may include uploading content to AI tools.
Microsoft's agreement with Taylor & Francis, which runs until 2027, is an attempt by one LLM provider to improve the performance of its AI products such as Copilot (as reported in the Times Higher). The concern is that the chatbots will start to replicate writing to the extent that it appears to be plagiarism.
Current legal cases may provide some clarity on whether generative AI is actually copying the input used to train its models or whether no copying has occurred because the tool is finding patterns in the input which it has analysed to create new content guided by prompts provided by users.
Several lawsuits were filed in 2023 in the United States against AI tool providers for copyright infringement:
- In December 2023, The New York Times filed a claim (see Case 1-23cv-11195) that millions of its articles were used in ChatGPT's training without its permission. It claims that Microsoft and OpenAI gave New York Times content particular emphasis when building their LLMs, revealing a preference that recognizes the value of the content. It also alleges that ChatGPT will sometimes generate content verbatim from New York Times articles, which cannot be accessed without a subscription, seeking to free-ride on the newspaper's investment in its journalism to build substitutive products without permission or payment. It also provides the example of Bing AI producing results taken from a New York Times-owned website without linking to the article or including the referral links it uses to generate income.
- In September 2023, the Authors' Guild and 17 authors sued OpenAI (see Case 1-23cv-08292) for flagrant and harmful infringements of copyrights in written works of fiction, claiming that OpenAI copied their work wholesale without permission and then fed the copyrighted works into its LLMs, representing "systematic theft on a massive scale".
- In September 2023, five authors filed a lawsuit (see Case 3-23cv-04625) against OpenAI, claiming that their copyrighted works were used in datasets to train the GPT models powering its ChatGPT product. Their claim is that, when ChatGPT is prompted, it generates not only summaries but in-depth analyses of the themes present in their copyrighted works, which is only possible if the underlying GPT model was trained using those works.
- In August 2023, a lawsuit was filed by comedian Sarah Silverman and authors Richard Kadrey and Christopher Golden for alleged copyright infringement against both Meta Platforms (Facebook's parent company) and OpenAI. Meta and OpenAI moved to dismiss the authors' claims, with OpenAI responding that the lawsuit is "failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence".
- In June 2023, Mona Awad and Paul Tremblay became the first authors to sue OpenAI for breach of copyright law (see Case 3-23cv-03223), claiming that their copyrighted materials were ingested and used to train ChatGPT without their permission. Their evidence is that the chatbot generates very accurate summaries of their works, which it could not do without being trained on the copyrighted works themselves.
The U.S. Copyright Office (2023) is currently seeking views regarding the copyright issues raised by generative artificial intelligence. This study will gather information to analyse the current state of the law, identify unresolved issues, and evaluate potential areas for congressional action.
In the UK, the Government has been accused of avoiding taking sides on such a contentious topic as copyright because of ongoing legal cases. The House of Lords Select Committee on Communications and Digital, in its report on Generative AI and Large Language Models in February 2024, had recommended investment in high-quality licensed data repositories to encourage good practice. It had also made recommendations on the assumption that the Intellectual Property Office (IPO) would produce a voluntary working code, but the consultation outcome A Pro-innovation Approach to AI Regulation, published in February 2024, confirmed that agreement had not been reached (clause 29). The Government response to the House of Lords report highlighted its work on progressing a transparency mechanism for copyright holders, but the follow-up letter from Baroness Stowell of Beeston, chair of the Lords' Committee, states that the Government's record on copyright is "inadequate and deteriorating", expresses concern that "problematic business models are fast becoming entrenched and normalised", and calls for more action: "copyright law needs updating to ensure legal clarity".
The court cases in the United States may address legal questions such as: do I own the work created by the AI tool because I crafted the prompt? Can others use any material that I upload to generative AI tools? And, if I upload someone else's content to an AI tool, am I infringing copyright? Until then, if we want to engage with generative AI, as John Kelly advises in Jisc's guidance and advice on copyright law and practice in education, "we may have to live with a bit of risk and uncertainty".
Society and democracy
OpenAI's leadership have affirmed that at some point they will have to monetise ChatGPT, as the computing costs are "eye-watering" (Altman, 2022). There is concern that having to pay for access to AI tools will widen existing disparities in knowledge dissemination and scholarly publishing. This disparity may be partially offset by the introduction of AI features into Google's Search function in the UK (Vallance, 2024), though this is limited to a small number of logged-in UK users who see an AI-generated "overview" at the top of some search results. Google's Search Generative Experience has been available in the United States since May 2023 through Chrome and the Google app, but only to those users who signed up via Google Labs (Reid, 2023). The disparity will not be offset, however, if the Financial Times report of 3rd April 2024, that Google is considering charging for its AI-powered search, proves true; Google has denied this (Dempsey, 2024).
Jisc's AI in Tertiary Education raises the issue of digital inequality but highlights that there are currently limited options for licensing these tools institutionally; Jisc does expect this to change. The equity objective of the Ethical Framework for AI in Education (Institute for Ethical AI in Education, 2020) contains three criteria and states that AI systems should be used in ways that promote equity between different groups of learners, not in ways that discriminate against any group of learners.
The Department for Education issued a call for evidence on generative AI in education in the summer of 2023. In the November 2023 summary of responses, the Department recognised that pupils and students need a certain level of AI literacy: to harness AI's potential, they need the subject knowledge to ensure that the AI tool is presented with the right information and to make sense of the results it generates. "GenAI tools can make certain written tasks quicker and easier but cannot replace the judgement and deep subject knowledge of a human expert" (Department for Education, 2023: 5-6). In certain courses in the university, the use of generative AI tools has been embedded into course content to teach students how to use and apply them, but for the majority of students this is not the case. AI literacy therefore represents a training opportunity for our students, especially as AI tools are expected to have a major impact on future workforce skill requirements. In Online Nation 2023, 74% of online 16-24 year olds reported they had used a generative AI tool (Ofcom, 2023); a quarter of these had used it to help with their studies.
Auditability
Transparency and accountability are essential in mitigating the ethical concerns raised by using AI-generated text. The Concordat to Support Research Integrity (Universities UK, 2019) contains five elements to support research integrity, and AI tools are often at odds with these elements:
- Honesty in all aspects of research, including gathering data and using and acknowledging the work of other researchers;
- Rigour, in line with prevailing disciplinary norms and standards: in performing research and using appropriate methods, and in communicating the results;
- Transparency and open communication in the reporting of research data collection methods and in the analysis and interpretation of data;
- Care and respect for all participants in research, and for the subjects, users and beneficiaries of research;
- Accountability of funders, employers and researchers to collectively create a research environment in which individuals and organisations are empowered and enabled to own the research process. Those engaged with research must also ensure that individuals and organisations are held to account when behaviour falls short of the standards set by this concordat.