Artificial Intelligence (AI) and the literature review process: Prompt engineering

Application of AI tools such as ChatGPT to searching and all aspects of the literature review process

The term "prompt engineering" refers to the technique of crafting effective instructions to interact with Large Language Models (LLMs).
Using the Open AI strategies, the CLEAR framework, Anthropic's guidelines and examples from prompt libraries, you can dramatically improve the output of generative AI tools.

OpenAI strategies

OpenAI has published six strategies for getting better results as part of its series on GPT best practices. They were written with its most recent subscription-based model in mind rather than the free version, and OpenAI encourages you to experiment to find the methods that work best for you.

For instance, you can improve your ChatGPT results by introducing yourself in the Custom Instructions section of your profile, answering the two questions: 

  • What would you like ChatGPT to know about you to provide better responses?
  • How would you like ChatGPT to respond?

These are the strategies with the accompanying tactics:

 

Write clear instructions

  • Include details of what you want it to do and give context about how you want the response to look.
  • Ask the model to adopt a persona.
  • Ask for the reply to be in a certain style such as professional or light-hearted.
  • Ask for the reply to be in a specific format, such as flowchart, presentation, report, thank you note etc.
  • Use delimiters such as triple quotation marks, XML tags or section titles to identify the sections of text in the prompt that should be treated differently. The more complex the task, the more important it is to provide clear task details.
  • Specify the steps required to complete a task: writing the steps out in the format Step 1, Step 2 ... can make it easier for the model to follow them.
  • Provide examples.
  • Specify the desired length of the output: this works best with a specific number of paragraphs or bullet points, whereas a word count is more approximate.

 

Example prompts:

As a librarian, create a report in a list format for librarians that outlines the top 10 AI tools that can be used for literature searching.

 

As a journalist, write a blog post to inform healthcare professionals about the practice of telemedicine. Make it light-hearted.

 

Give me famous painters in this format: name, nationality, known for, paintings, style. Put this in a table and include their dates of birth.

 

Act as my nutritionist and give me tips about a balanced Mediterranean diet.
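
The same tactics carry over if you work with a model through its API rather than the chat window. The following minimal sketch uses OpenAI's Python library (openai, v1.x); the model name, the article text and the exact wording are placeholders and assumptions, not a fixed recipe.

# A sketch of the "write clear instructions" tactics (persona, format, delimiters, steps)
# applied through the OpenAI Python library. Assumes the openai package (v1.x) is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

article_text = "..."  # placeholder: paste the text you want summarised here

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: use whichever chat model you have access to
    messages=[
        # Persona, style and format are set up front.
        {"role": "system",
         "content": "You are a librarian. Reply in a professional style, as a report in list format."},
        # Delimiters (triple quotes) separate the instructions from the text to be processed,
        # and the steps are written out so the model can follow them in order.
        {"role": "user",
         "content": (
             "Step 1: Summarise the article delimited by triple quotes in 3 bullet points.\n"
             "Step 2: Suggest 2 follow-up search terms.\n"
             f'"""{article_text}"""'
         )},
    ],
)

print(response.choices[0].message.content)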

 

Provide reference text

  • Tell it to answer using a reference text, perhaps a book or an article. This feature may only be available on subscription.
  • Tell it to answer with citations from a reference text.

 

Example prompts:

Use the provided articles delimited by triple quotes to answer questions. If the answer cannot be found in the articles, write "I could not find an answer."

 

Answer the question using only the provided document and cite the passage(s) of the document used to answer the question.
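
If you are scripting this rather than typing into the chat window, the reference-text pattern looks roughly like the sketch below, again using OpenAI's Python library; the article, the question and the model name are placeholders.

# A sketch of the "provide reference text" tactic: the model is told to answer only from
# the supplied article, to cite the passages it used, and to admit when the answer is not there.
from openai import OpenAI

client = OpenAI()

article = "..."                            # placeholder: the full text of the reference article
question = "What population was studied?"  # placeholder question

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute the model you have access to
    messages=[
        {"role": "system",
         "content": 'Use the provided article delimited by triple quotes to answer the question. '
                    'Cite the passage(s) you used. If the answer cannot be found in the article, '
                    'write "I could not find an answer."'},
        {"role": "user", "content": f'"""{article}"""\n\nQuestion: {question}'},
    ],
)

print(response.choices[0].message.content)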

 

Split complex tasks into simpler sub-tasks

  • Classify the type of query and ask the tool to decide into which classification your query falls.
  • Break down a complex task into smaller, more manageable tasks. For example, use a sequence of queries to summarize each section of a large document, such as a book.

 

Example prompts:

You will be provided with customer service queries. Classify each query into a primary category and a secondary category. Primary categories: Billing, Technical Support, Account Management, or General Inquiry. Billing secondary categories: - Unsubscribe or upgrade - Add a payment method - Explanation for charge - Dispute a charge.

 

Here are the queries: my laptop is not working; I disagree with the invoice amount; my refund is not showing on my bill; Why have I been charged VAT on this product?; my document has saved to OneDrive but I don't know where?
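
As a rough illustration of splitting a task into sub-tasks in code, the sketch below first asks the model to classify a query and then sends a second, category-specific request. The categories mirror the example above; the model name and wording are assumptions.

# A sketch of splitting a complex task into simpler sub-tasks: classify first, then answer.
from openai import OpenAI

client = OpenAI()

query = "I disagree with the invoice amount"  # one of the example queries above

# Sub-task 1: classify the query into a primary category.
category = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[
        {"role": "system",
         "content": "Classify the customer service query into exactly one primary category: "
                    "Billing, Technical Support, Account Management, or General Inquiry. "
                    "Reply with the category name only."},
        {"role": "user", "content": query},
    ],
).choices[0].message.content.strip()

# Sub-task 2: answer the query using instructions specific to that category.
answer = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[
        {"role": "system",
         "content": f"You are handling a {category} query. "
                    "Give the customer a short, step-by-step reply."},
        {"role": "user", "content": query},
    ],
).choices[0].message.content

print(category)
print(answer)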

 

Give GPTs time to think

  • Instruct the model to work out its own solution before rushing to a conclusion.
  • Where a source document is large, an AI tool can stop too early, thereby not listing everything relevant. You can often get better performance by prompting the tool to check whether it missed anything else of relevance.

 

Example prompts:

First work out your own solution to the problem. Then compare your solution to the student's solution and evaluate if the student's solution is correct or not. Don't decide if the student's solution is correct until you have done the problem yourself.

 

Are there more relevant excerpts? Take care not to repeat excerpts. Also ensure that excerpts contain all relevant context needed to interpret them - in other words don't extract small snippets that are missing important context.

 

Ask it for the chain of reasoning that led it to the solution. This will allow the tool to work out a better response.
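
In code, giving the model time to think is mostly a matter of wording the instructions so that the reasoning comes before the verdict, as in this sketch; the problem, the student's solution and the model name are placeholders.

# A sketch of the "give the model time to think" tactic: the instructions force the model
# to produce its own worked solution before it judges the student's answer.
from openai import OpenAI

client = OpenAI()

problem = "..."           # placeholder: the problem statement
student_solution = "..."  # placeholder: the student's attempt

response = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[
        {"role": "system",
         "content": "First work out your own solution to the problem, showing your reasoning. "
                    "Only then compare it to the student's solution and state whether the student "
                    "is correct. Do not decide until you have solved the problem yourself."},
        {"role": "user",
         "content": f"Problem:\n{problem}\n\nStudent's solution:\n{student_solution}"},
    ],
)

print(response.choices[0].message.content)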

 

Use external tools

  • Give the AI tool an example as part of the prompt.
  • If an AI tool is instructed in the proper use of an API, it can write code that makes use of it. You can do this by providing it with documentation and/or code samples, as in the sketch below.
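
One way to picture that second point: paste the documentation (or a code sample) for the API you want used into the prompt and ask the model to write code against it. Everything in the sketch below, including the catalogue-search API being documented, is hypothetical.

# A sketch of instructing a model in the use of an API by pasting its documentation into the prompt.
# The "catalogue search" API documented here is entirely made up for illustration.
from openai import OpenAI

client = OpenAI()

api_docs = """
GET https://example.org/api/v1/search   (hypothetical endpoint)
Query parameters:
  q     - search terms (required)
  limit - maximum number of records to return (default 10)
Returns JSON: {"results": [{"title": ..., "author": ..., "year": ...}]}
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[
        {"role": "system",
         "content": "Write a short, well-commented Python function that calls the API described "
                    "in the documentation delimited by triple quotes."},
        {"role": "user",
         "content": f'"""{api_docs}"""\n\n'
                    "The function should search for a given set of keywords and return a list of titles."},
    ],
)

print(response.choices[0].message.content)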

 

Test changes systematically

  • Evaluate the outputs from AI tools by comparing them with best-practice answers; a minimal version of this is sketched below.
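
A very small-scale version of this is to keep a handful of test questions with known good answers and re-run them whenever you change a prompt. The sketch below only checks whether expected keywords appear in the output, which is a crude stand-in for proper evaluation; the prompt, the test case and the model name are all placeholders.

# A sketch of testing prompt changes systematically against a few gold-standard cases.
# Keyword matching is deliberately crude; real evaluations are usually more thorough
# (human review, side-by-side comparisons, model-graded scoring, etc.).
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "You are a librarian. Answer briefly and factually."  # the prompt under test

test_cases = [  # placeholder gold-standard cases
    {"question": "What does PICO stand for in evidence-based practice?",
     "expected_keywords": ["Population", "Intervention", "Comparison", "Outcome"]},
]

for case in test_cases:
    output = client.chat.completions.create(
        model="gpt-4o",  # assumption
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": case["question"]},
        ],
    ).choices[0].message.content
    found = sum(kw.lower() in output.lower() for kw in case["expected_keywords"])
    print(f"{case['question']}: {found}/{len(case['expected_keywords'])} expected keywords found")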

 

 

The CLEAR framework

Several frameworks have been proposed for producing effective prompts, and they follow similar lines to the guidelines above. One of them is the CLEAR framework (Lo, 2023):

C for Concise

Remove all superfluous language, such as "Please would you tell me...?". Keep it clear and concise.

Example:

Use 2-3 sentences to explain the concept of prompt engineering to a first-year university student.

 

L for Logical

Keep a logical flow of ideas within the prompt: make sure it follows a natural progression.

Example:

  • List in bullet points the steps to writing a systematic literature review.

Further tips:

  • You are not limited in the number of words: you can give ChatGPT as much information as you like, even pages of prompts.
  • Don't use punctuation other than a colon or a dash. If in doubt, ask ChatGPT itself.
  • Make sure that black and white, and male and female, are equally represented.

 

E for Explicit

Be explicit about the output you want in terms of format, content and style. The type of output can include articles, bullet points, a flowchart, tables, poems etc.

Example:

Deliver your response as a 500-word article with a headline and a conclusion. 

 

Give it an example. In the paid version of ChatGPT, you can upload a PDF and ask it to give you a summary.

Examples:

 Find other articles like this one.

 

 Give me a history of this building.

 

 Here are my ingredients, what can I cook for dinner?

 

A for Adaptive

Experiment with various prompts and adapt them to strike a balance between creativity and focus.

 

R for Reflective

Reflect on what the AI tool has supplied and evaluate the relevance and accuracy of the response. If it does not meet your needs, go back and ask again for what you really want; you do not have to get it right first time. You are having a conversation, a chat, with the tool. Apply the lessons learned to future prompts.

As Dave Birss advises in the March 2023 LinkedIn Learning course, How to research and write using generative AI tools:

It's important that you don't make the mistake of thinking of AI in the same way as a search engine. With a search engine, you put in a query and it returns results. With AI, it's more of a journey. You need to refine your request to get closer to what you were after. It's a back and forth. It's a conversation. That's why ChatGPT starts with the word chat. 

Anthropic's prompt engineering guidelines

In its prompt engineering user guide, the creator of Claude recommends the following techniques:

 

 

  • Be clear & direct: Provide clear instructions and context to guide Claude's responses
  • Use examples: Include examples in your prompts to illustrate the desired output format or style
  • Give Claude a role: Prime Claude to inhabit a specific role (like that of an expert) in order to increase performance for your use case
  • Use XML tags: Incorporate XML tags to structure prompts and responses for greater clarity (XML tags and response prefilling are combined in the code sketch after this list)
  • Chain prompts: Divide complex tasks into smaller, manageable steps for better results
  • Let Claude think: Encourage step-by-step thinking to improve the quality of Claude's output
  • Prefill Claude's response: Start Claude's response with a few words to guide its output in the desired direction
  • Control output format: Specify the desired output format to ensure consistency and readability
  • Ask Claude for rewrites: Request revisions based on a rubric to get Claude to iterate and improve its output
  • Long context window tips: Optimize prompts that take advantage of Claude's longer context windows
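
For readers calling Claude through its API, the sketch below combines two of these techniques, XML tags and prefilling the response, using Anthropic's Python library (anthropic); the model name and the document text are placeholders.

# A sketch combining two of Anthropic's techniques: XML tags to structure the prompt,
# and prefilling the start of Claude's reply to steer the output format.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()

document = "..."  # placeholder: the text you want summarised

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: use whichever Claude model you have access to
    max_tokens=500,
    system="You are a librarian summarising documents for researchers.",
    messages=[
        # XML tags mark out the document and the instructions so Claude can tell them apart.
        {"role": "user",
         "content": f"<document>{document}</document>\n"
                    "<instructions>Summarise the document in three bullet points.</instructions>"},
        # Prefilling: starting the assistant turn nudges the reply into the desired format.
        {"role": "assistant", "content": "Here are three bullet points:\n-"},
    ],
)

print(message.content[0].text)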

 

Prompt engineering guide

 

 

Prompt libraries

The creators of large language models (LLMs) and the AI user community maintain libraries of example prompts. Here are some selected prompts that you can customise to your requirements.

OpenAI's prompt examples

Keep in mind that these examples were designed for OpenAI models; other AI assistants might respond differently.

OpenAI prompt examples