Subject guides

AI: AI Limitations

A guide to using AI in your academic studies.

Thinking critically

AI can be a very useful tool for your studies and assignments, but it is not a shortcut. As with anything, it's important to think critically about what you are using and the information it gives you. Some tips to get you started include:

  • Why are you using AI?

AI can help you in several ways: summarising articles, finding useful links between articles, and helping with writing and grammar. However, it should not be used to actually generate work for you. Use it as a starting point for ideas, to generate talking points and themes that you can then explore and research in more detail.

  • What tool are you using?

There are different AI tools that do different things: some generate text, some generate images, and so on. There are also different versions of the same sort of tool, for example ChatGPT and Google Bard. So consider not only whether you're using the right type of tool for what you need, but also the right program. You can research things like the right way to use it, what its strengths are, how it compares to alternatives, and any potential limitations. For example, ChatGPT currently only has access to information up to January 2022, whereas Google Bard runs to the present day.

This video from students at University College London offers real-world perspectives on the use and limitations of AI for study, and on how to use it critically.

Limited knowledge

AI is constantly improving and updating, but it does come with limitations. Examples include:

  • Prompt-dependent - it can be tricky to find the right prompt to get the answer you need when using chatbots. They do not ask qualifying questions to confirm meaning, and slight variations in wording can lead to very different results.
  • Questionable data - generative AI is limited by the information put into it, which it harvests to generate results. The information it gives out may therefore be out of date, biased, untrustworthy or otherwise inaccurate, depending on the data it is working with.
  • Lack of human understanding - AI can struggle to grasp culture, emotion, context and nuance that a human might grasp more easily. So while it can give helpful pointers when summarising an article, it may miss something a human would not. Make sure to read the article yourself as well!

Bias

AI is heavily dependent on the data it is fed to generate results; these tools are essentially trained on existing text, images and other material that appears online. If that material contains existing bias, such as sexist, racist, homophobic, xenophobic or political content, that bias may be reproduced in the final results.

Fake responses and hallucinations

A 'hallucination' in AI terms is "a plausible but false or misleading response generated by an artificial intelligence algorithm" (Merriam-Webster).

This means that even though the information or text generated may sound plausible, it can be misleading or simply wrong.

One of the main ways we see this at the moment is the creation of false references. LLMs such as ChatGPT, when asked to write academic work, often simply invent citations and references that don't exist in the real world. These will be picked up by tutors and Turnitin.

  • Double-check ANY facts generated by an AI tool.
  • Check any references generated or suggested through the library catalogue or Google Scholar to ensure they are genuine. If there is any doubt at all, contact us at library.hud.ac.uk