Current generative AI tools pose a range of risks: they are new and not yet well understood, with surprising capabilities and behaviours. Below is an outline of the key limitations and concerns surrounding AI, and their implications for academic study and research.
Producing inaccurate or false information
Because generative AI tools work statistically, producing one word or word fragment at a time, they can generate false information – for example, fictional news stories or, in academic texts, made-up citations – and present it in a confident, factual tone. This behaviour is known as ‘hallucination’. For this reason, these tools cannot be relied on for truthful and accurate information. Their confident tone makes the output believable, but you should always question it: we must verify AI outputs by independently checking other reliable sources.
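To illustrate the word-at-a-time idea, here is a minimal toy sketch (not how any real product is implemented, and with invented words and probabilities): each next word is drawn at random from frequencies learned over earlier text, so the result can sound fluent while nothing ever checks whether it is true.

```python
import random

# Toy "language model": for each two-word context, the probability of
# each possible next word. These values are invented for illustration.
next_word_probs = {
    ("the", "study"): {"found": 0.5, "showed": 0.3, "claims": 0.2},
    ("study", "found"): {"that": 0.9, "significant": 0.1},
}

def sample_next(context, probs):
    """Pick the next word at random, weighted by learned frequencies."""
    choices = probs[context]
    words = list(choices)
    weights = [choices[w] for w in words]
    return random.choices(words, weights=weights)[0]

text = ["the", "study"]
for _ in range(2):
    context = (text[-2], text[-1])
    if context not in next_word_probs:
        break  # no statistics for this context; a real model never stops here
    text.append(sample_next(context, next_word_probs))

# The sentence is statistically plausible, but no step verified any fact.
print(" ".join(text))
```

The key point of the sketch is that every step is a weighted random choice over word frequencies: plausibility, not truth, drives the output.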
Unidentifiable sources
Generative AI tools do not cite their evidence sources. In fact, what they do is closer to mashing together content they have encountered anywhere in their training dataset. This lack of source traceability raises serious questions around academic and research integrity: how can we reference and credit the author, and how can we assess the quality of the evidence used, or challenge it?
Data bias and misrepresentation
Generative AI outputs are not neutral. Their algorithms tend to reproduce the classifications, assumptions, and biases that exist within their training dataset. This means that minority voices and perspectives are likely to be underrepresented in generative AI outputs, which can obscure real-world diversity. Overreliance on AI may therefore hold us back from reaching nuanced understandings of complex problems.
Misalignment
Misalignment refers to a mismatch between the AI output and the intent of the user. In some cases the prompt is not followed to the letter, or different outputs for the same prompt may diverge significantly from one another. Because AI is unreliable and inconsistent in this way, it is up to us to use our knowledge and apply judgement to decide whether the outputs are appropriate, and which one is best!
Copyright and intellectual property
Copyright issues have been raised around the rights of individuals and creators whose products of expression were used to train the Generative AI models. These authors, artists and designers are uncredited and unpaid. How much of your internet footprint do you think may have ended up in training ChatGPT? Currently, the ethics of data use for AI training remain contested.
Authorship and attribution
AI generated outputs have raised questions about authorship (who is the author or creator?) and new potential areas of plagiarism. In the context of studying for a higher education degree, work submitted to demonstrate knowledge and skill must be one’s own – any use of Generative AI for assessment purposes must comply with institutional rules.
Privacy
Generative AI tools are being scrutinised with regards to privacy issues and data protection laws. This is because there is no way to predict which information they may disclose. We must bear in mind that we don’t have any control over what happens to any information we provide to Generative AI tools like ChatGPT – so be careful with what you share and protect your personal information.
Reinforcing harmful stereotypes
AI models are trained on existing text or images found online. Like any algorithm, they are likely to repeat the predominant patterns in their training dataset. They may, therefore, reproduce biased ideas that have a prolific presence online, such as misogyny, racism, homophobia, or xenophobia.
Sustainability
Developing and maintaining these tools carries a significant cost in energy and computing resources. The infrastructure cost of operating these AI systems is being questioned in view of other priorities, such as global energy needs and the climate crisis.
Magnifying inequalities
There are questions about who is accessing these AI systems and where, who will have access in future, and who will benefit the most from their deployment and use. For example, there are already subscription-based products with advanced features compared to their open-access alternatives.
Learning
Generative AI tools have been marketed as assistants for achieving complex tasks and facilitating learning. However, as they are set up to provide ready and easy answers, they tend to draw attention to the product and not the process of learning; therefore, overreliance on these tools could disrupt independent research and learning by discovery, which is key in Higher Education. As a student, you should prioritise your independent research!
Agency and critical thinking
Potential applications of Generative AI include automation of tasks that require judgement. But to what extent should we allow AI to make decisions? This could reduce our own agency and critical ability to evaluate options and make responsible choices, and may incur the additional risk of introducing biases in decision making that go undetected. We should value, therefore, the development of our own critical thinking first.
Pace of change
Advancements in Generative AI are happening at an extremely rapid pace, which makes it difficult to keep up with the ever-changing landscape of what is possible. It can therefore be challenging to develop confidence in one’s understanding and ability to use these tools.
Time cost
Although these tools promise to save a lot of time, interacting with generative AI may in fact be time-consuming in other ways: it can involve a lot of trial and error to find the best prompt or the best output, and exploration of ever-expanding possibilities and directions. In addition, we must spend time on independent research to verify its factual statements!
Accountability
Although it is possible for generative AI tools to be misleading or outright wrong, they are not accountable for these mistakes. Accountability, including legal liability, remains with the user! We must, therefore, always verify the content of any output before we use that output to make decisions or influence others.
Fraud
The release of Generative AI tools to the public has raised concerns about increased potential for misleading or fraudulent activity. Public discourse could become inundated with AI-generated content, or even bots impersonating humans. In addition, the potential to create ever more realistic deepfakes may erode trust in truth and evidence, and poses challenges for authenticating content. We must, therefore, be aware of these risks, especially when accessing information online, and always assess the reliability of our sources.