This is the third post in a series exploring the nine areas of AI ethics outlined in this original post. Each post goes into detail on the ethical concern and provides practical ways to discuss these issues in a variety of subject areas. For the previous post on Environmental concerns, click here.
The concept of “truth” is a significant ethical issue related to AI systems like ChatGPT. Since its launch in November 2022, two main concerns have emerged: first, the tendency of AI models to generate false or fabricated content, and second, the potential for people to exploit them for dishonest purposes, including academic cheating and the deliberate spread of false information.
In this post, I’ll explore both the tendency of AI to fabricate information and the various ways in which humans might misuse the technology.
Here’s the original PDF infographic which covers all nine areas:
The first of these concerns – commonly called “hallucination” – is the result of many different factors, including:
- Training data limitations: The models are trained on large datasets containing text from various sources, which may include inaccuracies, biases, or outdated information.
- Inability to verify facts: AI language models lack the ability to fact-check or verify information. They rely on patterns and associations found in their training data and may generate false information if the data contains inaccuracies.
- Ambiguity in prompts: If a user provides a vague prompt, the AI model might generate responses that are not accurate. The model tries to infer the user’s intent based on the given input, but it might fail to do so correctly.
- Over-optimising for fluency: AI models like GPT-4 are designed to generate human-like text, which can sometimes lead to them prioritising fluency over accuracy. As a result, the model may produce text that sounds plausible but is a hallucination.
- Lack of a ground truth: AI language models don’t possess a deep, grounded understanding of the world like humans do. They work based on statistical patterns in data, which can sometimes lead to generating information that doesn’t make sense or is incorrect.
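That last point – statistical patterns without grounding – is easy to demonstrate in miniature. The toy bigram model below (a deliberately simplified sketch, not how ChatGPT actually works, and the tiny “corpus” is invented for illustration) learns only which word tends to follow which. It can recombine its training text into fluent-sounding sentences that were never actually stated, with no way of knowing whether they are true:

```python
import random

# A toy word-level bigram model: it learns only which word tends to
# follow which, with no notion of whether the output is true.
corpus = (
    "the moon orbits the earth . "
    "the earth orbits the sun . "
    "the sun is a star . "
    "the moon is a rock ."
).split()

# Count word-to-next-word transitions.
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start, length=8, seed=0):
    """Follow the statistical patterns, one word at a time."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(transitions.get(word, ["."]))
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

# Depending on the random draws, this can produce recombinations like
# "the sun is a rock ." – perfectly fluent, never stated in the corpus.
print(generate("the"))
```

Every word it emits comes from the training text, and every transition is statistically plausible – yet the result can still be false. That, in a nutshell, is a hallucination.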
In addition to perpetuating biases and discrimination, AI hallucinations pose a genuine risk of causing harm. The convincingly fabricated information produced by these models can infiltrate media, academic research, and educational materials. An inattentive user might unintentionally incorporate this false content into various contexts, creating further issues.
Academic integrity in the age of AI
AI has become a potential threat to academic integrity, as tools like ChatGPT make it easy for students to generate content for cheating. The ease of producing human-like text could tempt students to bypass the hard work of research and writing.
Since ChatGPT’s launch in November 2022, there has been a surge of media speculation about AI-fuelled academic dishonesty. Journalists and educators have raised concerns over the increasing difficulty of identifying AI-generated content and the possibility of it slipping through detection tools.
As we continue to develop and rely on AI systems, the responsibility falls on educators, institutions, and AI developers to create a culture that emphasises the importance of truth and the ethical use of technology. This may involve updating academic policies, providing training on AI ethics, or developing more effective tools to detect AI-generated content.
AI and Truth
Misinformation (false information shared without intent to harm), disinformation (false information spread deliberately), and mal-information (genuine information shared maliciously to cause harm) all circulate through social media, news outlets, and word of mouth, often causing confusion and harm.
AI has become an accomplice in the viral spread of these types of information. Deepfakes – AI-manipulated videos and images – can deceive users with startling accuracy, making it harder to distinguish between fact and fiction. Platforms like TikTok have proven fertile ground for misinformation, with AI-powered recommendation algorithms creating “filter bubbles” that expose users only to information confirming their pre-existing beliefs, further amplifying false narratives.
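The filter-bubble feedback loop can be sketched in a few lines of code. The simulation below is a toy model with invented numbers – no real platform’s algorithm works this simply – but it shows the core dynamic: a recommender that favours whatever the user has engaged with before quickly narrows the feed to one topic.

```python
import random

def simulate_feed(steps=200, bias=0.7, seed=42):
    """Toy filter-bubble loop. The recommender shows topic A with
    probability proportional to the user's past clicks on it; the
    simulated user clicks topic-A items with probability `bias` and
    topic-B items with probability 1 - bias."""
    random.seed(seed)
    clicks = {"A": 1, "B": 1}   # start with one pseudo-click each
    shown_a = 0
    for _ in range(steps):
        # Recommender: rich-get-richer choice based on click history.
        p_a = clicks["A"] / (clicks["A"] + clicks["B"])
        topic = "A" if random.random() < p_a else "B"
        shown_a += (topic == "A")
        # User: more likely to click content they already agree with.
        p_click = bias if topic == "A" else 1 - bias
        if random.random() < p_click:
            clicks[topic] += 1
    return shown_a / steps

print(simulate_feed())  # fraction of the feed devoted to topic A
```

Even a mild preference for one topic compounds over time: each click makes the recommender more likely to show that topic, which produces more clicks, and so on.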
Case Study: Language models and the spread of fake news
Research conducted by Georgetown University, OpenAI, and Stanford Internet Observatory (SIO) highlights the dangers of large language models (LLMs) and the potential for them to manipulate public viewpoints.
LLMs are trained on vast amounts of text and can generate fluent, human-like prose. They are commonly used for tasks such as content creation, language translation, and text summarisation because they can produce good-quality text at scale.
The concern is that LLMs can be used to produce fake news and impersonate real individuals or organisations. The researchers used the ABC model of disinformation to examine how LLMs can be misused. The model breaks down the elements that drive the spread of false information: ‘A’ refers to the ‘Actor’ – the individuals or groups who create and broadcast disinformation; ‘B’ stands for ‘Behaviour’ – the strategies used to spread it; and ‘C’ stands for ‘Content’ – the false information itself.
The research found that LLMs can be used to push a hoax agenda and negatively influence people. Because LLMs can generate large amounts of text quickly, they can flood the internet with false information, making it difficult for people to distinguish between what is true and what is false. Campaigns can also be scaled up at minimal cost, making manipulation harder to detect.
The researchers recommend paying careful attention to the type and source of news to avoid misuse, and that users and developers use these models ethically. While LLMs are not inherently malicious, they can be misused for manipulation and disinformation.
Teaching AI Ethics
Each of these posts will expand on the original and offer a few suggestions of how and where AI ethics could be incorporated into your curriculum. Every suggestion comes with a resource or further reading, which may be an article, blog post, video, or academic article.
- History: How might the use of AI impact historical research? How can we ensure academic integrity and prevent the spread of fake historical information?
- English: What role does AI play in the creation and dissemination of fake news, and how can we teach students to identify and combat it? How can we promote academic integrity in digital writing and communication?
- Mathematics: How can AI be used to detect errors in data analysis, and what ethical considerations must be taken into account?
- Computer Science: How can AI be used to detect plagiarism in programming and computer science projects, and how can we design ethical AI systems to support academic integrity?
- Science: How can AI be used to detect and prevent scientific misconduct and academic dishonesty, or what could go wrong?
- Media Studies: How can we teach students to critically evaluate and fact-check fake news, and how can we promote academic integrity in digital media and communication?
- Visual Arts: Is AI art actually art, or is it plagiarism? How can we promote integrity in digital art creation and distribution?
The next post in this series will explore copyright and how AI text and image generation are prompting complex ethical and legal discussions about creativity. Join the mailing list for updates:
Got a comment, question, or feedback? Get in touch: