Teaching AI Ethics: Truth and Academic Integrity

This is the third post in a series exploring the nine areas of AI ethics outlined in this original post. Each post goes into detail on the ethical concern and provides practical ways to discuss these issues in a variety of subject areas. For the previous post on Environmental concerns, click here.

The concept of “truth” is a significant ethical issue related to AI systems like ChatGPT. Since its launch in November 2022, there have been two main concerns: first, the likelihood of AI models generating false or fabricated content, and second, the potential for individuals to exploit them for dishonest purposes, including academic cheating and the intentional dissemination of false information.

In this post, I’ll explore both the AI tendency to fabricate information and the various ways in which humans might misuse the technology.

Here’s the original PDF infographic which covers all nine areas:

Synthetic mirages

The first of these concerns – commonly called “hallucination” – is the result of many different factors, including:

  1. Training data limitations: The models are trained on large datasets containing text from various sources, which may include inaccuracies, biases, or outdated information.
  2. Inability to verify facts: AI language models lack the ability to fact-check or verify information. They rely on patterns and associations found in their training data and may generate false information if the data contains inaccuracies.
  3. Ambiguity in prompts: If a user provides a vague prompt, the AI model might generate responses that are not accurate. The model tries to infer the user’s intent based on the given input, but it might fail to do so correctly.
  4. Over-optimising for fluency: AI models like GPT-4 are designed to generate human-like text, which can sometimes lead to them prioritising fluency over accuracy. As a result, the model may produce text that sounds plausible but is a hallucination.
  5. Lack of a ground truth: AI language models don’t possess a deep, grounded understanding of the world like humans do. They work from statistical patterns in data, which can lead to text that sounds right but is nonsensical or incorrect (see the sketch after this list).
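
To make this concrete, here is a deliberately over-simplified Python sketch of how a language model chooses its next word. The prompt, tokens, and probabilities are all invented for illustration; the point is that the model samples whatever continuation is statistically plausible, and nothing in that process checks whether the result is true.

```python
import random

# Invented next-token probabilities a model might assign after the prompt
# "The capital of Australia is". These numbers are made up for illustration.
next_token_probs = {
    "Canberra": 0.55,    # correct
    "Sydney": 0.30,      # plausible but wrong
    "Melbourne": 0.10,   # plausible but wrong
    "Vienna": 0.05,      # unlikely
}

def sample_next_token(probs: dict) -> str:
    """Sample the next token purely by probability.
    There is no fact-checking step anywhere in this process."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of Australia is", sample_next_token(next_token_probs))
# Almost half the time this toy "model" confidently states a falsehood,
# because plausibility is the only thing it optimises for.
```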

In addition to perpetuating biases and discrimination, AI hallucinations pose a genuine risk of causing harm. The convincingly fabricated information produced by these models can infiltrate media, academic research, and educational materials. An inattentive user might unintentionally incorporate this false content into essays, reports, or articles, where it can then be cited and reshared as fact.

Academic integrity in the age of AI

AI poses a potential threat to academic integrity, as tools like ChatGPT make it easy for students to generate content and submit it as their own. The ease of producing human-like text could tempt students to bypass the hard work of research and writing.

Since ChatGPT’s launch in November 2022, there has been a surge of media speculation about AI-fuelled academic dishonesty. Journalists and educators have raised concerns over the increasing difficulty of identifying AI-generated content and the possibility of it slipping through detection tools.

As we continue to develop and rely on AI systems, the responsibility falls on educators, institutions, and AI developers to create a culture that emphasises the importance of truth and the ethical use of technology. This may involve updating academic policies, providing training on AI ethics, or developing more effective tools to detect AI-generated content.

AI and Truth

‘Understanding Information Disorder’ by First Draft News, CC BY 4.0.

Misinformation (false information shared without the intent to cause harm), disinformation (false information created and shared deliberately to deceive), and mal-information (genuine information shared with the intent to cause harm) all spread through social media, news outlets, and word of mouth, often causing confusion and harm.

AI has become an accomplice in the viral spread of these types of information. Deepfakes – AI-manipulated videos and images – can deceive users with startling accuracy, making it harder to distinguish between fact and fiction. Platforms like TikTok have become particularly problematic for the spread of misinformation, with AI-powered recommendation algorithms creating “filter bubbles” that expose users only to information confirming their pre-existing beliefs, further amplifying false narratives.
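To make the “filter bubble” mechanism concrete, here is a deliberately simplified sketch of an engagement-driven feed. The topics, click counts, and scoring function are all invented and bear no relation to how any real platform ranks content; the point is that a feed ranked purely on predicted engagement keeps surfacing whatever the user already clicks on.

```python
# Everything here is invented for illustration: a toy feed-ranking function
# that optimises for predicted engagement, with no notion of accuracy at all.
past_clicks = {"claim-a": 9, "cooking": 1, "fact-check": 0}  # user's clicks by topic

candidates = [
    {"title": "Another post repeating claim A", "topic": "claim-a"},
    {"title": "Fact-check: claim A is false", "topic": "fact-check"},
    {"title": "A cooking video", "topic": "cooking"},
]

def predicted_engagement(post: dict) -> int:
    """Score a post by how often the user clicked this topic before.
    Whether the post is true never enters into the score."""
    return past_clicks.get(post["topic"], 0)

feed = sorted(candidates, key=predicted_engagement, reverse=True)
for post in feed:
    print(post["title"])
# The post repeating claim A tops the feed and the fact-check comes last:
# the "filter bubble" is engagement optimisation applied over and over.
```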

Case Study: Language models and the spread of fake news

Research conducted by Georgetown University, OpenAI, and Stanford Internet Observatory (SIO) highlights the dangers of large language models (LLMs) and the potential for them to manipulate public viewpoints.

LLMs are trained on vast amounts of textual data and can generate coherent, human-like text. They are commonly used for tasks such as content creation, language translation, and text summarisation because they can produce good-quality text at scale.

The concern is that LLMs can be used to produce fake news and impersonate real individuals or organisations. The researchers used the ABC model of disinformation to examine how LLMs can be misused. The model breaks disinformation down into three components: ‘A’ for ‘Actor’, the individuals or groups who create and broadcast the disinformation; ‘B’ for ‘Behaviour’, the strategies and techniques used to spread it; and ‘C’ for ‘Content’, the false or misleading material itself.

The research found that LLMs can be used to push false narratives and sway public opinion. Because LLMs can generate large amounts of text quickly, they can flood the internet with false information, making it difficult for people to distinguish what is true from what is false. Influence campaigns can also be scaled up at minimal cost, which makes the manipulation harder to detect.
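A rough back-of-envelope calculation shows why scale is the core concern. All of the figures below are hypothetical and do not come from the research; they simply illustrate how cheaply text can be generated in bulk.

```python
# Back-of-envelope arithmetic with entirely hypothetical numbers (none of them
# come from the study) to show why scale at minimal cost is the worry.
articles_per_day = 10_000
tokens_per_article = 800
dollars_per_million_tokens = 1.00  # a hypothetical API price

daily_cost = articles_per_day * tokens_per_article / 1_000_000 * dollars_per_million_tokens
print(f"{articles_per_day:,} machine-written articles per day for about ${daily_cost:.2f}")
```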

The researchers recommend paying careful attention to the type and source of news to avoid being misled, and that users and developers use these models ethically. While LLMs are not inherently malicious, they have the potential to be misused for manipulation and disinformation.

Check out the upcoming PD Teaching Writing in the Age of AI

Teaching AI Ethics

Each of these posts will expand on the original and offer a few suggestions for how and where AI ethics could be incorporated into your curriculum. Every suggestion comes with a resource or further reading, which may be an article, blog post, video, or academic paper.

The next post in this series will explore copyright and how AI text and image generation are prompting complex ethical and legal discussions about creativity. Join the mailing list for updates:


Got a comment, question, or feedback? Get in touch:

