This is the second post on the topic of AI writers. The first post, on the nature and purpose of essay writing when AI can write almost as well as our students, is here.
Critical Literacy and Digital Texts
Critical literacy is an essential skill, both in the English classroom and beyond. It requires the analysis, synthesis, and interpretation of the texts students encounter.
In reality, students are now exposed to more digital text than print. If we broaden the definition of texts beyond the written word – as we should – then digital texts include websites, audio, and especially short-form videos like TikToks. TikTok overtook Google last year as the world's most visited web domain – knocking it off a fifteen-year perch and proving that videos are vital texts not just for entertainment, but also for information.
Some teachers and schools have started to explore TikTok and other digital texts in the classroom, for example by constructing lessons around how creators and influencers put their videos together, discussing framing, sound, music, audience, and purpose. These lessons are hard to manage in schools where the platforms are often simply blocked, as if that will stop students from accessing them. But teachers are persistent, turning TikTok-style lessons into drama activities, or making films with cameras and software that do not require access to the app. These lessons are a crucial part of the new literacies students must be prepared for.
But TikTok and video texts are not the only aspects of digital literacy teachers need to be across. With the rise of AI natural language processing (NLP) technologies, like the GPT-3 model I explored in the previous post, and 'deepfake' technologies that generate convincing audio and video, there is another element of digital literacy we need to consider: how do we teach students to be critically aware of what is producing the texts they consume?

Machine Bias
AI often relies on huge datasets to "learn", and many of these datasets carry inherent biases. Representation bias, for example, occurs when a dataset does not accurately reflect the population: collecting data via smartphone use under-represents groups less likely to own smartphones, including people experiencing economic disadvantage and older people. Texts online also skew towards white male perspectives, and historical bias arises when past data is used to drive new decisions.
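To make representation bias concrete, here is a minimal Python sketch – with entirely invented ownership rates, not real figures – of how collecting data through smartphones skews what a model would 'learn' about a population:

```python
import random

random.seed(0)

# A population of 100,000 people with ages spread evenly from 18 to 90.
population = [random.randint(18, 90) for _ in range(100_000)]

def owns_smartphone(age: int) -> bool:
    """Hypothetical ownership rates: younger people are far more likely to be sampled."""
    rate = 0.95 if age < 40 else 0.55 if age < 65 else 0.25
    return random.random() < rate

# "Collecting data via smartphone use" keeps only the people the app can see.
sample = [age for age in population if owns_smartphone(age)]

true_mean = sum(population) / len(population)
sampled_mean = sum(sample) / len(sample)
print(f"Real mean age: {true_mean:.1f}")
print(f"Mean age in the smartphone sample: {sampled_mean:.1f}")
# The sample skews years younger: a model trained on it "learns"
# a population in which older people are barely represented.
```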
This may not seem important, but it is worth noting that AI is already being used to "make decisions that affect whether a person is admitted into a school, authorized for a bank loan or accepted as a rental applicant." It is easy to see how bias in AI could lead to discrimination based on race, socioeconomic status, gender, or other factors. The same kind of bias that puts those decisions at risk also exists in AI-written texts, from advertising copy through to AI-written fiction.
While people are working on ways to minimise, remove, or algorithmically counteract bias, the first step is to educate students that the bias exists at all. Just as we guide students in the Literature classroom to explore texts through a post-structuralist lens, or from a feminist or Marxist perspective, we need to use the same tools of critical literacy to help students identify the potential sources of bias in machine-written texts.
How do we teach students to identify bias in machine writing?
While some of this may seem like unfamiliar and perhaps even intimidating territory, it’s nothing new. We have always taught students to identify the bias in texts – particularly media texts. For their part, media companies have a long history of obfuscating their own biases to present their arguments as legitimate.
The first step is to help students identify when something has been written by a machine, or "co-authored" by AI software and edited by a human. In journalism, AI is already ubiquitous. The Washington Post, for example, has been using its Heliograf software for a variety of applications since 2017, from reporting on high school football to producing audio updates on elections. Ethics (and in-house rules) dictate how, when, and where these organisations declare the use of AI in their publications. Because the technology may be proprietary, there can be additional barriers to getting to the source. Step one, then, is to at least be able to identify that the potential for bias exists. In this, digital critical literacy is no different from the skills we have been teaching students for years.
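On that first step: there is no foolproof way to prove a text is machine-written, but it helps to know roughly how detection tools work. Most rely on how statistically predictable the text is to a language model. The sketch below is a minimal illustration, assuming the open-source transformers and torch Python libraries – not any tool used by the publishers above – scoring a passage by perplexity: the lower and more 'predictable' the score, the more machine-like the prose.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small public language model; real detectors apply the same idea at larger scale.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' the model is by the text; machine-written prose
    tends to score lower (more predictable) than human writing."""
    encoded = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model report its own prediction loss.
        loss = model(**encoded, labels=encoded["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("The quarterback threw for three touchdowns in the fourth quarter."))
```

A low score is a hint, not proof – detectors of this kind are frequently wrong in both directions, which is itself a useful critical-literacy lesson for students.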
The second step is to encourage students to explore the source's potential biases. In the same way we would have students examine the political bias of The Herald Sun versus The Age, digital texts need to be approached with the assumption that authors are working within a particular frame of reference, with an audience and purpose in mind. If an AI writer or co-writer is identified, this means exploring, where possible, the dataset and parameters it operates under, as well as looking at the publication as a whole.
Teachers may not have the time to keep pace with the rapid technological change in the digital space. However, the tried and tested methods of teaching critical literacy will still help students navigate these texts.