This post is the first in a series of Q&A interviews with educators working with generative AI. These posts will explore K-12 and tertiary perspectives of teachers, academics, and professionals who are grappling with the implications of these new technologies.
Anna Mills is a community college writing teacher, open textbook author, and an advocate for critical AI literacy, Open Educational Resources, and social annotation in writing instruction. Anna also curates the incredible collection AI Text Generators: Sources to Stimulate Discussions Among Teachers, which is a living, breathing document filled with resources for educators.
Anna and I have had a few conversations about the need to balance the critical and creative aspects of generative AI, and what we can do in education to share resources and help one another. Anna is also helping to turn some of my content, including the Teaching AI Ethics series, into an Open Educational Resource.
How do you navigate the fine line between the critical, pragmatic, and playful aspects when teaching about AI and with AI?
Well, I have impulses in all three directions, and I do like the Walt Whitman quote “Do I contradict myself? Very well then, I contradict myself. (I am large, I contain multitudes.)” I grew up in Silicon Valley sharing a love of math and coding with my Dad, so for part of me it’s just a lot of fun to work with language models, and I want to share that. Critiquing the workings of power is important to me, so I gravitate toward questioning AI hype and looking at AI in societal context. And I have a certain impatient, practical nature where I want to keep discussions focused and see what can be done about a problem.
I certainly don’t always know what to emphasize or which approach is right at which time. Watching my own process and giving myself permission to change my mind helps. I’ve done a lot of self-reflection through therapy, so it feels natural to question my own arguments and impulses around AI. I try to cultivate friendships where I can test ideas around AI in education in conversation with people who don’t necessarily agree; Lauren Goodlad of the Critical AI Initiative at Rutgers has been an important mentor in that respect. I suppose I also have experience trying to balance multiple goals and methods in teaching and parenting.
It’s energizing and also overwhelming trying to engage with the issue from all these angles. I do think it helps to remember it’s a collaborative effort. It feels better when we can put ideas out there quickly and informally, be open and keep revising our thinking. And it feels great when I stumble on ways to combine play, critique, and pragmatism, like prompting the models in order to teach about their limitations.
How do you address the ethical concerns surrounding the use of AI, especially AI text generators, in a higher education setting?
I’m trying not to delude myself that I do an adequate job of this. There’s a lot of lip service to ethical AI, but these systems have not been created in ways that are ethical or legal. And yet here they are, and they are useful. My approach is to teach critical perspectives on AI: even when I share with students or other teachers that I enjoy working with these systems, I emphasize bias, inaccuracy, and fabrication. I don’t feel like I have figured out how to bring in questions of labor, copyright, or environmental impact. I do encourage teachers and students to join the larger societal discussion about policy, in part through my work on the MLA/CCCC Task Force on Writing and AI, which sometimes submits public comment to government bodies. There’s a lot of energy and interest around regulating AI, and a lot of need for educators and the general public to get involved and push for this so we can do better at shaping these systems to serve the right goals.
What are some practical applications of AI you introduce to your students to enhance their writing and critical thinking skills?
At this point I’m focusing on teaching students to reflect critically on AI writing feedback. I know a language model can be used in the writing process in so many ways, but I’m concerned that when we bring it in for brainstorming and outlines and let it revise drafts, students’ awareness of their own ideas, their own decisions, and their own voices gets blurry. I want to make sure that students who are not yet confident or experienced get a chance to develop that sense of their writing as their own. With this goal, I’ve been advising on an app called myessayfeedback.ai that situates the AI feedback in the context of instructor supervision and instructor and peer feedback, the human audience that makes writing meaningful. The app encourages students to reflect on whether or not the feedback seems relevant, accurate, and in line with their purpose. This is similar to the way I’ve taught Grammarly use for years: I show students examples of misguided suggestions from the system even as I suggest they use it. Ideally, this helps build a healthy skepticism about AI suggestions in general.
Can you elaborate on your collated collection and how it serves as a resource for educators?
It’s a collection of articles and resources related to AI and education divided into categories. There are so many resource lists now, but I started this one through the Writing Across the Curriculum Clearinghouse in September 2022, so many people were glad to turn to it when ChatGPT came out. Because I chose to update it frequently in a Google Doc and allow visitors to comment and suggest, it became a kind of representation of a community of educators responding together to a rapidly evolving landscape of AI. That’s what I’m most proud of about it, and I got to explore that informal, agile, collaborative approach in a paper with Maha Bali and Lance Eaton where we cataloged open educational practices for reckoning with AI. I continue to update the list all the time, and it is still getting significant traffic. It’s important to me that it works against polarization in the discourse around AI by being inclusive of enthusiastic reactions as well as extremely critical ones.
How do you believe AI writing tools impact academic integrity, and how do you address these challenges in your teaching?
I want to see a collaborative approach that gives students plenty of chances to learn and have their grades reflect that learning. I don’t want students worrying that an unreliable and possibly biased AI detection system will falsely flag their words as AI. I’m not using AI detection software, but I am doing a number of things at once to deter misuse.
As a first step, we need to make policies about AI use explicit and specific to the variety of possible uses. Probably the most important things we can do to reduce misuse, though, are existing best practices in writing instruction: we can make our writing assignments as intrinsically motivating as possible, talk about how writing helps us think, encourage reflection on the writing process, assign multiple steps in that process, and support writers by building relationships and giving instructor and peer feedback.
At the same time, I also think we need accountability. Even with all the measures above, I’ve had students turn in AI work this semester without acknowledging it. So I’m asking students to do more formative writing in class to get them started on their essays, and I’m experimenting with apps that provide transparency about the time spent creating a document, the revision history, and the copy/paste history. I think it makes sense to also explore proctored “distraction-free” writing spaces on campus so that “in-class” writing doesn’t need to take over class time.
As generative AI is integrated into apps like Word and Google Docs, I hope educators can advocate for standards for AI integration that make it easy for both the writer and an instructor to see what moves the writer made and what role language model assistance played.
What do you envision as the future of AI in education, particularly in writing and language classrooms, and how do you plan to contribute to this evolving landscape?
I think we’ll see increasing use of text generation in writing and language classrooms, but I don’t think the academic integrity questions will fade away. We’ll likely see a suite of strategies for establishing guardrails and transparency around AI use so that we can still design for and assess learning. I’m looking forward to creating more student-facing instructional materials on text generation as part of my textbook How Arguments Work. I’ll continue to write, lead faculty workshops, and advise on a few nonprofit or not-for-profit educational apps that draw on AI. I imagine I’ll be touching on familiar themes:
- an ethic of labeling and questioning AI text,
- building critical AI literacy more broadly,
- educator and student involvement in AI regulation,
- maintaining space for learning activities that are not AI-mediated,
- emphasizing the importance of human-written text as a mediator of human relationships,
- emphasizing the ongoing value of the writing process as a way to sharpen our thinking,
- playfulness, curiosity, boldness, and skepticism as we explore ways to teach about and with AI.
If you’d like to get in touch to discuss Generative AI or you have work you’d like to share with the community, please contact me via the form below: