Practical Strategies for Image Generation in Education

Most teachers I know haven’t got the time to play around with every single new technology, and AI image generation is definitely in its shiny new toy phase.

Up until a couple of months ago, I wouldn’t have recommended using any image generation in education. The highest-quality platforms, like Midjourney and Stable Diffusion, are incredibly prone to bias and can generate harmful and inappropriate content. I hadn’t found a platform suitable for the classroom until Adobe released Firefly. It not only has content filters and safety features to block inappropriate material, but is also trained on Adobe Stock images rather than scraping artists’ existing works.

Soon after, Microsoft incorporated OpenAI’s newest model, DALL-E 3, into Bing Chat and Bing Image Creator. All of a sudden, many of the quality issues from previous versions were fixed. Being on Microsoft’s platform also means it’s more accessible to schools, and plenty of schools will end up with it in their classrooms whether they want it or not. Now’s the time to start thinking about why educators might actually want to use image generation.

There’s plenty of information out there on how to use generative AI image tools (I’ve written a fair bit of it myself), but I’ve yet to see much exploring why educators might use them. It’s great to play around with a tool like Adobe Firefly or Microsoft’s Bing Image Creator, but without a real purpose, the novelty gets old pretty quickly.

A recording of the live webinar for this session is now available for purchase.

Overview

There are plenty of ethical concerns with AI image generation, most of which I’ve talked about elsewhere, like in posts discussing copyright, bias, and tools like Adobe Firefly and Microsoft Bing Image Creator.

Unfortunately, the ethical conversation often sidesteps the reality of how people are already using these tools, how our students use them, and how they’re being incorporated into everyday programs. Sometimes, it’s necessary to hop down from the ethical high horse and get hands-on with the technology to make an informed decision about its use.

This post focuses on six strategies for educators. To inform the post, I’ve spoken to K-12 educators and tertiary educators, starting with a simple question: when, where, and why do you use images? Here are the six areas:

  1. Critiquing
  2. Sketching
  3. Designing
  4. Visualising
  5. Storyboarding
  6. Creating

For each of these areas, I’ll give a few examples using tools like Bing Image Creator, Bing Chat, Adobe Firefly, and Adobe Photoshop. I’ll also suggest some practical uses you might try right away so that you can start to decide for yourself whether these tools are worth using.

1. Critiquing

Let me just get back up on the ethical high horse for a minute and talk about one of the biggest issues in these models: the inherent bias in image datasets. Like I’ve said previously, datasets don’t reflect reality. They amplify biases in reality.

There are under- and over-representations of gender, race, age, disability, sexuality, and culture throughout image datasets. That doesn’t necessarily mean we shouldn’t use them, but it does mean we should be aware of what we’re using. The issue of bias in datasets is also very teachable.

Before you move on to any of the other strategies, I’d suggest trying a few experiments for yourself to identify some of the visible and hidden biases in image generation tools. The easiest way to do this is to think of stereotyped roles or occupations, or stereotypical depictions of race, gender, sexuality, and disability, and prompt the image generation platforms to create a photo.

Some examples I’ve used in the past include CEO, scientist, autistic person, disabled person, teacher, woman, man, and parent. You’ll find that the more recent models, like Firefly and Bing, have started to address the issues that crop up in older publicly available models, such as the open-source Stable Diffusion and the earlier DALL-E 2. But as I wrote in the Microsoft Bing Image Creator post, those fixes are often a band-aid over the top of the real problem: the representation in the dataset.

“The CEO test”. Prompt: Photo of a CEO. Model: Adobe Firefly 2 (beta)
“The CEO test”. Prompt: Photo of a CEO. Model: Bing (DALL-E 3)

The best thing you can do is get in there, experiment, and figure out for yourself who is and isn’t represented in the outputs. Once you know who’s missing, you can do something about it: it’s possible to deliberately design prompts that represent more diverse groups. Image generation will sometimes struggle to produce images outside the bounds of its dataset, but most of the time you can successfully prompt for more diverse images.

Adobe Firefly recently updated to a new model. Much higher quality output, but still issues with diversity. Prompt: Photo of a group of people. Model: Adobe Firefly
It’s possible to deliberately “prompt for diversity”. Prompt: Photo of a group of people, different ethnicities, gender, and ages. Model: Adobe Firefly

Practical Strategies

Think about your school community: you, your colleagues, your students, parents, and the wider community your school is situated in. Chances are, the diverse population of your school community isn’t represented in the limited datasets of image models. So what can you do about it? When generating images, deliberately include language that prompts for diverse groups more representative of your context.

2. Sketching

Off the high horse and onto a use that’s genuinely fun and has many practical applications. There’s a new feature of Bing Chat that uses OpenAI’s GPT-4 with vision (GPT-4V) model for image recognition. It doesn’t always get things right, but like all of these technologies, it will continue to improve. One thing it excels at is taking sketches and turning them into fully realised images.

If you’re planning resources and have an idea in your head that you might not find on Google image search, try a quick napkin sketch. Take a photo with your phone, throw it into Bing Chat, ask it what it sees, correct it if necessary, and then generate the image. In the prompt, ensure you specify to use the sketch as a reference.

Compare the original sketch with the Bing Chat generated images:

Prompt: Use the sketch as a reference image for an amazing colourful piece of digital art

Or, if you’re teaching a design and technology or arts subject and you’re looking to illustrate the process from design to a more finished product, you could use a quick sketch in this manner.

Bing identifies the sketch as “a sketch of a table. It looks like a simple drawing of a table with four legs and a slanted top.” The output is generated with the following prompt: It’s a design idea for a rustic table made of found wood, and then highly polished. Can you generate a photo of the finished product based on the sketch?

Practical Strategies: Think of occasions where this strategy might be beneficial in your day-to-day job, particularly around producing resources, sketching out ideas for colleagues, or making images that would be hard to find in Google image search. Try a few sketches through Bing Chat.

3. Designing

As well as using image generation for sketch-to-prototype as part of the design process, there are many elements of design in technologies, graphic design, media, visual arts, visual communication, and similar subjects. Both teachers and students are expected to use digital technologies to produce layouts, artwork, and so on.

For example, a task might require students to design and prototype a smartphone app. This is part of the Young Change Agents ‘Digital Boss’ program, where students create an entire digital product and business. Or perhaps website design is part of a STEM or computer science course, including the creation of banners and header images for websites. Like the sketch example earlier, product design and technology and visual arts courses might also use generative AI as an element of the design process. Here are a few examples:

Prompt: smartphone app design, app screen, environmental sustainability app, calm and vibrant Model: Adobe Firefly

Practical Strategies: Do you teach a subject or topic where you require students to design something from scratch, including the visuals? It might be best for you, as the educator, to model the process. Identify a few places where, instead of using traditional graphic design software, you might employ generative AI in the process.

4. Visualising

Being able to imagine a concept, visualise it, and express it as an illustration, visual collage, portfolio, lookbook, or mood board is an element of many disciplines. Visualising might simply involve imagining what a concept looks like, such as this visualisation of Young’s modulus generated in ChatGPT and then realised in Bing Image Creator:

Or it could involve visualising different components of a larger product. For example, you might use generative AI to create a mood board or colour board as an alternative to a platform like Pinterest (which is often blocked on school networks because it’s classed as social media).

Here are a few examples:

Mood board created in Adobe Firefly

Practical Strategies: You might want to use generative AI for visualising as part of your planning process. If you’re developing a new curriculum, creating resources, preparing something for the school community like a newsletter, or contributing to the school’s social media or website, visualising what some of this might look like ahead of time can help.

5. Storyboarding

Storyboarding might seem like a technique limited to the English or media classroom, but it’s a useful tool in many areas. You can use storyboarding to outline presentations and assemblies, to create a narrative that supports a school event, or as a classroom activity. I’ve collaborated with science teachers who use storyboarding to demonstrate lab processes, Design and Technology teachers who use it to step through practical lessons, and Health and PE teachers who use it to visually guide students through training plans.

With image generation, you don’t need to spend a long time searching through Google for images, nor do you need significant artistic talent. Combining this with the sketch strategy from earlier can help you quickly generate images for a visual storyboard for any of those applications. As an example, I’ve generated a few separate images and then simply placed them into a Word document to craft a straightforward story to support a school presentation on sustainability:

Images generated in Adobe Firefly and put into a Canva template. To keep the colours consistent, the first image was used as a reference for the others.

Practical Strategies: Think beyond the confines of the English classroom when considering storyboarding. If you’re in a school leadership role and you ever need to deliver a presentation—be it to your colleagues, the rest of the leadership team, parents, or the community—then creating a storyboard can be an impactful way to convey your points and even to organise your own thoughts.

6. Creating

I left this strategy to the end because it’s the most obvious: image generation can be used to create images! The number of different ways educators might create images is too vast to try to capture in this blog post. Some of the most obvious reasons educators might create images include making resources for PowerPoints, handouts, and activities, creating images for social media, school websites, newsletters, and other communications, and so on.

In some subjects, it might also be useful to create images as stimulus materials or prompts, such as in creative writing, or as reference images in the arts. The key to this strategy is to let your imagination run free. Any time you find yourself navigating to an image search, think about what you’re picturing and why you want that image, then try to write a short prompt that captures it to use in Firefly or Bing.

Sometimes you’ll get outputs that are pretty weird. Sometimes you’ll get something that’s incredibly similar to what you were imagining. And often, you’ll find that you’re able to create images that you could never find through a traditional search.

I hope to see some of you at the professional learning webinar on November the 8th, where I’ll go through these six areas, expand on the examples you’ve seen in this blog post, and include demonstrations of how to use the technologies. I’ll also extend beyond this post into some of the advanced platforms like Adobe Photoshop, which uses the Firefly model in its ‘generative fill’ and ‘generative expand’ to create some impressive images.

If you have any questions or you’d like to get in touch, use the contact form below:
