Generative AI doesn’t “democratize creativity”

Last weekend, I saw a LinkedIn post from the influential AI educator Ethan Mollick, in which he shared a YouTube channel of videos created using GenAI.

Glaring copyright and IP issues notwithstanding, one thing in particular about the post rubbed me up the wrong way: the term “democratizing” applied to AI and creativity.

Link to original post

Mollick shares a lot of useful information about Generative AI on LinkedIn, and it’s definitely worth joining his 100,000+ followers for the heads-up on new research, the technology releases to which he tends to get early access, and general insights about GenAI.

But I felt compelled to challenge this idea of “democratising creativity” because, frankly, I’m sick of hearing it. I shared the post with some of my thoughts, and it’s safe to say that it created all of the feels. There was a good mixture in the comments of “hell yeah” enthusiasm and “you’re absolutely wrong” critique, so I decided to spin it up into a longer response.

I want to explore some of the arguments for and against the idea that Generative AI technologies such as ChatGPT, Midjourney, Udio, Sora, and others “democratise creativity”. I can only speak from my own experience, and perhaps I speak from a privileged position as an author, a confident writer who actually enjoys writing, and a sometime amateur musician, illustrator, and code hobbyist. I’m a creative type, I suppose, so perhaps my view is narrowed by my perspective on what creativity “is”. But it’s always worth exploring both sides of an argument.

Problematising “democratising”

I want to kick off with the term itself (and the slightly chewy phrase “problematising democratising”: say that ten times in a row). “Democracy” is a loaded term. At its most basic level, it means the rule of the majority. But like many political and social terms, “democracy” and “democratic” have been co-opted to mean pretty much whatever people want them to mean. Applied to technology, the term is no less ambiguous.

TikTok, for example, is “a threat to democracy”. This argument stems from TikTok’s ownership by the Chinese company ByteDance, its data collection policies, and how that data may be used to influence people. Meta – ostensibly just as guilty as ByteDance of manipulating its users – claims it is “advancing democracy” through, among other things, “countering government abuse of digital technologies.” OpenAI were among the tech companies begging to be regulated in 2023, while simultaneously working on ways to “make AI more democratic” through crowd-sourcing values. But generative AI, largely spurred on in the last two years by OpenAI’s cavalier approach to product launches, is considered by some to be potentially catastrophic for democracy.

And then we get to the concept of “democratising creativity”. What does it mean? And where does the phrase come from? A (good ol’ fashioned) Google search for the phrase “AI democratizes creativity” results in an avalanche of LinkedIn articles, tech-buzz blog posts, and presumably GenAI-generated spam filled with “unleashing” and “delving”.

But amongst the dross, there are some more legitimate writers using the phrase. Ethan Mollick, who kick-started this entire post, is one. Matthew Hutson, a contributing author for the New Yorker who has also written for Science, Nature, Wired, The Atlantic, and The Wall Street Journal, appears to be another. In an interview published on Zendesk, Hutson is quoted as saying:

“The major upside of AI is that it democratizes creativity,” Hutson said. “So you might have someone sitting at home with an idea for a movie, and they can sort of work on it on their own, they don’t need to raise millions of dollars in funds, they can just sort of experiment and play around and put it out there. And it’ll lead to a huge sort of a Cambrian explosion and the evolution of different perspectives and different voices and different ideas out there in the marketplace.”

https://www.zendesk.com/au/blog/hutson-ai-podcast/

From this, and Mollick’s post, it appears that the phrase means AI will make the process of idea generation, experimentation, and “play” (or “fun”, in Mollick’s post), more accessible to everyone.

I’m still not convinced.

For and against

Because I’m not averse to a little irony, I have used Claude 3 Opus – a generative AI chatbot from Anthropic – to summarise the LinkedIn post and its subsequent discussion. This is a decidedly uncreative use of GenAI, but one to which the technology is perfectly suited.

Note that these are not my arguments for and against – I’ll get into those later. This is a summary of Mollick’s original post, my subsequent re-share, and the comment thread below my post as of Sunday morning:

Arguments supporting the idea that AI democratizes creativity:

  1. AI lowers the barrier to entry in creative fields by making it easier for more people to produce impressive outputs.
  2. AI can help generate ideas, variations, and rough drafts, empowering people to overcome creative blocks and accelerate the initial creative process.
  3. AI boosts productivity by allowing creators to achieve desired results more quickly.
  4. AI closes the technical gap between vision and creation, enabling more people to express their ideas.

Arguments against the idea that AI democratizes creativity:

  1. Creativity was never undemocratic before AI, as it has always been accessible to anyone willing to put in the hard work, discipline, and skill development.
  2. AI makes it easier to produce certain kinds of media using technology built on a large corpus of unlicensed creative work, which raises questions about the value of traditional skills and fair compensation for original creators.
  3. AI does not replace the need for creativity, which requires ideas, problem-solving, and a unique voice.
  4. The notion of AI democratizing creativity is an illusion for insecure people who value their incompetence, as true creativity requires hard work, discipline, skill, inspiration, and ability.
  5. AI is more undemocratic since it “steals” from the best artists in the world to rehash their work, which is unethical compared to the traditional way of developing skills through practice and learning.
  6. The phrase “democratizing art” is an excuse for ignorance, lack of understanding of how other professions work, and laziness.
(Claude 3 Opus summary of the comment thread from this LinkedIn post, as of Sunday 9am.)
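For anyone curious about reproducing this kind of summarisation programmatically, here is a minimal sketch using Anthropic’s Python SDK. It’s an illustration rather than a record of my actual workflow: the prompt wording, the placeholder thread text, and the system instruction are assumptions, not what I ran.

```python
# Minimal sketch: summarising a comment thread with Claude 3 Opus via
# Anthropic's Python SDK. The prompt and placeholder text are illustrative;
# substitute the actual post and comments.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

comment_thread = """
[Paste the original post, the re-share, and the comment thread here.]
"""

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    # Ask for a summary, not a contribution to the debate.
    system="You are a neutral summariser. Do not add arguments of your own.",
    messages=[
        {
            "role": "user",
            "content": (
                "Summarise the following LinkedIn discussion as two lists: "
                "arguments supporting the idea that AI democratizes creativity, "
                "and arguments against it.\n\n" + comment_thread
            ),
        }
    ],
)

print(message.content[0].text)  # the summary as plain text
```

The only real design choice here is the system instruction asking the model not to add arguments of its own, which matters when you want a summary of a discussion rather than another voice in it.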

There are valid points on both sides, but obviously I tend to agree more with the arguments against. I will concede that GenAI may lower the “barrier to entry” for some elements of the creative process. For example, it is far easier to create digital images with Adobe Firefly than with Adobe Illustrator: Firefly requires only a text prompt, whereas Illustrator could take a lifetime to master (ignoring the fact that Firefly is now being embedded into all Adobe image products).

Likewise, it is easier and faster to generate ideas and brainstorms with ChatGPT than it is to sit down with a pen and paper and generate them yourself. But easier doesn’t necessarily equate to better, and it also isn’t inherently “more democratic”. I’ll talk more on this point later in this article.

I stop well short of the harsher critiques in the comment thread (such as people who say AI democratises creativity are “insecure” or “lazy”). I use GenAI extensively myself, and I consider myself to be neither of those things, and still creative. There has to be a middle ground between the overhyped “embracing” of GenAI (another term I find problematic – it’s not a teddy bear…) and the accusation that anyone who uses GenAI is inherently flawed or creatively inept.

“AI democratizing skills is like shoplifting democratizing shopping.”

And the prize for best comment goes to illustrator Marc Andresen, for this one-liner which effectively sums up one of the biggest concerns in the arts community about Generative AI. Essentially: most GenAI models are built on unlicensed, “stolen” content.

This is a hugely contentious issue, and one which I’m not going to dwell on in this article. I recently ran a session for the Copyright Agency in Australia, and it’s safe to say that the provenance of AI training data is a huge concern for everyone involved, from creators to organisations like the Copyright Agency which help with the licensing and attribution of intellectual property. As an Australian author, I receive royalty payments from the Copyright Agency annually, based on a range of factors including borrowings and educational use of my texts. But copyright law in Australia, as in the rest of the world, was unprepared for Generative AI.

Thus far, developers like OpenAI have been “getting away with it” based on laws which permit fair use of scraped web data for research purposes. To create a GPT-class Large Language Model, the datasets need to be… well… large. This generally means an indiscriminate scraping of the web, and datasets which may well include copyrighted materials. Similarly, image generation developers like Stability AI have used scraped image datasets which contain unlicensed intellectual property.

OpenAI and other AI developers are being sued the world over, and things are showing signs of changing. In the EU, the recent AI Act attempts to balance legitimate research purposes with recognition of rights holders’ privileges. In the US, Congressman Adam Schiff has just introduced a bill titled the Generative AI Copyright Disclosure Act, which would require developers to notify the Register of Copyrights of any copyrighted materials used in training data 30 days prior to the public release of any new model. In addition, Ed Newton-Rex – formerly of Stability AI – has launched Fairly Trained, a new organisation focused on transparency and the licensed creation of training data. It offers certification to organisations that can demonstrate their models are trained on licensed data.

But the problem won’t go away any time soon. Just as quickly as one developer is sued for copyright infringements, another pops up with an equally shameless disregard for intellectual property. In the past couple of weeks, both Udio and Suno have released public access to their music generation models, both of which are almost certainly built on top of unlicensed copyrighted training data.

Despite the jump in quality of these music generators from past efforts, not everyone is impressed.

Barriers to entry

Another key argument in favour of Generative AI democratising creativity is that it “lowers the barrier to entry”. As I said earlier, I agree that GenAI tools can create shortcuts in technologies and tools which otherwise might have a steep learning curve. The embedding of Adobe’s Firefly model into Photoshop via the new “generative fill” and “generative expand” tools, for example, makes using that software very straightforward for tasks which might otherwise take hours to learn.

Adobe has also recently hinted at a new tool for audio creation with GenAI rather unimaginatively called Project Music GenAI Control (hopefully it will get a new name before release).

Having used, and taught, software like FL Studio, Ableton Live, and Logic Pro, I can see how a tool like Project Music GenAI Control would be a useful part of the digital music creation workflow. Each of the “traditional” tools mentioned above has a pretty steep learning curve, particularly if, like me, you’re trying to teach it in a two-hour-per-week vocational subject. Ditto for teaching tools like Adobe Photoshop and Premiere Pro in a Media elective.

BUT, and it’s a big, bold, capital-lettered but, are barriers to entry for these tools necessarily a bad thing? Quality tools don’t become industry standard because of their simplicity or limited features: it’s partly the complexity and nuance of software like these which makes them so valuable to creatives who are willing to put in the time to learn how to use them to their fullest.

As for other “barriers to entry” for creativity, those in favour of the democratising narrative suggest that everything from cost, to accessibility, to geography, to social class can stand in the way of making art, and that GenAI is the solution.

I’m not having it.


There is little standing in the way of an average person learning a creative skill. Arguing that GenAI democratises creativity because it improves access to technologies which can be used in the creative process is flawed logic. If an individual has access to Generative AI – requiring a device, an internet connection, and, in the case of many truly useful applications, a subscription fee – then they have access to non-GenAI tools as well.

For example, anyone who has access to Adobe Firefly has the means to access Adobe Illustrator. There is no technological or economic barrier which separates the two. There are open source GenAI image generation models which can be accessed at no cost, but there are also non-GenAI tools (e.g., GIMP) which can be used for free.

For music generation – with tools such as Udio and Suno, for example – the argument runs pretty much the same. It is possible to generate complete, coherent (though short) music clips with these platforms. But it is possible – albeit more time consuming, and with a steeper learning curve – to achieve the same or better with traditional methods. A single person can perform all of the roles of a complete band, music producer, and production company, as long as they are willing to dedicate the time and effort, and develop the skills, required.

As for the LinkedIn chat about privileged pianists… well. We can save the back-and-forth about Ray Charles, Elton John, and Chopin for the comments thread.

GenAI and creativity: not a democracy, but not hopeless either

If you want something a little more academic to chew on regarding GenAI and creativity, I’d suggest checking out Professor David Cropley. Cropley has written some extensive critiques of the off-handed ways people attribute creativity to GenAI, including through the use of human-centred creativity tests. But, just as I’m suggesting in this post, Cropley doesn’t argue that we should avoid AI entirely either. In fact, his most recent article, Fit-For-Purpose Creativity Assessment, presents a compelling argument and significant data to support the use of GenAI in assessing human creativity. Cropley and co-authors Caroline Theurer, A. C. Sven Mathijssen, and Rebecca L. Marrone argue that, given GenAI is more adept at routine, predictable tasks, humans “will remain at the core of creativity” for the foreseeable future.

I don’t think it serves anyone to go too far in either direction in debates like this. Yes, GenAI is a hugely problematic, ethically nightmarish technology that is being deployed by a handful of techbros who seem to have no regard for other people’s rights. But it also works, and it’s quickly becoming ubiquitous. Regulation has been slow to move, and GenAI isn’t going anywhere. Sitting so high up on the ethical high horse that you can’t engage in the discourse doesn’t achieve anything. I much prefer the position of people like Ed Newton-Rex and academic-activists like Dr. Joy Buolamwini, who are both deeply entwined with the technology and ethically driven.

Buolamwini’s Algorithmic Justice League (AJL), for example, is currently publishing a lot of support and advice for creatives around copyright. Dr. Buolamwini, who has a computer science background and a PhD from the MIT Media Lab, is also a poet, artist, and author, and understands the industry at a deep level. You can check out AJL’s #MyWorkMyRights here: https://www.ajl.org/writers

📢 What makes a human?
Our stories matter. Our rights matter. The unique voice and value of writers is under threat. This isn’t just about technology; it’s about preserving cultural heritage and ensuring that creativity remains a human endeavor. Stand with us as we advocate for the rights of writers in the age of AI.
➡️ Make your voice heard at ajl.org/writers. #MyWorkMyRights @ajlunited
via: https://www.ajl.org/writers

People like Newton-Rex and Buolamwini are both on the inside of AI (including GenAI in Newton-Rex’s case, and predictive/recognition algorithms in Buolamwini’s) and outside of it, pushing back.

And I think that’s the approach we have to take with AI and creativity. A more nuanced understanding that, yes, AI can be used in creative ways, but no, it is not a “democratising” tool.

To suggest that the problematic industry behind Generative Artificial Intelligence has anything to do with democracy is an insult to both the idea of democracy and the artists, writers, musicians, and academics whose life’s work has been reduced to vectors and weights in models that only generate profits for the already wealthy. Artists won’t be the ones getting rich from GenAI, even if they learn to embrace, harness, navigate, and delve into it.

If you’d like to get in touch to discuss Generative Artificial Intelligence policy or consulting, or just throw around a few creative ideas with a (mostly) real human, get in touch via the form below:
