Some technologies are created with values, others have values thrust upon them

Back in 2023, I wrote an extensive series of articles called Teaching AI Ethics in which I explored nine areas of ethical concern with artificial intelligence. In those early articles, I argued that it is absolutely necessary to wrestle with the ethical challenges of artificial intelligence, particularly as generative applications such as ChatGPT, Microsoft Copilot, and Google Gemini become ever more ubiquitous.

The series has since become the most visited part of my website, with tens of thousands of views, and it has also been spun out into an open educational resource, a webinar series, and an upcoming online course. But the discussion of AI ethics doesn’t end with simply acknowledging that the problems exist.

In this post, I’m going to keep pushing at the ethical complexities of artificial intelligence, but I’m also going to try to find ways in which the use of these technologies can be justified.

Conflicting Perspective/s

I’m a consultant who works with generative AI, a business owner who uses generative AI, an author, a PhD candidate, and a former teacher. As such, my opinions on the use of these technologies are… mixed.

The researcher in me is drawn to critique these technologies, which perpetuate bias, carry huge environmental costs, pose risks to our personal privacy and safety, and take a heavy toll on already disadvantaged people.

The consultant in me wants to help people understand these ethical concerns, but also how to use the technologies. I work with K-12 schools, universities, and businesses to write sound policies and guidelines, and to aim for appropriate and ethical use of AI.

As the owner of a company of one, which in the last 12 months has grown very rapidly, I use generative artificial intelligence in many ways, including for automations, coding assistance, and in my writing and publications.

As a former teacher and a sessional academic in initial teacher education, I am constantly torn between educating people about the potential benefits of the technology and its huge challenges. I’m also always trying to see the technology from the students’ perspectives.

I don’t mind the struggle; in fact, I feel that it’s possibly the only healthy way to respond to these technologies. But trying to articulate ways in which generative AI use can be positive and productive, while knowing many of the contentious ethical concerns, does create a lot of tension.

Talking ethics with generative AI chatbots

I have largely avoided discussing important issues with chatbots for a few reasons. First of all, I find technology applications like ChatGPT to be nauseatingly techno-positivist and optimistic. By now, many of you will be as familiar as I am with phrases like “navigating the ever-evolving landscape of generative AI” and imperatives compelling us to “delve in” and “embrace technologies.” I’m not exactly sure why LLMs like GPT use so much nautical language (navigating, delving, exploring), or why we are incessantly told that we need to embrace technology like it’s a loved one or a favourite teddy-bear, but it’s insufferable.

Trying to hold a serious conversation with ChatGPT about generative artificial intelligence is like talking to a cultist who’s seen the light. There are also the issues of bias, perspective, and all of the other ethical concerns discussed in my original series, meaning that I find it generally more palatable to have these conversations with an actual human who knows what they’re talking about.

But as each new chatbot is released, I like to present a few more complex questions, and so recently, I engaged with Anthropic’s Claude 3 Opus in a conversation about generative AI. Here’s how it went.
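The conversation below was conducted through the chat interface, but for anyone who would rather reproduce this kind of exchange programmatically, here is a minimal sketch using Anthropic’s Python SDK. Treat the model identifier and setup as illustrative assumptions rather than my actual workflow.

# A minimal sketch (not the exact setup used for this post) of sending the
# opening prompt to Claude 3 Opus via Anthropic's Python SDK. Assumes the
# `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",  # Claude 3 Opus
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "Let's talk ethics and AI. You will draw on knowledge of "
                "multiple cohesive and sophisticated ethical frameworks in "
                "this discussion. I will ask you questions and you will "
                "respond. Your responses must be framed by ethical theory."
            ),
        }
    ],
)

# The reply arrives as a list of content blocks; the first block holds the text.
print(response.content[0].text)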

Claude on ethics

PROMPT: Let’s talk ethics and AI. You will draw on knowledge of multiple cohesive and sophisticated ethical frameworks in this discussion. I will ask you questions and you will respond. Your responses must be framed by ethical theory.

The conversation begins with a fairly broad and generic prompt. As a result, I get a pretty generic response from the model:

Certainly, I would be happy to engage in a discussion about ethics and AI, drawing upon various ethical frameworks to inform my responses. As an AI system, I have been designed to consider and reason about ethical questions, and I am well-versed in a range of philosophical perspectives on morality and ethics.

Some key ethical frameworks that are likely to be relevant for this discussion include:

Utilitarianism – The view that the morally right action is the one that produces the greatest good for the greatest number. A utilitarian analysis of AI ethics would likely focus on weighing the potential benefits and harms of AI systems on human wellbeing.

Deontology – The view that there are certain inviolable moral rules or duties that must be followed regardless of consequences. A deontological approach to AI ethics might focus on identifying fundamental rights and responsibilities that should constrain the development and use of AI.

Virtue ethics – An approach that focuses on moral character and what virtues an ethical person or entity should embody. Analyzing AI ethics through a virtue ethics lens would involve asking what traits or guiding principles should be built into AI systems.

Care ethics – An ethical framework that emphasizes empathy, compassion, and attending to the needs of particular others for whom we take responsibility. Applying care ethics to AI would highlight the importance of designing AI systems that can engage in context-specific moral reasoning and that prioritize the wellbeing of those impacted by their actions.

These are just a few of the many ethical theories and frameworks that can inform our thinking about the ethics of artificial intelligence. I look forward to exploring how these different perspectives can shed light on the moral questions and challenges posed by AI systems. Please feel free to begin by asking your first question!

Claude 3 Opus response

Utilitarianism and deontology are standard LLM fare for ethical frameworks. It’s perhaps a bit worrying that the default position of GenAI is a ‘greatest good for the greatest number’ response to most problems…

All of these responses are pretty vanilla, and they don’t offer much in the way of advice on actually coming to terms with or responding to the ethical concerns.

PROMPT: Too generic. I am looking for much more specific ethicists and approaches which have been applied to understanding the digital world.

I apologize for my previous response being too generic. You’re absolutely right that there are specific ethicists and approaches that have been applied to understanding the ethics of the digital world and artificial intelligence. Let me provide a more targeted response.

One key figure in digital ethics is Luciano Floridi, who has developed the philosophy of information as a framework for understanding the ontological and ethical implications of information technologies. Floridi argues that we are living in an “infosphere” where the distinction between online and offline is blurring, and that we need an information ethics that can guide our actions in this new reality. He proposes that the fundamental ethical principle should be the flourishing of informational entities, which includes both human beings and artificial agents.

Another important perspective comes from the field of machine ethics, which aims to create AI systems that can reason about ethics and make moral decisions. One approach within machine ethics is bottom-up morality, as advocated by thinkers like Wendell Wallach and Colin Allen. The idea is to create AI systems that can learn ethical principles through experience and feedback, rather than being programmed with top-down rules. This approach draws on the work of developmental psychologists like Jean Piaget and the idea that morality emerges through interaction with the environment.

The ethics of algorithms is another key area of focus. Cathy O’Neil, in her book “Weapons of Math Destruction,” argues that many algorithms perpetuate bias and discrimination, and that we need algorithmic accountability and transparency to mitigate these harms. Meanwhile, Tal Zarsky has proposed the idea of “transparency by design” as a way to build ethical considerations into the development of algorithmic systems.

Finally, the ethics of AI is often approached through the lens of human rights. The Toronto Declaration, developed by Amnesty International and other organizations, asserts that the development of AI should respect and promote international human rights law. Similarly, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has developed a set of principles for ethically aligned design, which includes respect for human rights, wellbeing, accountability, and transparency.

These are just a few examples of the specific ethicists and approaches that are shaping our understanding of digital and AI ethics. The field is rapidly evolving, but these perspectives provide a foundation for grappling with the complex moral questions posed by emerging technologies. Let me know if you would like me to expand on any of these ideas or if you have additional questions!

I found it interesting how Claude 3 handled the request for more information. Claude has no internet access, meaning it is entirely reliant on its training data for knowledge. The data cut-off for this model is August 2023, and you might expect more hallucinations than there actually were. Cathy O’Neil’s book is great, though not really an ethical approach in and of itself. Tal Zarsky’s ‘transparent predictions’ work predates Generative AI and deals with data analytics and privacy, though the idea of transparency by design could certainly be applied to Generative AI.

The IEEE Global Initiative is legit, as is the Amnesty International Toronto Declaration, though again these frameworks are applied to other forms of predictive algorithms, and not GenAI per se. Still, marks for effort, and much better than a typical GPT response.

Time to get a bit more specific.

PROMPT: How is it ethically conscionable to use generative AI products like ChatGPT and Claude, knowing that there are such inherent issues in the technology as bias, discrimination in the output, copyright and intellectual property theft in the construction of models, and the human labour cost of low-paid workers in poor countries labelling and classifying abusive and inappropriate data?

A few patterns started to emerge at this point. Claude “hedges” a lot more than ChatGPT, avoiding GPT’s overly optimistic perspective on technology. It will, like all LLMs, backtrack, apologise, and pander to every request. However, it also “defends” some of its standpoints. For example, when I criticised the doctrine of double effect, it offered a few counterpoints and responded:

So while I stand by the idea that there are some cases where using generative AI can be ethically justified on the grounds of double effect, I acknowledge this isn’t a wholly satisfying answer to the deeper issues you raise. At best, it’s a framework for individuals to assess the ethics of their own use of AI, but it doesn’t resolve the more systemic problems with how these technologies are being developed and deployed.

Claude defends its arguments. Sort of.

The conversation continued in this vein for a while, and I’ll spare you the entire transcript. The model did offer an interesting analogy at one point, which is worth further exploration.

A very common response to AI use is the idea that all technology is ethically problematic, and AI is no different. This was posed to me by a Year 12 student who argued that, for instance, driving cars is ethically problematic due to issues such as environmental cost, road traffic accidents, and so on. The analogy also came up in a recent comment thread on LinkedIn, wherein my PhD supervisor, Dr Lucinda McKnight, had spoken with the Association of Heads of Independent Schools of Australia (AHISA) and stated that “there is no ethical way to use AI.”

McKnight’s response to the “cars are unethical” debate is that it is not “a good logic that one planet-harming practice justifies another”. So, I posed the problem to Claude later in the chat, with the prompt:

A common argument is that “all technology is inherently ethically flawed, and AI is no different”. For example, “driving cars is unethical due to pollution, accidents, etc., and yet many people choose to drive.” Challenge this argument and suggest alternatives.

The response was actually a lot more nuanced than anything I’ve seen so far from ChatGPT or other LLMs:

The argument that “all technology is inherently unethical and therefore AI is no different” is a form of moral relativism that essentially absolves us of the responsibility to critically examine the specific ethical implications of new technologies. It’s a way of throwing up our hands and saying “well, everything is problematic, so why bother trying to do better with AI?” But I believe this line of thinking is misguided and even dangerous.

While it’s true that virtually all technologies come with some risks and potential for harm, the scale and nature of those risks vary widely. The ethical implications of car usage, while significant, are relatively well-understood and there are established systems (however imperfect) for managing them – things like safety regulations, emissions standards, traffic laws, and so on. The harms of cars are also more limited in scope – they don’t generally threaten to undermine fundamental rights or reshape power structures in the way that AI systems can.

With AI, and particularly generative AI, we’re dealing with a technology that is much more powerful and far-reaching in its potential impacts. AI systems can perpetuate and amplify social biases at an unprecedented scale, they can be used to manipulate and deceive, they can automate discrimination, and they can concentrate power in the hands of a few tech giants. The “move fast and break things” approach that has characterized much of the tech industry to date is simply not adequate for managing these risks.

Additionally, the “all technology is unethical” argument fails to recognize that we do have the ability to shape the development and deployment of new technologies in accordance with our values. We’re not powerless in the face of technological progress. By proactively considering the ethical implications of AI and working to mitigate potential harms, we can create a future in which the benefits of this technology are more evenly distributed and its risks are more effectively managed.

So what might be a better analogy for thinking through the ethics of generative AI? One possibility is to compare it to the development of nuclear energy. Like AI, nuclear power is a technology with immense potential benefits – it could provide clean, abundant energy and help to address climate change. But it also comes with catastrophic risks – the potential for nuclear accidents, weapons proliferation, and long-term environmental damage.

With nuclear energy, we as a society have recognized that the stakes are too high to simply let the technology develop unchecked. We’ve put in place strict regulations, international treaties, and oversight mechanisms to try to harness the benefits of nuclear power while minimizing its risks. We’ve also recognized that some applications of nuclear technology, like weapons development, are simply too dangerous to pursue regardless of their potential strategic value.

I believe we need to take a similar approach with AI. We need robust public dialogue and democratic governance to guide the development of these technologies in accordance with our shared values. We need strict ethical guidelines and accountability measures for AI companies. And we need to be willing to put some applications of AI off-limits if the risks are too extreme – even if that means foregoing some of the potential benefits.

Another analogy that I find helpful is to think of AI like a powerful drug – one with the potential to treat diseases and enhance human capabilities, but also with the potential for addiction, abuse, and devastating side effects. With a new drug, we would never simply release it onto the market without thoroughly testing its safety and efficacy. We have an extensive system of clinical trials, FDA approvals, prescription guidelines, and post-market surveillance to try to maximize the benefits of pharmaceuticals while minimizing their harms.

We need to develop similar systems for “trialing” and “prescribing” AI technologies. We need rigorous testing and auditing of AI systems before they are deployed, ongoing monitoring of their impacts once they are in use, and clear guidelines for which applications are safe and beneficial versus which are too risky or harmful. And just as we wouldn’t accept a pharmaceutical company saying “well, all drugs have side effects, so why bother trying to make ours safer,” we shouldn’t accept AI companies downplaying the ethical implications of their products.

Ultimately, the key ethical question around generative AI is not whether it is “inherently” good or bad, but rather how we can proactively shape its development and use to align with our values and promote the wellbeing of humanity as a whole. This is a challenge that will require ongoing work from ethicists, policymakers, technologists, and the general public. But it’s a challenge we must take up with the utmost seriousness and care, because the stakes could not be higher.

The tone of the response is still bordering on twee, especially closing statements like “it’s a challenge we must take up with the utmost seriousness and care, because the stakes could not be higher”. But I didn’t mind the comparison to nuclear energy. Prompting further, I asked for some more solid theory or some particular ethicists who might have useful insights.

Claude suggested Hans Jonas and his work The Imperative of Responsibility, which is, among other things, a treatise on the environmental impact of technology. But many of the ethicists suggested by Claude were male, and even the ethical frameworks it had offered were traditional, masculine approaches. I also found it personally troubling that it had chosen two approaches (the doctrine of double effect, and Jonas’ work) which have been used to justify anti-abortionist ethical stances: but Claude of course knows, cares, and understands nothing about my own attitudes or values.

The values thrust upon AI

As with every LLM, it pays to know what you’re looking for, and to prompt the model towards desired areas. The default position of Claude seems to be, like its counterparts from OpenAI and Google, basically white, male, Western, and slightly liberal (but clearly, at times, with a more conservative undertone).

One commonly criticised feature of Claude, compared with ChatGPT for example, is that its developer, Anthropic, has anthropomorphised the model significantly more than OpenAI has. This is evident throughout the conversation: Claude tends to respond in the first person, as though it has actual opinions or ideas. It can be difficult at times to remember that the model is not thinking or reflecting at all, and is simply doing a sophisticated job of pattern matching and prediction. This is, in itself, an ethical concern: what do we make of technologies which have been deliberately designed to present themselves as human-like?

Teaching AI Ethics is one of six modules in my upcoming online course, Practical AI Strategies. Click here to join the mailing list and stay up to date on its release.

Ultimately, it also pays to remember that Claude and other chatbots don’t actually have values or an understanding of ethics (or anything else). While this was a more interesting conversation than I’ve had with ChatGPT on the subject, it was, nonetheless, a total fabrication. Chatbots have values thrust upon them by developers, through the dataset, and via training and reinforcement. They reflect values, and not always the individual user’s.

This is an important note to end on, since Generative Artificial Intelligence is definitely being used to develop many organisations’ ethical codes of conduct, policies (including those about GenAI), and guidance. Maybe it’s fine to use Claude or other LLMs as a tool to support the writing of such documents; just don’t expect them to have a clue what they’re talking about.

If you’d like to get in touch (with a human) about education policies and guidelines for GenAI, or discuss professional learning and consulting services, use the form below:
