This post is an update to the 2023 article “Teaching AI Ethics: Power.” In that original post, I discussed how the ethical concerns of AI, from bias and discrimination to environmental impact and human labour, coalesce to reinforce and perpetuate societal power structures. I introduced the concept of hegemony to help students understand how powerful technologies can maintain and strengthen existing inequalities.
For the 2023 articles, which include still-relevant case studies and teaching ideas, check out the original series of posts:
The companies developing and deploying the most powerful AI systems in the world can now be counted on two hands: OpenAI, Microsoft, Google, Meta, Amazon, and Anthropic account for the overwhelming majority of generative AI development and deployment globally. These are not scrappy startups disrupting entrenched incumbents. They are the incumbents. Four of the six companies I just listed were already among the wealthiest corporations on Earth before generative AI existed. Anthropic and OpenAI exist only because of billions of dollars in investment from the others.
The past two years have seen these companies translate their existing wealth into unprecedented control over AI infrastructure, talent, data, and policy. The “Stargate” project alone, a joint venture between OpenAI, SoftBank, and Oracle, represents a planned $500 billion investment in AI data centres across the United States. To put that in perspective, Australia’s entire federal budget for 2024-25 was approximately $690 billion AUD. A single AI infrastructure project, controlled by a handful of private companies, now rivals the annual spending of wealthy nations.
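To make that comparison concrete, here is a minimal sketch in Python; the exchange rate is my own illustrative assumption, not a figure from the budget papers:

```python
# Comparing the Stargate commitment (USD) with Australia's federal
# budget (AUD) requires a currency conversion. The exchange rate is
# an assumed illustrative figure, roughly in line with 2024-25 levels.

stargate_usd = 500e9            # planned Stargate investment
aus_budget_aud = 690e9          # approximate 2024-25 federal budget
usd_per_aud = 0.65              # assumed exchange rate

aus_budget_usd = aus_budget_aud * usd_per_aud
print(f"Stargate:           ${stargate_usd / 1e9:,.0f}B USD")
print(f"Australian budget: ~${aus_budget_usd / 1e9:,.0f}B USD")
# On these assumptions, a single private project out-spends a wealthy nation.
```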
This article examines how power has concentrated in the AI industry, what that concentration means for education and society, and how educators can help students understand and critique these dynamics.
Understanding Power and Hegemony
In the 2023 article on Power, I introduced the concept of hegemony, a term popularised by Italian Marxist philosopher Antonio Gramsci. Hegemony refers to the dominance of one group over others in society, not through force but through the establishment of norms, beliefs, and values that come to seem natural or inevitable. Hegemonic power operates most effectively when it becomes invisible, when the status quo appears to be the only possible state of affairs.
AI discourse has become thoroughly hegemonic. The idea that AI is “inevitable,” that schools “must” integrate AI tools, that workers “will” be replaced unless they “adapt,” that nations “will fall behind” unless they embrace AI development: these are not neutral observations. They are ideological positions that serve specific interests, primarily the interests of the companies selling AI products and services.
When I wrote the original article, I argued that the ethical concerns explored throughout the Teaching AI Ethics series, from bias and environmental impact to privacy and human labour, were interconnected through their relationship to power. Bias reflects whose worldview dominates training data. Environmental impact falls disproportionately on already marginalised communities. Privacy erosion enables surveillance and control. Labour exploitation extracts value from workers in the Global South.
All of these issues remain urgent. But the consolidation of power that has occurred since 2023 means that addressing any of them has become more difficult. When a handful of companies control the infrastructure, the talent, the data, and increasingly the policy frameworks that govern AI, challenging their practices requires challenging concentrations of wealth and influence that rival nation-states.

The Scale of Consolidation
The numbers are difficult to wrap your head around. In October 2025, Microsoft announced that its investment in OpenAI was valued at approximately $135 billion, representing roughly 27 percent of OpenAI’s restructured public benefit corporation. Microsoft has invested more than $13 billion in OpenAI since 2019 and has embedded OpenAI’s technology throughout its product ecosystem, from Bing to Microsoft 365’s Copilot.
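Those two figures also imply a total valuation for the restructured company. A quick back-of-the-envelope calculation (the division is mine, not a number from either company's announcement):

```python
# If a $135 billion stake represents roughly 27 percent of the company,
# the implied total valuation follows from simple division.
stake_value = 135e9      # Microsoft's reported stake, in USD
stake_fraction = 0.27    # roughly 27 percent

implied_valuation = stake_value / stake_fraction
print(f"Implied OpenAI valuation: ~${implied_valuation / 1e9:.0f} billion")  # ~$500 billion
```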
Amazon has invested $8 billion in Anthropic, making it the largest external investor in OpenAI’s main competitor. Amazon Web Services is Anthropic’s “primary cloud and training partner,” providing the infrastructure on which Claude is trained and deployed. Amazon has also developed its own custom AI chips, branded “Trainium”, which Anthropic uses to train and deploy its models at lower cost than standard GPUs.
Google has also invested approximately $3 billion in Anthropic, including a new $1 billion investment in January 2025. In October 2025, Google Cloud signed a deal worth tens of billions of dollars to provide Anthropic with up to one million of its custom Tensor Processing Units.
Meta planned to spend $70 to $72 billion on AI infrastructure in 2025 alone, including data centres capable of housing more than 1.3 million GPUs. In June 2025, Meta acquired a significant stake in Scale AI for $14.3 billion and hired its founder to lead Meta’s new Superintelligence Labs. Meta also partnered with the image and video generation platform Midjourney in 2025, licensing its technology for Meta’s AI apps and products.
To put these figures in context: the combined AI infrastructure investments announced by just these four companies in 2025 would exceed the GDP of most countries. A December 2025 analysis identified the ten largest AI deals of that year; the top three alone, Stargate ($500 billion), Microsoft-OpenAI consolidation (~$300 billion), and Nvidia’s partnership with OpenAI (~$100 billion), totalled approximately $900 billion.
Case Study: OpenAI, Microsoft, and the $135 Billion Partnership
The relationship between OpenAI and Microsoft illustrates how AI power consolidates through interdependence. Microsoft’s investment in OpenAI began in 2019, when the company provided $1 billion to what was then still nominally a nonprofit research organisation. As OpenAI developed increasingly capable models, Microsoft embedded that technology throughout its product line, integrating GPT into Bing, Office, GitHub Copilot, and Azure.
By 2025, OpenAI and Microsoft were so intertwined that their partnership attracted the attention of competition regulators worldwide. The UK Competition and Markets Authority investigated whether Microsoft had acquired de facto control over OpenAI. The European Commission conducted a similar review. Ultimately, regulators concluded that Microsoft had not formally acquired control, but the investigation revealed the extent of Microsoft’s influence: according to the CMA, “Microsoft has acknowledged during the course of [the] investigation that it has held the ability to materially influence OpenAI’s policy since 2019.”
In October 2025, the partnership was restructured. OpenAI completed its conversion from a nonprofit-controlled “capped profit” company to a public benefit corporation with a nonprofit board retaining control. Microsoft’s investment was valued at $135 billion. Under the new agreement, Microsoft retains intellectual property rights over OpenAI’s models and products through 2032. OpenAI committed to purchasing $250 billion of Azure services, though Microsoft gave up its exclusive cloud provider status.
Most significantly, the new agreement established that if OpenAI claims to have achieved Artificial General Intelligence (the threshold at which, under the original agreement, Microsoft would lose access to OpenAI’s technology), that claim must be verified by an independent expert panel. This resolved what had reportedly been a major source of tension: Microsoft was worried that OpenAI could prematurely declare AGI to terminate Microsoft’s access.
The restructuring demonstrated both the depth of the entanglement and its limits. Microsoft gained security and equity appreciation. OpenAI gained operational autonomy and the ability to work with other cloud providers. But the fundamental dynamic remained: one of the world’s largest technology companies holds a 27 percent stake in and IP rights over what is arguably the most influential AI company in the world.

Case Study: Amazon, Google, and the Battle for Anthropic
While Microsoft pursued OpenAI, Amazon and Google competed for influence over Anthropic, the AI company founded by former OpenAI research executives. The competition illustrates how multiple pathways can lead to the same outcome: power concentrated in the hands of existing tech giants.
Amazon’s first investment in Anthropic came in September 2023, when it acquired a minority stake with an initial $1.25 billion. By November 2024, Amazon had invested a total of $8 billion. The investment made Amazon Web Services Anthropic’s “primary cloud and training partner.” Anthropic committed to using Amazon’s custom Trainium chips to train and deploy its largest models, giving Amazon not just financial return but deep integration into Anthropic’s technical infrastructure.
Google’s path was different but equally significant. Google invested $2 billion in Anthropic in 2023 and another $1 billion in January 2025. In October 2025, Google Cloud signed a deal to provide Anthropic with up to one million custom Tensor Processing Units, a commitment worth tens of billions of dollars. The deal will deliver over a gigawatt of AI compute capacity by 2026.
The result is that Anthropic, often positioned as a more “safety-focused” alternative to OpenAI, is deeply enmeshed with both Amazon and Google. Anthropic maintains what it calls a “multicloud architecture,” running its models across Google TPUs, Amazon Trainium chips, and Nvidia GPUs. This diversification provides some operational independence, but it also means that whichever cloud provider you choose, your Claude usage flows through and generates revenue for one of the world’s largest technology companies.
In September 2025, Anthropic completed a funding round valuing it at $183 billion. Alphabet and Amazon both reported billions in gains from their stakes. For the tech giants, Anthropic represents both a strategic investment and a hedge: if OpenAI falters, their money is on the leading alternative.
Stargate and the Infrastructure Arms Race
No project illustrates the scale of AI power consolidation more clearly than Stargate. Announced at the White House on January 21, 2025, the Stargate Project represents a $500 billion investment in AI infrastructure over four years, with $100 billion deployed immediately.
The initial partners were OpenAI, SoftBank, Oracle, and MGX (an investment firm from Abu Dhabi). SoftBank took financial responsibility, with CEO Masayoshi Son as chairman. OpenAI took operational responsibility. Technology partners included Microsoft, Nvidia, Oracle, and Arm.
The project moved faster than expected. By September 2025, the flagship data centre in Abilene, Texas, was operational, with Oracle Cloud Infrastructure and racks of Nvidia chips already running early training workloads. Five additional sites were announced across Texas, New Mexico, Ohio, Wisconsin, and Michigan. By October 2025, the project had reached over 8 gigawatts of planned capacity and more than $450 billion in committed investment.
Ten gigawatts of AI computing capacity, the project’s stated target, represents enough power to supply roughly 7.5 million homes. A single Stargate campus would require more power than some small cities. The environmental implications alone are profound: these data centres will consume electricity at rates that will significantly affect regional and national power grids.
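As a rough sanity check on the homes figure, here is a minimal sketch; the household consumption number is my assumption, not from any Stargate documentation:

```python
# Rough check of "10 GW powers roughly 7.5 million homes".
# Assumes an average household electricity use of ~11,700 kWh/year,
# i.e. a continuous draw of ~1.33 kW. This is an illustrative figure.

HOURS_PER_YEAR = 24 * 365
household_kwh_per_year = 11_700
household_kw = household_kwh_per_year / HOURS_PER_YEAR   # ~1.33 kW

capacity_kw = 10 * 1_000_000    # 10 GW expressed in kW

homes_millions = capacity_kw / household_kw / 1e6
print(f"~{homes_millions:.1f} million homes")   # ~7.5 million
```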
But beyond the technical specifications, Stargate represents something more fundamental: the privatisation of AI infrastructure on a scale that will shape the technology’s development for decades. When a single company or consortium controls the physical infrastructure on which AI is trained and deployed, they control the pace and direction of AI development. They determine which research gets done, which models get trained, which applications get prioritised.
OpenAI and its partners announced the project at the White House alongside President Trump, framing it as a “patriotic” investment in American competitiveness against China. Whether you consider that framing accurate or cynical, the political alignment illustrates how AI companies have cultivated relationships with government that blur the line between public policy and private interest.
When Tech Companies Write Policy
The consolidation of AI power extends beyond infrastructure and investment into policy and governance. In 2025, the Trump Administration issued a series of executive orders and policy directives explicitly designed to limit state-level regulation of AI and establish what it called a “minimally burdensome national standard.”
The most significant was an Executive Order issued on December 11, 2025, titled “Ensuring a National Policy Framework for Artificial Intelligence.” The order directed the Attorney General to establish an “AI Litigation Task Force” to challenge state AI laws deemed inconsistent with federal policy. It directed federal agencies to evaluate state laws and potentially condition discretionary grants on states refraining from enacting or enforcing “conflicting” AI regulation.
The order built on earlier efforts. A provision in the administration’s signature tax legislation, known as the “Big Beautiful Bill,” would have imposed a five-year moratorium preventing states from regulating AI. That provision was ultimately struck from the bill in a 99-1 Senate vote due to procedural issues and bipartisan concerns about erosion of state authority. But the December Executive Order pursued similar goals through administrative rather than legislative means.
The policy direction was clear: make it easier for AI companies to operate without state-level constraints. The stated rationale was regulatory consistency, the argument that fifty different state regimes create compliance burdens that stifle innovation. But critics noted that the states being targeted, particularly California, were attempting to impose transparency requirements, safety standards, and anti-discrimination provisions that the AI industry opposed.
Just two days before the Executive Order was issued, a bipartisan coalition of 42 state attorneys general sent a letter to major AI companies urging improved safeguards for children and mitigation of harmful content. The letter illustrated the tension: states were attempting to protect their residents from AI harms, while the federal government was attempting to prevent states from doing so.
The intersection of AI power and education policy has also become increasingly direct. In July 2025, OpenAI released an “Economic Blueprint” for Australia, a 15-page document containing “recommendations” and “advice” for Australian government policy across multiple sectors, including education.
I wrote about this in detail at the time, and my concerns have only grown. The Blueprint’s education section began from a deficit view, citing declining PISA and NAPLAN scores to establish that Australian education was failing. The solution, naturally, was AI: “AI offers a powerful way to help reverse this decline.” OpenAI provided no evidence for this claim because no evidence exists.
The Blueprint’s policy proposals included recommendations that Australian education authorities were already implementing, suggesting that OpenAI had not actually researched what existed. But research was not the point. The point was to insert OpenAI into policy conversations, to establish the company as a legitimate voice in discussions about Australian education despite having no expertise in Australian education.
This pattern repeated globally. In the US, the June 2025 “Pledge to America’s Youth: Investing in AI Education” was signed by over 60 organisations, more than half representing big tech companies or their venture capital funders. The signatories included Accenture, Adobe, Amazon, Apple, Google, HP, IBM, Intel, Meta, Microsoft, NVIDIA, OpenAI, Oracle, Qualcomm, Salesforce, and others. These companies have taken it upon themselves to determine the future of AI in American education.
In August 2025, OpenAI partnered with Instructure, the company that owns the Canvas Learning Management System, to embed ChatGPT functionality directly into the most widely used LMS in higher education. The partnership makes strategic sense for both companies: Instructure gains AI features to compete with rivals, while OpenAI gains access to millions of students and teachers who are often required by their institutions to use Canvas.
OpenAI also released “study mode”, a watered-down version of ChatGPT marketed as an educational tool that would refuse to give direct answers. In practice, the feature was pedagogically questionable and often failed to work as advertised. But its existence gave OpenAI talking points: “We’re not undermining education, we’re helping students learn.”
The pattern is familiar from earlier waves of educational technology. Companies with no educational expertise produce products of questionable pedagogical value, then use marketing, partnerships, and policy influence to embed those products in schools regardless of evidence about their effectiveness. The difference with AI is scale: the companies involved are among the wealthiest in human history, and their products are more deeply integrated into teaching and learning than any previous technology.
What Has Changed Since 2023?
Since the original 2023 article, several significant developments have shifted the landscape:
First, the scale of investment has exceeded all predictions. When I wrote the original article, large AI investments were measured in billions. Now they are measured in hundreds of billions. The $500 billion Stargate project alone represents an investment larger than the GDP of most countries. This scale changes everything: the barriers to entry, the political influence of AI companies, the environmental footprint, the concentration of talent.
Second, corporate structures, which in 2023 still seemed fluid amid competition from startups and open-source applications, have crystallised. The ambiguity about OpenAI’s nonprofit status has resolved into a public benefit corporation with clear ownership stakes. Microsoft owns 27 percent of OpenAI. Amazon and Google collectively hold major stakes in Anthropic. The era of AI “labs” operating outside normal corporate structures is over.
Third, government-industry alignment has deepened. The Trump Administration’s approach to AI policy has been explicitly pro-industry, from the January 2025 executive order removing AI regulatory barriers to the December 2025 order attempting to preempt state regulation. AI companies have cultivated this alignment through policy proposals, White House appearances, and promises of American competitiveness.
Teaching Power and AI
In the original 2023 collection, each article ended with a selection of ideas for teaching the issue in the context of existing curriculum areas. These 2026 updates similarly align concepts from the articles to standards from typical curricula across the world, and in particular the Australian, UK, US, and IB curricula. For readers teaching in Higher Education, these examples will also be suitable across a wide range of disciplines.
My key point remains that we do not need specialised “AI literacy” classes to deliver quality instruction on AI ethics. We already have the expertise we need in schools and universities.
English
English students analyse how language constructs meaning, identity, and power relations. AI discourse provides rich material for exploring how corporate interests become “common sense.” Students might ask “How does the language of ‘inevitability’ shape how we think about AI?”, “Whose voices are amplified in AI discourse, and whose are marginalised?”, or “How do AI companies use language to position themselves as educators, policymakers, or public servants?” Analysis of corporate communications, policy documents, or media coverage can reveal rhetorical strategies at work.
Economics / Business Studies
Economics students study market structures, competition, and the distribution of wealth. AI consolidation raises fundamental questions: “What happens to competition when a handful of companies control critical infrastructure?”, “How do network effects and economies of scale create barriers to entry in AI?”, or “Who benefits from the productivity gains promised by AI, and how are those gains distributed?” Students could analyse the financial structures of AI partnerships, examine investment patterns, or model the economic implications of AI-driven automation.
History
History students examine how technological change intersects with political and economic power. AI consolidation can be contextualised within broader patterns: “How does AI consolidation compare to earlier technological monopolies, such as Standard Oil or AT&T?”, “What role have governments historically played in either enabling or constraining technological monopolies?”, or “How have concentrated technologies, from the printing press to television, shaped political discourse and power?” Students could research historical parallels and analyse what lessons apply to the current moment.
Digital Technologies / Computer Science
Computer science students study systems architecture, design choices, and their societal implications. The infrastructure of AI raises technical and ethical questions: “What are the technical barriers to entry in frontier AI development?”, “How do choices about chip architecture, training data, and deployment infrastructure concentrate or distribute power?”, or “What alternative models of AI development might distribute power more broadly?”
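To make the “barriers to entry” question concrete, students could work through a back-of-the-envelope training cost estimate. The sketch below uses the widely cited approximation that training compute is about six FLOPs per parameter per token; every number in it is an illustrative assumption, not a figure disclosed by any company:

```python
# Back-of-the-envelope cost of training a frontier model, using the
# common approximation: training FLOPs ≈ 6 × parameters × tokens.
# All values are illustrative assumptions, not disclosed figures.

params = 1e12                 # assume a 1-trillion-parameter model
tokens = 15e12                # assume 15 trillion training tokens
total_flops = 6 * params * tokens              # 9e25 FLOPs

# Assume a modern accelerator sustains ~5e14 FLOP/s in practice
# (about half its theoretical peak, a typical utilisation figure).
sustained_flops_per_gpu = 5e14
gpu_hours = total_flops / sustained_flops_per_gpu / 3600   # ~50 million

cost_per_gpu_hour = 3.0       # assumed cloud rental rate, USD
cost_usd = gpu_hours * cost_per_gpu_hour

print(f"GPU-hours: {gpu_hours:.1e}")                      # ~5.0e+07
print(f"Compute cost: ~${cost_usd / 1e6:,.0f} million")   # ~$150 million
```

Even under these rough assumptions, a single training run costs well over a hundred million dollars in compute alone, before staff, data licensing, or failed experimental runs are counted, which helps explain why frontier development is confined to a handful of companies.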
Geography / Environmental Studies
Geography students study resource distribution, environmental impact, and spatial patterns of development. AI infrastructure has profound geographical implications: “Where are AI data centres being built, and why?”, “How do AI infrastructure projects affect local communities, water resources, and power grids?”, or “What are the environmental justice implications of AI’s energy consumption?” Students could map AI infrastructure, analyse environmental impact assessments, or examine how the costs and benefits of AI development are distributed geographically.
Civics and Citizenship
Civics education addresses rights, governance, and the relationship between citizens and institutions. AI power concentration raises fundamental democratic questions: “What happens to democratic governance when private companies have more resources than nation-states?”, “How should governments regulate industries that are attempting to write their own rules?”, or “What rights do citizens have to challenge AI systems that affect their lives?”
Legal Studies
Legal Studies students examine how law responds to new technologies and concentrations of power. AI raises novel legal questions: “How do existing antitrust frameworks apply to AI partnerships?”, “What legal mechanisms exist to challenge AI-related harms?”, or “How do AI companies use legal structures, from nonprofit conversions to cross-licensing agreements, to consolidate power?”
Politics / International Relations
Politics students study power, influence, and the relationship between states and non-state actors. AI consolidation complicates traditional frameworks: “How do AI companies exert influence on government policy?”, “What are the geopolitical implications of AI infrastructure concentration in the United States?”, or “How does the US-China AI competition shape domestic and international policy?”
Philosophy / Ethics
Philosophy students explore questions of justice, power, and moral responsibility. AI consolidation raises fundamental issues: “Is it possible for AI to be developed ethically under conditions of extreme power concentration?”, “What obligations do the powerful have to those affected by their decisions?”, or “How does the concept of hegemony help us understand the ‘inevitability’ narrative around AI?” Students could engage with Gramsci’s theory of hegemony or examine the ethical frameworks AI companies claim to follow versus their actual practices.
Theory of Knowledge (IB)
TOK students examine how knowledge is produced, validated, and distributed. AI power concentration affects knowledge itself: “Who gets to define what AI is and what it should do?”, “How does funding shape AI research priorities and findings?”, or “What happens to knowledge production when a handful of companies control the most powerful tools?” These questions connect to TOK themes about the relationship between power and knowledge, the role of institutions in knowledge production, and the ethics of knowledge systems.
Obviously this is a non-exhaustive list of ideas, and although I am an English and Literature teacher myself, I am certainly not a subject-matter expert in every domain! If you have other ideas or ways you have taught about AI and power, then please use the contact form at the end of this post to get in touch.
Note on the images: In the 2023 version of Teaching AI Ethics I generated images in Midjourney. For these updated articles, I have sourced images from https://betterimagesofai.org/. Better Images of AI includes an excellent range of photos, illustrations, and digital artworks which have been generously licensed for commercial and non-commercial use.
Cover image for this article: Jamillah Knowles & We and AI / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
All articles in this series are released under a CC BY-NC-SA 4.0 licence.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
