The rapid rise of artificial intelligence hasn’t happened in a vacuum. Its growth has been fuelled in many ways by a specific source: social media. This isn’t just a story of technological progress – it’s a tale of data, addiction, power, and a deliberate blurring of the lines between human and machine relationships.
The Birth of the Dataset
Although artificial intelligence has been an official field of study and research since the 1950s, this particular story starts in the 2000s with the explosion of social media. Facebook, Twitter, and later Instagram and TikTok began as ways to connect with friends and colleagues, and offered a new way to interact with the media. Over time, they transformed into huge repositories of human behaviour, preferences, and interactions. Social media also contributed to the rise of internet use in general, and to the “web 2.0” idea of users as creators. This data, freely given by billions of users, would fuel the growth of the internet and become the lifeblood of modern AI systems.

Consider Meta (formerly Facebook), with a staggering 2.9 billion monthly active users. This massive dataset of human communication and behaviour, combined with Meta’s considerable financial and computational resources, has positioned the company as a powerhouse in AI research and development. The recent release of Llama 3.1, Meta’s open-source large language model, is a direct product of this advantage. Although Meta claims they do not train their systems on private posts, they have certainly been training on publicly shared materials from platforms like Facebook and Instagram since 2007.
But Meta isn’t alone in the data gold rush. ByteDance, the parent company of TikTok, has leaned into its expertise in creating engaging (addictive) user experiences to launch several AI apps, including Cici AI, Coze, ChitChop, and BagelBell. Along the way, it has been accused of using OpenAI’s technology to help train its own competing large language model.
These developments help underscore a crucial point: only big tech companies, with their vast data reservoirs and resources, have the capacity to build truly powerful AI systems. The implications of this are profound, raising questions about the concentration of power in the tech industry and the potential for monopolistic control over AI development.
The Addiction Algorithm
As well as collecting phenomenal amounts of data, social media platforms have also perfected algorithms that keep users engaged for hours, often at the expense of their wellbeing. Now, as these companies pivot towards AI, there’s a real risk that these same addictive principles will be applied to AI systems.
Recent research from MIT Technology Review explores a growing trend: AI companionship is no longer just the topic of Sam Altman’s favourite science fiction movies. Analysis of a million ChatGPT interaction logs reveals that the second most popular use of AI is sexual role-playing. The chatbot platform character.ai claims its users log in for an average of two hours a day, and the Google-backed startup has just been all but (re)acquired by Google itself. Many people are already inviting AIs into their lives as friends, lovers, mentors, therapists, and teachers.
The allure of AI companions lies in their seeming ability to identify our desires and serve them up to us whenever and however we wish. Unlike human interactions, AI can endlessly generate realistic content optimised to suit the precise preferences of whoever it’s interacting with (or at least, that’s the product we’re being sold). This creates an echo chamber of affection that threatens to be extremely addictive.
As I’ve argued before, chatbots aren’t the future of AI in education. But the addictive potential of these technologies raises serious questions about their role in our lives and society more broadly.

The Silicon Valley Effect
The story of AI’s rise is incomplete without mentioning the Silicon Valley ecosystem that nurtured it. Sam Altman, the controversial figure at the helm of OpenAI, is a product of this environment. His journey from Y Combinator to OpenAI illustrates how the ethos of the tech world, including its roots in social media startups, has shaped AI development.
Altman’s approach to AI development at OpenAI, including the pursuit of rapid growth and disruptive innovation, likely bears the imprint of his experiences in the social media-dominated Silicon Valley startup world. The recent drama surrounding his firing and reinstatement at OpenAI underscores the complex dynamics at play in the AI industry, where egotistical leadership often clashes with ethical considerations and corporate governance.
This pattern of behaviour isn’t unique to Altman. It reflects a broader trend in Silicon Valley, where the “move fast and break things” mentality of social media startups has been applied to the development of powerful AI systems. The potential consequences of this approach are far-reaching and potentially dangerous.
Elon Musk’s Social Media Playground
No discussion of social media and AI would be complete without mentioning Elon Musk and his acquisition of Twitter (now X). Musk’s influence over X and his simultaneous involvement in AI development through companies like xAI create a troubling mix of social media reach and AI capabilities.
The recent introduction of an AI image generation feature on X, with minimal safeguards, exemplifies the potential dangers of this combination. The ease with which users can create and spread misleading or offensive content raises serious concerns about the platform’s role in disseminating misinformation, demonstrating how social media platforms can become testing grounds for potentially harmful AI technologies.
This development is particularly concerning given X’s global reach and influence, and Musk’s own notorious attitude towards sharing and even promoting misinformation under the guise of “disruption”. The platform’s potential to rapidly spread AI-generated misinformation could have significant societal and political implications, underscoring the need for robust safeguards and ethical guidelines in the deployment of AI technologies on social media platforms.
The Future of AI and Social Media
It’s clear to anyone paying attention that the relationship between social media and AI will only grow stronger. The vast datasets accumulated by social media platforms will continue to fuel AI development, including increasing forays into the collection of multimodal data. AI systems will increasingly shape our social media experiences and beyond.
In articles on “Digital Plastic”, I’ve discussed how the proliferation of AI-generated content risks creating a kind of “digital pollution” that could fundamentally alter our online experiences and information ecosystems.
The key to handling this future lies in open dialogue between tech companies, regulators, and the public. We need transparent AI development practices, robust regulatory frameworks, and increased digital literacy among users. As the MIT Technology Review article suggests, we may need to consider innovative regulatory approaches, such as embedding safeguards directly into technical designs or implementing dynamic policies that adapt to users’ mental states.
Above all else, we cannot allow the tech companies to write their own rules: it took ten years of social media before we realised what a huge mistake that was.
Want to learn more about Generative AI professional learning, consulting, and advisory services, or just have questions or comments? Get in touch:
