OK this one is a bit technical, but even if you couldn’t care less about code I want you to stick with me until the end…
Over the past few years, I’ve written a lot about what I want AI to be able to do. For example, in mid-2024, I wrote about trying to make a project management tool with Claude and Todoist. The idea was to use Claude’s coding capabilities to run a little code widget that could automatically add and complete tasks, manage projects and labels, and so on. It worked… sort of. Overall it was too heavy of a lift, and it would have required constant tweaks and updates that I couldn’t be bothered to maintain.

I’ve also written about how I use GenAI to make little useful automations, and to write code that scrapes this blog to create an accessible backup version of all my posts. In short, I want AI that does useful, tedious, administrative stuff, and that helps me to centralise all of my various applications so that I don’t have to jump all over the place and interrupt my work.
Instead, what we’ve been given is hype, “erotic chatbots”, shopping recommendations and now, targeted ads.
But it doesn’t have to be this way. One of the most useful developments in GenAI infrastructure over the past twelve months has been the emergence of MCP: the Model Context Protocol. If you’ve been using Claude’s desktop app, ChatGPT Plus, or similar applications, you may have noticed “connectors” appearing that let the AI interact with external services like Todoist, Canva, or your email. MCP is the technology that makes this possible.
I spent a few hours on the weekend building my own MCP servers to connect Claude to my private data, and the experience taught me a lot about where this technology is heading and why it matters.
What is MCP?
MCP stands for Model Context Protocol. At its simplest, it’s a standardised way for AI assistants to communicate with external data sources and tools. Think of it like a universal translator that lets Claude (or other AI applications) talk to your databases, your CRM, your content management system, or really any software that has an API.
The protocol was developed by Anthropic and released as an open standard, meaning anyone can build MCP servers for their own applications. It’s deliberately simple and a lot of software engineers are amazed that it’s become so hyped: it’s essentially JSON messages passed back and forth between the AI and your local server. Nothing flashy.
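For the curious, here’s a rough sketch (written as Python for readability) of the kind of JSON-RPC messages that pass back and forth. The `search_clients` tool name is a made-up example, not part of any real server:

```python
import json

# A hedged sketch of the JSON-RPC 2.0 messages MCP exchanges.
# "search_clients" is a hypothetical tool name used for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_clients",
        "arguments": {"query": "smith"},
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request it answers
    "result": {
        "content": [{"type": "text", "text": "2 clients matched 'smith'"}],
    },
}

# On the wire, each message is simply a line of JSON.
wire = json.dumps(request)
print(wire)
```

That really is all the “protocol” amounts to: structured messages in, structured messages out.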
But despite its simplicity, MCP offers a few huge benefits over my “circa 2024” way of doing things. Instead of copying and pasting data into a chat window, or uploading files manually, you can give the AI model direct (but controlled) access to your systems. The AI can query a database, search documents and folders, and take actions like creating drafts, all within a single conversation.
Why Build Your Own?
The pre-built connectors that come with applications like Claude’s desktop app are useful, but they’re generic. They connect to services that millions of people use: Gmail, Todoist, Canva. What if you want to connect to your own private database, or something like a folder of transcripts on your local machine?
That’s why I chose to try building a custom MCP server. It’s possible to build a server that exposes just the tools you want, with exactly the level of access you’re comfortable with, including read-only if you prefer. The AI calls the tools, the tools talk to your data, and the results are passed back into the conversation.
The security of MCP has been problematic from the start, however, and it’s worth pointing out that just over a year on from its release, MCP still has some red flags, including some high-profile data breaches. This is not something I’d recommend an enthusiastic amateur (like me) carry out in an enterprise environment.
But my setup has a few extra safeguards. My MCP server runs locally on my machine. There’s no cloud, no network port exposed to the internet. Just a subprocess that Claude’s desktop application launches and communicates with via standard input and output. My data stays on my machine, my credentials never leave that environment, and I control exactly what the AI can and cannot do. Hopefully…

A Weekend Project
I won’t pretend building the MCP servers was trivial, but it’s certainly achievable for anyone with a bit of technical comfort. My weekend project involved connecting Claude to an Airtable client database, my Todoist app, Gmail, and WordPress. The goal was simple: I wanted Claude to help me cross-reference new email contacts against my existing client list, and to push formatted posts directly from Claude to my blog as drafts ready for review. I also wanted to be able to add and complete tasks in Todoist related to both my client data and my blog.
The actual building was straightforward. The MCP specification is well-documented, and Anthropic provides SDKs for Python and Node.js that handle all the protocol details. You’re essentially writing little wrap-around functions: “when Claude calls search_clients, query the database and return the results.”
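To give a flavour of that pattern, here’s a minimal sketch using an in-memory SQLite table as a stand-in for the real database. The table, column names, and `search_clients` function are all hypothetical; with Anthropic’s SDK you’d register the function as a tool rather than calling it by hand:

```python
import sqlite3

def search_clients(conn: sqlite3.Connection, query: str) -> list[dict]:
    """Read-only lookup: return clients whose name contains `query`."""
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT name, email FROM clients WHERE name LIKE ?",
        (f"%{query}%",),
    ).fetchall()
    return [dict(r) for r in rows]

# Tiny in-memory stand-in for the real client database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE clients (name TEXT, email TEXT)")
db.execute("INSERT INTO clients VALUES ('Jane Smith', 'jane@example.com')")

print(search_clients(db, "smith"))
```

The wrapper does one job, returns plain data, and has no write access at all, which is exactly the kind of narrow surface you want to hand to an AI.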
What wasn’t straightforward was all the fiddly configuration around it. Here’s a taste of what the weekend actually looked like. At this point, feel free to skip to the end if you’d rather gloss over the technical stuff.
Trial and Error
Let’s get the rookie “I’m obviously not a coder” stuff out of the way. Copying configuration examples from the documentation introduced invisible characters, like curly quotes from formatted text where plain quotes should be. The JSON validator was not impressed, and neither was I when I realised I’d been staring at “smart quotes” for twenty minutes.
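If you’re wondering what that failure actually looks like, here’s a small sketch showing curly quotes breaking `json.loads`, plus the quick normalisation that fixes them:

```python
import json

# Pasting config from formatted text can swap plain quotes for curly ones.
bad = '{\u201ccommand\u201d: \u201cpython\u201d}'   # smart quotes
good = '{"command": "python"}'                      # plain quotes

try:
    json.loads(bad)
except json.JSONDecodeError:
    print("invalid JSON: smart quotes strike again")

# Normalise the four common curly-quote characters to plain ones.
fixed = bad.translate(str.maketrans('\u201c\u201d\u2018\u2019', '""\'\''))
print(json.loads(fixed))
```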
Field names matter in databases, apparently. My database has fields like “Email Address” and “First Name”, but I’d blindly copied example code that expected “Email” and “Name”. The server connected fine but returned nothing. A classic case of the code doing exactly what I told it to do, rather than what I meant. Stupid code.
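In Python terms, the failure mode looks something like this (the record and field names are made up to match the story):

```python
# A record with the real field names, queried with the example code's names.
record = {"Email Address": "jane@example.com", "First Name": "Jane"}

# The mismatched lookup doesn't raise an error; it just returns nothing.
print(record.get("Email"))  # None

# A cheap guard that would have saved some staring at the screen:
def missing_fields(expected: list[str], record: dict) -> list[str]:
    """Return expected field names absent from the record's schema."""
    return sorted(set(expected) - record.keys())

print(missing_fields(["Email", "Name"], record))  # ['Email', 'Name']
```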
Once past the configuration hurdles, the actual functionality was genuinely impressive. Asking Claude to “list ten random clients from my database” and watching it actually query my database and return results felt like a small magic trick after years of “chatbot in a box” AI.
The voice-to-blog workflow I’d imagined came together nicely. I can dictate articles using a transcription app, have Claude format and tidy them, then push the result straight to WordPress as a draft. What previously involved manual copying between three or four applications (Otter, Claude, sometimes ChatGPT, WordPress) now happens in a single conversation.
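For anyone wanting to try the last step themselves, here’s a hedged sketch of pushing a draft via the WordPress REST API. The site URL, username, and application password are placeholders, and the important detail is that `status` stays at `"draft"` so nothing publishes itself:

```python
import base64
import json
import urllib.request

def draft_payload(title: str, html: str) -> dict:
    """Build the body for POST /wp-json/wp/v2/posts; status stays 'draft'."""
    return {"title": title, "content": html, "status": "draft"}

def push_draft(site: str, user: str, app_password: str, payload: dict):
    """Send the draft using WordPress application-password Basic auth."""
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    req = urllib.request.Request(
        f"{site}/wp-json/wp/v2/posts",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )
    return urllib.request.urlopen(req)  # network call; not executed here

payload = draft_payload("Dictated draft", "<p>Tidied by Claude.</p>")
print(payload)
```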
Cross-referencing data sources proved immediately useful. “Check my recent emails against my client list and identify any contacts I should add” is exactly the kind of tedious task that AI assistants should be handling, and with MCP it can actually see both data sources. It correctly flagged half a dozen emails from the last 48 hours where I had no entry in Airtable for the client. It also found three out-of-date entries for contacts who had changed roles – something I didn’t explicitly ask for.
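Under the hood, once both tools have returned their data, the comparison itself is just a set difference. A toy sketch with made-up addresses:

```python
# Addresses returned by two hypothetical tools: recent email senders
# and known clients. The cross-reference is a plain set difference.
recent_senders = {"new.lead@example.com", "jane@example.com", "cfo@example.org"}
known_clients = {"jane@example.com"}

to_add = sorted(recent_senders - known_clients)
print(to_add)  # ['cfo@example.org', 'new.lead@example.com']
```

The clever part isn’t the comparison, it’s that the AI can now fetch both sides itself and explain the result in plain language.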

The Over-Engineering Trap
I also learned something about knowing when not to use MCP.
My original plan included an MCP server to “watch” a folder of transcripts and pull them into conversations automatically. I’d sketched out the code, planned the configuration, and was about to start building when a thought struck me: the Claude Desktop application can already read files from my filesystem directly. I didn’t need a server to watch a folder. I could just… point Claude at the folder.
This is an easy trap to fall into. MCP is a hammer, and suddenly everything looks like a nail. But the technology is specifically useful when you need API access, authentication that Claude can’t handle natively, or custom logic beyond file reading. For “read files, apply my standard prompt, output formatted text”, you already have everything you need.
Security Considerations
I’ve alluded to security a few times, and it’s worth addressing directly. The MCP model has some sensible defaults:
The server runs locally as a subprocess. There’s no network exposure unless you explicitly create it. The communication happens via standard input/output, which means nothing is listening on a port that could be probed or attacked.
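Here’s a small demonstration of that transport, with a trivial echo process standing in for a real MCP server. Nothing touches the network; the two sides simply exchange newline-delimited JSON over pipes:

```python
import json
import subprocess
import sys

# The "server": reads one JSON message from stdin, replies on stdout.
# A real MCP server would dispatch tool calls here instead of echoing.
child_code = (
    "import sys, json\n"
    "msg = json.loads(sys.stdin.readline())\n"
    "print(json.dumps({'id': msg['id'], 'result': 'ok'}))\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = proc.communicate(json.dumps({"id": 1, "method": "ping"}) + "\n")
print(out.strip())  # {"id": 1, "result": "ok"}
```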
You control which tools are exposed. If you only want read access to your database, you only write read functions. If you want Claude to create records, you add a create function. The AI can’t do anything you haven’t explicitly permitted.
However, in this setup, your credentials end up in configuration files on your local machine. Anyone with access to your user account (or your backups) could read those credentials. For most personal setups this is acceptable: you’re trusting your own machine’s security. But it’s worth being aware of, and if you’re particularly security-conscious, there are ways to use system keychains instead.
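For context, the desktop app’s configuration looks roughly like this (the server name, path, and key below are placeholders). Note where the API key sits, in plain text:

```json
{
  "mcpServers": {
    "airtable": {
      "command": "python",
      "args": ["/path/to/server.py"],
      "env": {
        "AIRTABLE_API_KEY": "plain-text-secret-lives-here"
      }
    }
  }
}
```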
One final note: if you’re thinking of deploying MCP servers for others to use, the security model changes significantly. Remote MCP servers introduce authentication challenges, network exposure, and hosting considerations that go well beyond a weekend project. For personal, local use the story is simple. For anything else, proceed with appropriate caution. Honestly, I still think the protocol isn’t quite mature enough for an enterprise setting, and especially not for a high-risk setting like education.

Where this is Heading
Having played around for a few hours, I can see the broader trend clearly: AI assistants are evolving from isolated chat interfaces into something more like intelligent operating systems: applications that can see your data, take actions on your behalf, and integrate into existing workflows.
I’m not even going to write about Moltbot (aka ClawdBot aka OpenClaw): the viral “agent” going wild on social media – including its own social media platform called Moltbook…

This raises all sorts of questions about trust, control, and the boundaries we’re comfortable with. I’m instinctively cautious about giving any AI applications too much autonomy, particularly when it comes to actions that are difficult to undo. But the read-and-summarise use cases, the cross-referencing of data sources, the formatting and routing of content: these feel genuinely useful.
Eventually, all of this will happen at the click of a button. We’re already used to APIs working behind the scenes: I’m sure every one of you has used Single Sign On via Google or Microsoft to connect applications together at some point. MCP is really just an extension of that, built expressly for GenAI. It’s only a small step from the APIs that give Canva access to your Google Sheets, or those that allow you to transfer data between a student admin system and an LMS.
But the introduction of a genuinely useful GenAI model like Claude Opus 4.5 into this loop feels very different. As the technology matures, and as security issues are handled more directly by developers, here are a few things I can imagine we’ll see in education:
- Cross-referencing student data across platforms: Imagine an AI assistant that can query your student information system, LMS, and communication tools simultaneously. A teacher could ask “Which students in Year 10 English have missed more than three classes this term and haven’t submitted their essay?” and get an instant answer that would otherwise require pulling data from Compass, Canvas, and the pastoral care system separately.
- Intelligent timetabling and resource allocation: School timetables are notoriously complex. An AI with MCP access to room bookings, teacher loads, curriculum requirements, and student electives could not only generate timetable options but explain the trade-offs and adjust in real-time when constraints change. Need to accommodate a last-minute excursion? The assistant could propose alternatives and flag any flow-on impacts. Importantly, this would use traditional algorithms in timetabling software to do the grunt work, and the GenAI platform to interpret the result.
- Streamlined professional development tracking: Teachers juggle compliance requirements, professional learning goals, and career progression documentation across multiple systems. An MCP-enabled assistant could monitor PD hours, flag upcoming requirements, suggest relevant opportunities based on stated goals, and even pre-fill accreditation paperwork by pulling evidence from classroom observations and completed courses.
- Parent communication workflows: The endless back-and-forth of permission slips, absence notifications, and meeting bookings could be dramatically simplified. An AI that can see the school calendar, individual student records, and email threads could draft contextually appropriate responses, schedule meetings when both parties are available, and maintain a consistent communication trail.
- Curriculum mapping and resource discovery: Teachers spend enormous amounts of time hunting for resources and ensuring curriculum alignment. An MCP-connected assistant could cross-reference learning outcomes with existing resources in the school’s content library, suggest gaps in coverage, and make recommendations based on things like curriculum updates.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
