Part 1: I’ve Been a Platform PM for 6 Years - I’m Open-Sourcing Everything I Know.
Introducing The Platform Space AI Agent
Why?
Six years is a long time to specialise in anything.
I’ve spent mine working as a Platform PM across companies at very different stages — different sizes, different levels of platform maturity, different levels of organisational buy-in for the idea that an internal platform should be treated like a product. I’ve seen what works. I’ve seen what quietly fails for years before anyone notices. And I’ve accumulated a lot of opinions along the way.
For a while, most of that lived in my head. Some of it made its way into this newsletter. But I always felt like I was sharing the surface - the articles, the frameworks, the tidy conclusions - rather than the actual thinking underneath. And for a while it was also difficult to keep up with the newsletter at all.
That’s what I want to change.
A few months ago I started building a Claude agent and putting everything I know into it. It was a slow burn at first - I wasn't sure how to structure it - but once I saw how accessible the tooling has become, I got to work. Not as a productivity hack. As an attempt to genuinely open-source my knowledge - to make the frameworks, the mental models, the hard-won intuitions about platform work accessible to PMs who are earlier in this journey, and to leaders who are still figuring out why their platform team isn't delivering the value they expected.
This is a three-part series about how I did it, what surprised me, and what I think it means for how we should all think about knowledge in platform or infrastructure work.
Let’s get into it.
The problem with Platform PM knowledge
Platform PM is one of those roles you rarely choose on day one. I was lucky enough to choose it deliberately, and I have loved every second of it.
Most people get thrown into it - handed a team, a backlog, a set of engineers who’ve been building internal tooling for years without anyone calling it a product - and expected to figure it out. There’s no obvious career path into it. There’s no bootcamp. Most of what exists is scattered across blog posts, conference talks, and the heads of the small community of people who’ve been doing it long enough to have opinions. Again - I am lucky to know some really strong ones.
I’ve watched a lot of PMs land in this space and start from scratch. Not because they weren’t talented. But because the knowledge they needed wasn’t written down anywhere. They’d ask questions I’d answered a hundred times before. Not because they hadn’t tried to find the answer - but because the answer was in my head, not on the internet.
The other thing I noticed, over six years, is how much knowledge gets lost every time someone changes roles. You build a mental model for how to get developer adoption in a resistant organisation. You figure out how to position your platform as an accelerator rather than a cost centre. You learn, the hard way, that the loudest complaints in your backlog are rarely the most important ones.
And then you move on. And some other PM starts from scratch.
That’s not how it should work.
What I’m actually building
The agent lives in a file called CLAUDE.md — a knowledge base that Claude Code reads at the start of every session. Think of it as a briefing document for an intelligent collaborator: here’s who I am, here’s how I think, here’s what I know, here’s what I care about.
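As a rough illustration, a CLAUDE.md along these lines might look like the sketch below. The headings here are my own shorthand, not a prescribed format - Claude Code simply reads the whole file as context, so the structure is whatever makes your thinking legible:

```markdown
# Who I am
Platform PM, six years across API, data/ML, and AI platforms.

# How I think
- Diagnose before you prescribe.
- Make the right thing the easy thing.
- Platform is a force multiplier, not direct output.

# Frameworks I use
## Adoption ladder
Aware -> Experimenting -> Shipped -> Scaled

## Prioritisation
Weigh business value, developer impact, effort, and risk together.

# How to help me
When I ask for a business case, anchor it in real outcomes,
not generic advice. Always ask what "good" looks like first.
```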
Building it forced me to do something I’d never properly done before: write down everything.
Not the polished stuff. Not the article-ready frameworks. The actual thinking. The mental models I use when a problem is ambiguous. The questions I always ask before making a recommendation. The patterns I’ve seen across different organisations and different stages of platform maturity.
I think I have enough for a book! Maybe?
What ended up in the CLAUDE.md was more than I expected:
The career context that shapes how I think. Six years across four very different environments - an API platform at a startup that went through an acquisition, a data and ML platform serving >10 product teams, a data science platform in a government environment where I had no formal authority and built >70% adoption through influence alone, and my current role building an AI platform strategy. Each of those contexts taught me something different about what platform PM actually is.
Real outcomes, with real numbers. £1.6M ARR created across two API platform roles. An ML deployment process reduced from four weeks to two and a half days. A 90% reduction in deployment time. 80+ models in production. This isn't my CV - I've just anchored the agent in real outcomes I led throughout my career. When I ask it to help me frame a business case or articulate value, it can draw on real precedent rather than generic advice.
The frameworks I actually use. Not the textbook versions. The ones I’ve stress-tested in difficult stakeholder meetings and adapted based on what actually worked. Phased thinking for platform investments. A risk-tiering model for GenAI governance. An adoption ladder that tracks teams through Aware → Experimenting → Shipped → Scaled. A prioritisation framework that weighs business value, developer impact, effort and risk together.
The principles and phrases that guide my decisions. The short ones that I find myself saying in almost every context. “Make the right thing the easy thing.” “Diagnose before you prescribe.” “Platform is a force multiplier, not direct output.” They’re how I cut through ambiguity quickly. They help me anchor the team and sometimes even the organisation that I am in.
Templates I use in practice. A full developer discovery interview template I’ve used to understand SDLC workflows without leading the witness. A metrics framework for measuring engineering speed, developer happiness and compliance. An email and comms approach for different stakeholder types.
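To make the two frameworks above concrete, here is a minimal sketch in Python. The weights, the scoring scale, and the function names are all illustrative assumptions of mine - the actual framework balances these dimensions per context rather than with fixed numbers:

```python
from enum import IntEnum


class AdoptionStage(IntEnum):
    """An illustrative encoding of the Aware -> Experimenting -> Shipped -> Scaled ladder."""
    AWARE = 1
    EXPERIMENTING = 2
    SHIPPED = 3
    SCALED = 4


# Hypothetical weights -- in practice these shift with organisational context.
WEIGHTS = {
    "business_value": 0.35,
    "developer_impact": 0.30,
    "effort": 0.20,
    "risk": 0.15,
}


def priority_score(business_value: float, developer_impact: float,
                   effort: float, risk: float) -> float:
    """Score an initiative on a 0-10 scale per dimension.

    Business value and developer impact count for an initiative;
    effort and risk count against it.
    """
    return (WEIGHTS["business_value"] * business_value
            + WEIGHTS["developer_impact"] * developer_impact
            - WEIGHTS["effort"] * effort
            - WEIGHTS["risk"] * risk)


# Example: a high-value, high-impact initiative with moderate effort and low risk.
score = priority_score(business_value=8, developer_impact=9, effort=5, risk=2)
```

The point of a sketch like this isn't the arithmetic; it's that writing the framework down forces you to say which dimensions count for an initiative, which count against it, and how much each matters.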
Writing it all down took time. More time than I expected. But the act of doing it was, itself, valuable - because it forced me to be explicit about things I’d only ever been instinctive about.
What have I learned so far?
Here’s something I didn’t anticipate: the process of building the agent was as valuable as having it.
When you’re writing a knowledge base for an AI, you can’t rely on implication. You can’t write “handle stakeholder conflict carefully” and expect it to understand what that means in a platform context. You have to write it out. What kind of conflict? With whom? What’s the goal? What does good look like?
That level of explicitness is hard. And revealing.
I found frameworks I’d been using for years that I’d never properly articulated. Mental models that turned out to be more nuanced than I’d realised. Principles that I thought were universal but were actually context-specific - useful in a startup environment, different at enterprise scale.
The CLAUDE.md became a mirror as much as a knowledge base.
And when I started testing the agent against real work - using it to structure a business case, draft a comms plan, think through a difficult prioritisation call - I found that where it struggled, I had been unclear. The prompting process was a feedback loop. If the output wasn’t right, the problem was usually upstream in how I’d captured the knowledge, not in the model.
That’s a useful thing to know. It means building the agent isn’t just a one-time exercise. It’s an ongoing practice of making your own thinking legible.
Why this matters beyond productivity
I want to be careful here, because I’ve seen a lot of content about AI and productivity that I find slightly hollow. The “10x yourself” narrative. The tools that promise to replace thinking rather than augment it.
This isn’t that.
What I’m building is closer to knowledge infrastructure. A way of making six years of hard-won Platform PM experience accessible — to junior PMs trying to find their footing, to senior leaders trying to understand why their platform team isn’t landing, to the version of me that joins a new company and needs to get up to speed quickly.
The agent can draft a comms plan, yes. It can help me think through a prioritisation problem. It can surface the right framework for a given situation. Those things save time. But the more interesting thing is what they represent: a set of thinking tools that aren’t locked in one person’s head anymore.
Platform PM knowledge is scarce. The community is small. Most of what gets shared is high-level. The people who’ve actually done the hard work — built adoption in resistant organisations, shipped meaningful developer experience improvements, made the case for platform investment to sceptical CFOs — tend to keep the specifics to themselves, not out of selfishness but because there’s been no good mechanism for sharing it.
This is my attempt at a mechanism.
What’s coming next
In Part 2, I’ll get into the actual craft of building the skills into the agent — what it means to give an AI a capability rather than just knowledge, why prompting is harder than it looks, and what I learned from the iterations that didn’t work.
In Part 3, I’ll talk about what I actually delegated — and what I kept. Which parts of Platform PM thinking transferred well, which parts resisted automation, and what I think that tells us about the nature of the role.
If you’re a Platform PM, I hope this series gives you something concrete to take away — and maybe the push to start externalising your own knowledge.
If you’re a senior leader, I hope it changes how you think about what your platform team knows, and what happens when they leave.
Here’s the MVP - use it and let me know if it’s helpful. V2 is coming out next week with more skills added. Hit the button for feedback!
See you next week.