Everyone's sharing their Claude setup right now.
Their skill stacks. Their prompts. Their workflows. Their 'second brains.'
Twenty minutes on LinkedIn and you can have someone else's entire system running in your Cowork. All you have to do is comment "Send me the full setup," and you're done.
Sharing your system in exchange for comments is a smart play if you're the poster. Followers. Reach. Engagement. The algorithm rewards it. They're doing exactly the right thing for their business.
The trap is on the receiving end. A quick fix always looks better than it is.
I'd hold off.
If you don't build a system yourself first, you won't understand how it works.
And if you don't understand how it works, you can't train it. You can't adapt it. You can't troubleshoot it. You can't even tell what it's doing when it starts behaving strangely.
And it will.
But more importantly: you won't build the muscle.
A plug-and-play system won't develop the skills you actually need:
how to think in systems
how to break a problem into a workflow
how to engineer context, not just write prompts
how to spot when a system drifts
how to shape a system around your needs instead of someone else's
Those skills only come from building. From testing things that don't work. From breaking your own setup. From fixing it. From noticing patterns over time.
You can't outsource that part. A week of building teaches you more than a year of installing.
You start understanding:
how context flows through skills
how routines break when inputs change
how references drift over time
how to debug things when they quietly go wrong
Because they quietly go wrong all the time.
Then you can borrow, copy, steal ideas.
Once you understand the mechanics, someone else's stack stops being a mysterious black box and becomes a parts shelf. You can see what's useful, what's overengineered, and what only works because it was built around their brain.
That applies to the next layer too - which brings me to something I built this week.
Meet Tom
My system has now grown beyond what I can reliably hold in my head. (And if anyone has tips for increasing mental RAM after 38, I'm listening.)
After months of daily use, I started noticing small cracks:
references pointing to pages that no longer existed
notes contradicting each other
outdated knowledge lingering quietly in the background
Nothing catastrophic. Just slow entropy.
So I hired a gardener.
Tom walks through the entire system once a week - Saturday nights, while I'm asleep - looking for weeds, dead links, stale knowledge, and contradictions I'd never spot myself.
He's built like Tom Hardy. Big biceps. Five o'clock shadow. Almost as handsome as my husband. (Ben, I know you're reading this.)
I don't trust Tom yet. He's new around here.
So even when he finds something that looks obviously wrong, he's not allowed to change anything himself. He proposes. I decide.
The cost of an overenthusiastic AI "cleanup" is too high.
So every Sunday morning I open his report with a coffee, review his recommendations, approve a few, reject a few, and keep the system healthy.
His findings this week were gloriously boring:
stale references
soft contradictions
items needing clarification
Boring is the point. Maintenance work always is.
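If you want a feel for how little magic a sweep like Tom's needs, here's a minimal sketch of the "propose, never touch" pattern: walk a folder of notes, flag dead internal links, and return findings as a report. Everything here is an assumption for illustration - the vault layout, the `sweep` function, the plain-markdown-link format - not how my actual setup works.

```python
import re
from pathlib import Path

# Plain markdown links like [text](target); anchored links (b.md#x) are skipped.
LINK = re.compile(r"\[([^\]]*)\]\(([^)#]+)\)")

def sweep(vault: Path) -> list[str]:
    """Walk every note and report dead internal links.
    Report-only: nothing is ever modified or deleted."""
    findings = []
    for note in sorted(vault.rglob("*.md")):
        for text, target in LINK.findall(note.read_text(encoding="utf-8")):
            if target.startswith(("http://", "https://", "mailto:")):
                continue  # external links would need a different check
            if not (note.parent / target).exists():
                findings.append(f"{note.name}: dead link '{text}' -> {target}")
    return findings  # the human reviews each finding and decides
```

The design choice that matters is the last line: the function hands back a list instead of editing anything, which is exactly the "he proposes, I decide" boundary.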
The only reason I can review Tom's suggestions intelligently is because I built the system myself.
I know what each page is for. I know which references matter. I know where the source of truth lives.
If I'd just downloaded someone else's setup, Tom would be another black box pointing at other black boxes.
His guess would be as good as mine.
So if you're starting out
Build your own thing first.
Small is fine. Messy is fine. The thinking is the work. That’s how you’ll learn.
Because the asset isn't just the system. It's the capability you build while creating it.
Once you've built enough to understand it, then start borrowing from everyone else.
And when your system eventually grows beyond your own mental capacity, hire your own Tom. And make him/her look however you want 😄
Quote of the week
Tell me and I forget. Teach me and I remember. Involve me and I learn.
What’s new in AI this week:
Anthropic doubled rate limits for Pro and Max users, and killed peak-hour throttling. If you've ever hit the "you've reached your limit" wall mid-task, that's now twice as far away, and the after-school slowdown is gone.
Google and Meta are racing to ship "personal AI agents" this year. Google's is called Remy. Meta's is called Hatch. The pitch is the same - an AI that knows your email, calendar, contacts, and acts on your behalf. Worth watching how they actually land once they're in real hands.
Anthropic is researching something called "dreaming." It lets agents review their own past behaviour between sessions and find patterns to improve next time. Currently a research preview, not in production. File this under "the future of Tom" - right now my gardener finds problems and I decide what to do; a dreaming agent would propose better gardening for itself, by reviewing what worked. We're a way off, but the trajectory matters if you're building anything long-running.
Less in your head, more in your life ❤️
See you on Thursday, Sarah xx