Why Your Second Brain is Dead Weight
Your Notes Aren't the Bottleneck
On April 2, 2026, Andrej Karpathy published a GitHub gist called llm-wiki and tweeted about it. Within five days, the tweet had millions of views and the gist crossed 5,000 stars. Forks multiplied. A dozen Substack writers sketched variations before the weekend was out. It is probably the most-read PKM idea of the year.
The idea: instead of feeding an LLM a bundle of raw notes at query time (classic RAG), you give it a schema file and a folder of sources and let it maintain a wiki as a first-class artifact; the model does all the reading, writes the entries, and keeps the cross-links current. Your mission (should you choose to accept it) is adding sources and curating the schema. By his count, Karpathy’s own wiki reached about 400,000 words across roughly 100 articles.
The pattern removes the single worst failure mode of a decade of PKM tools: assuming the human would come back to process their clippings. Humans don’t process them. They clip and move on. The wiki pattern takes the human out of the loop at the step they were always going to fail, and hands that step to a system that doesn’t get tired and doesn’t get bored.
If you already have a defined corpus, this is an upgrade. A researcher with a subfield or an engineer with an internal codebase can put it to work tomorrow - a company with a closed archive can deploy the same pattern with a few tweaks.
For a solo operator in 2026, I think it imports a bet from 2020 that the world has already outrun.
The caveat: Karpathy is a great deal smarter than me.
I’m going to disagree with him anyway.
Whether that’s brave or dumb I’ll find out in the replies.
The bet is that value lives inside your archive. This was obviously true in Ye Olden Dayes. Models couldn’t browse the internet, and retrieval was brittle and slow, so the public web was outside the model’s reach at query time. If you wanted an LLM to say anything useful about your specific world, you had to dam up a little pond of your own sources so it could swim in it.
Every second brain tool built before 2023 rests on that.
But we’re in a world where Claude can read Wikipedia in real time, pull down an arXiv paper in seconds, and summarize any URL you throw at it before your coffee cools. It can weave six disconnected domains into a single answer. The pond is irrelevant when the model is already swimming in the ocean.
So the first question to ask about any new PKM pattern in 2026, Karpathy’s included, is: what specifically does my archive contain that the open web doesn’t?
Your folder of clipped Atlantic articles? Claude can read The Atlantic. Your highlights from Kahneman? The model has read Kahneman multiple times, in translation and out of it. Your Readwise export from the last 5 years of nonfiction? 99% of it is other people’s writing that the model has better access to than you do, with no random paste errors from your phone. Your book summaries? Every bestseller has 40 summaries on the open web and the model has read all of them.
What’s yours and not in the training set? Your call transcripts; your contracts and drafts; your pricing and your client roster; your decisions and the reasoning you used to make them; your working files on current projects; and maybe a private journal.
This list fits in one folder. It’s measured in megabytes, and any plain-text editor can open it. It needs no graph view, no bi-directional links, no AI-powered semantic search plugin, no monthly feature release.
Which is almost certainly why nobody is selling it to you.
The PKM industry has done a clever thing with this asymmetry. It takes the one set of materials LLMs can’t help you with (your private decision history) and bundles it with the hundred sets they don’t need your help on at all (public writing, general knowledge, best-selling nonfiction, other people’s Substacks) and then it charges you a subscription for the privilege of confusing the two. Obsidian Sync, Notion Plus, Readwise Reader, and a transcription service will run north of $400 a year. Most of what they hold is content the model could fetch free on demand.
Elegant as it is, Karpathy’s pattern inherits the same problem: its architecture assumes you will be feeding it a meaningful corpus. For a solo operator whose “knowledge” is 90% public and 10% private, the pattern gets the ratio backwards. You will spend enormous compute on beautifully cross-referenced wiki entries about material the model has already read a hundred times, and very little compute on the handful of decisions you need to get right this week.
Which brings me to the test I think every PKM tool has to pass in 2026: does it help me decide anything on Monday morning?
The bottleneck for a solo business is judgment + a willingness to ship an ugly first draft. You have read plenty already - no wiki fixes that. Claude doesn’t fix it either, although Claude at least talks back when you’re staring at a blank page. A wiki is a read-only companion for work you’re avoiding. With your private folder attached, a model in a browser tab is a working colleague for work you’re in the middle of doing.
The test of any knowledge system has always been retrieval. Capture is the cheap part - clip and forward, dictate and transcribe.
Capture scales with guilt, and retrieval scales with need.
For most of the people I’ve watched build these systems, the ratio between capture and consultation is catastrophic. I’ve lost count of how many beautiful graphs I’ve been shown where the owner can’t point to a single note from the last 6 months that changed what they did next.
I think we underestimate just how much a well-maintained archive reshapes your reading. You begin to read for what can be filed, and you stop reading for what could disturb you. The archive domesticates your attention over time, and Karpathy’s wiki pattern will accelerate this, because the feedback loop is tighter: you can watch the model generate an entry and feel the pleasant chemical hit of having “captured” the source.
The obvious counter is that the wiki is meant to externalize the boring parts, freeing you to think about the rest. But is externalizing the boring parts, at scale, actually worth the compute and the attention overhead? For a research lab, it probably is. For a person trying to keep 6 clients happy and ship a product this quarter, the overhead is larger than the savings, because the private part (the part that actually needs a system) was never the bottleneck.
What does a 2026 setup look like when you take all this seriously? A browser, a folder, a plain text journal, and an API key.
The browser runs a small number of tabs. One holds the model. One holds the work in progress. One holds whatever source you’re currently reading. A fourth, if you need it, holds a dated text journal for decisions. Ask the model when you need information. Write the decisions and the reasoning behind them in the journal. Future-you will want to know what past-you was thinking, because decisions compound when you can see them.
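The dated journal is just an append. If you want to script it, here’s a minimal sketch; the `Decision:`/`Why:` entry shape is my invention, not part of any pattern:

```python
from datetime import date


def log_decision(journal_path: str, decision: str, reasoning: str) -> None:
    """Append a dated entry to a plain-text decision journal.

    One file, newest entry last, so future-you can grep it.
    The entry format below is an illustrative assumption.
    """
    entry = (
        f"\n## {date.today().isoformat()}\n"
        f"Decision: {decision}\n"
        f"Why: {reasoning}\n"
    )
    with open(journal_path, "a", encoding="utf-8") as f:
        f.write(entry)
```

Append-only and plain text on purpose: no schema to migrate, nothing a future tool can hold hostage, and `grep Decision journal.md` is the whole retrieval system.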
The folder holds the private material. That means project files, draft contracts, and a running log of client calls; it includes a scratch document per ongoing conversation and a WHAT file for the handful of things you can’t afford to forget; and the whole thing is plain text or markdown, because plain text survives format changes and your own future whims.
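Nothing about that folder needs an app. A few lines of stdlib Python can flatten it into a single prompt-ready string for whatever model you point at it. A sketch, not a prescription: the file layout and the 200,000-character budget are my assumptions.

```python
from pathlib import Path


def build_context(folder: str, max_chars: int = 200_000) -> str:
    """Concatenate every plain-text file in the private folder into one
    string, each file prefixed with its relative path, capped at a rough
    character budget so it fits in a model's context window."""
    root = Path(folder)
    parts: list[str] = []
    total = 0
    for path in sorted(root.rglob("*")):
        if path.suffix not in {".txt", ".md"} or not path.is_file():
            continue
        body = path.read_text(encoding="utf-8", errors="replace")
        chunk = f"--- {path.relative_to(root)} ---\n{body}\n"
        if total + len(chunk) > max_chars:
            break  # stay under the context budget; oldest-sorted files win
        parts.append(chunk)
        total += len(chunk)
    return "".join(parts)
```

The path headers matter more than they look: when the model answers, it can tell you which file (which call, which contract) it is drawing on, which is exactly the provenance a wiki entry erases.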
Much of the PKM economy is an insurance policy against regret. You save the article because one day you might want it, and you pay the subscription because one day you might go back. But one day rarely arrives. Meanwhile, the model in the other tab will find that article in 4 seconds and produce a better summary than the one you highlighted at 11 pm on a flight. The fear of losing access to a version of yourself who read something once is a residue from an older world, before models that could fetch anything.
By my observation, the people who ship the most tend to work in ways that look embarrassingly primitive. Paul Graham has written on a single HTML page since 2001. Derek Sivers keeps his notes in plain text and has said so in every interview where someone asks. Patrick McKenzie has claimed for years that his main “system” is a file called ideas.txt.
If you want a small experiment, try this. For 30 days, close Readwise. Pause the Notion subscription. Archive the Obsidian vault. Turn off the daily highlight digest. Keep one private folder. Keep a dated text journal. When you need something, ask the model and read the source. Decide and move on. Write down what you decided and why, and see what (if anything) you miss.
Most people miss nothing except the ritual of feeling productive.
The ritual is the part that was costing $400 a year.
My life is already swimming in open water, and I need another archive from a model like I need another canteen. I need a colleague who can read anything I point at and help me decide what to ship. And what to do on a Monday morning.
Selfonomics is designed, built, and backed by Studio Self
We make tech legible.
Reach out: hello@thisisstudioself.com



