12 Favorite Problems
You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say, "How did he do it? He must be a genius!"
— Richard Feynman
- What matters more for a product: distribution, or the quality of the product itself?
- The most obvious case that comes to mind is Roy Lee's Cluely, the AI cheating tool that went so viral it landed a $15M Series A from a16z at a $120M valuation. Soon after the investment, however, it became clear to users that the product could not sustain what the distribution initially promised. A series of issues plagued Cluely's product: latency problems, a data breach, and ultimately an unwinnable arms race against interview platforms working to ensure hiring integrity.
- These issues ultimately led to Cluely's pivot to enterprise software, ditching the original "undetectable cheating" product altogether. Allegedly, ARR doubled within a week of the pivot, but only because the underlying technology finally found a use case where the product actually mattered.
- What are the remaining barriers to AGI that can't be solved by scaling compute and engineering effort?
- If AI drives the marginal cost of implementation to zero, what makes one engineer more valuable than another?
- The obvious candidates are architectural judgment, product intuition, or something like taste — but those feel like they might just be the next things to get automated. Maybe the answer is something less legible, like knowing which problems are worth solving in the first place.
- If anyone can build anything, does the concept of a product still make sense?
- If the cost of building is near-zero, maybe the unit of value shifts from the product to something upstream — taste, curation, knowing what to build. Or maybe products just become disposable and single-use.
- If engineering becomes near-free, does open source lose its reason to exist?
- The easiest counter-argument that comes to mind is that in-house builders lack engineering taste. But what exactly is taste, and is it something that can be solved with more data? This ultimately loops back to the taste questions below.
- Is the development and maturation of LLMs and agents akin to the industrial revolution?
- A few years ago, I attended an event where Sam Schillace, then Microsoft's deputy CTO, was speaking. He argued that the development of modern LLMs represents a pivotal moment in human progress, akin to the industrial revolution. Just as the steam engine created a surplus of energy, massively transforming productivity, transportation, and nearly every aspect of civilization, modern AI is creating a surplus of intelligence.
- I find the analogy compelling, but I think it starts to break down when you look closer. The steam engine transformed the physical world. It moved things, built things, pumped things. AI's impact is mostly informational and cognitive, which makes the internet feel like a closer comparison, at least in the short run. The internet restructured who could do what: who could publish, who could build, who could sell — AI feels similar. A solo developer can now ship what used to require a team. And the existing structures for absorbing this kind of shift (VC, corporations, platforms) already exist, because the internet paved those roads.
- But in the long run, maybe the steam engine analogy comes back. If agents mature toward genuine autonomous execution, not just synthesis but actually initiating, deciding, and iterating without a human in the loop, then the internet analogy breaks too. The internet still required a human at both ends. Autonomous agents don't. That starts looking less like a new communication medium and more like a new kind of worker. I think the uncomfortable argument behind this is that the short run might lull us into thinking this is "just another internet," while the long-run shift is something our existing institutions aren't really built for.
- Jan Kulveit makes a related (though more extreme) argument about post-AGI economics: that the economic frameworks we use to reason about institutions assume things about human agency, stable preferences, and the distinction between capital and labor that may not survive contact with sufficiently advanced AI. Even if we're not at AGI yet, some of these cracks are already showing.
- What exactly is taste?
- Is taste ultimately a data problem?
- In a world with a surplus of intelligence, does agency matter more than intellect?
- If taste is a data problem, and data comes from action, is agency the only way to develop judgment?
- Can agency be engineered?
- If intelligence (the ability to reason), taste (the ability to judge), and agency (the ability to act) can all be engineered — what, if anything, remains that can't be?