The story I usually tell goes: Nokia → Realm → YC S11 → $40M raised → MongoDB acquisition.
That's the highlight reel. What it omits is the specific texture of the Nokia chapter — what I actually did there, what it actually taught me, and why I think those thirteen years did more to shape how I build than anything that came after.
I've never written about this. The Nokia chapter is the unglamorous foundational work that comes before any of the founder glamour, and there isn't an obvious narrative hook that makes it interesting to describe. It doesn't have a funding announcement or an acquisition. It's just: worked at a large technology company, built infrastructure, learned things.
But I've spent enough time advising early-stage founders now to know that the things I'm most confident about — and that most consistently distinguish my advice from advice founders get from people who went straight from university to a startup — come from that decade. So here's the actual content of it.
What I actually did
Software configuration management. Build system tooling. Low-level infrastructure for software deployed on hundreds of millions of devices across dozens of countries and dozens of languages.
Nokia's software at that time was a genuinely complex global software project. Not complex in the way a startup's software is complex, but complex in the way of a project where a thousand engineers contribute to the same codebase, every phone model requires its own build configuration, and a single bug can affect hundreds of millions of users who can't simply update to a new version.
My job was the hidden infrastructure. The build pipelines. The configuration management systems. The tooling that made it possible for a thousand engineers to work on the same software without constantly breaking each other's work. Nobody thinks about this layer until it breaks. When it breaks, the entire project stops.
This is what I mean by "hidden infrastructure." It's the work that enables all the other work, that gets no credit when it functions correctly, and that paralyzes everything when it doesn't. I spent over a decade building and maintaining systems in this category.
What it taught me that I couldn't have learned elsewhere
How software actually fails at scale.
I don't mean "our server went down" scale. I mean: software that has already shipped to hundreds of millions of devices, can't be remotely patched without significant friction, and needs to continue functioning correctly in an enormous variety of hardware configurations, network environments, and usage patterns that were not all modeled in testing.
The things that kill software at that scale are almost never the things that kill it in development. The build was correct. The unit tests passed. The integration tests passed. And then some specific configuration of device, carrier settings, and user behavior that occurred in volume in one country but hadn't been tested produced a failure mode nobody had anticipated.
This taught me something that I try to transmit to every engineering team I work with: the failure modes that matter are almost always the ones outside the envelope your tests can reach. Your tests cover the cases you imagined. Production exposes the cases you didn't imagine. Resilience to unexpected failures — defensive coding, graceful degradation, the ability to fail safely — matters more than comprehensive test coverage of the happy path.
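A minimal sketch of what I mean by failing safely, in Python rather than the C++ we actually shipped. The settings format and default values here are invented for illustration; the point is the shape of the code: malformed input degrades to known-safe defaults instead of taking the device down.

```python
# Graceful degradation: a settings parser that never crashes on bad input.
# Anything it can't understand falls back to known-safe defaults.

SAFE_DEFAULTS = {"retry_limit": 3, "timeout_ms": 5000}

def parse_settings(raw: str) -> dict:
    """Parse 'key=value' lines; malformed or unknown entries keep the defaults."""
    settings = dict(SAFE_DEFAULTS)
    for line in raw.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip junk lines rather than fail
        key, _, value = line.partition("=")
        key = key.strip()
        if key not in SAFE_DEFAULTS:
            continue  # ignore keys we don't recognize
        try:
            settings[key] = int(value)
        except ValueError:
            pass  # unparseable value: keep the safe default for this key
    return settings
```

Given `"retry_limit=7\ntimeout_ms=banana\ngarbage"`, this returns a retry limit of 7 and falls back to the default timeout. On a device you can't patch remotely, "keep running with defaults" beats "crash with a precise error message" almost every time.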
The real cost of coordination overhead.
Nokia at its peak was running software projects with hundreds of engineers contributing to the same codebase. The coordination mechanisms required to make this work — the code review processes, the change management systems, the release approval chains, the integration test environments — consumed an enormous fraction of the organization's engineering capacity.
I watched this up close for over a decade. The insight I came away with: every engineer you add to a project adds potential communication paths to everyone already on it, so coordination overhead grows faster than headcount. Doubling headcount does not double throughput. In many cases it doesn't even come close.
This is the observation that made me skeptical, when we scaled Realm from 20 to 68 people, that we were actually getting proportional output growth. We weren't. The coordination overhead at 68 people was substantially higher than at 20. Some of the overhead was necessary; some of it was avoidable waste. But all of it was real, and none of it was free.
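The arithmetic behind that skepticism is the classic pairwise-communication model (Brooks's old observation, not anything specific to Realm): a team of n people has n(n-1)/2 potential communication paths. It's a crude model, but it makes the divergence between headcount and coordination concrete:

```python
# Potential pairwise communication paths in a team of n people.
# A crude model, but it shows why headcount and coordination diverge.

def comm_paths(n: int) -> int:
    return n * (n - 1) // 2

before = comm_paths(20)  # 190 potential paths at 20 people
after = comm_paths(68)   # 2278 potential paths at 68 people
```

Headcount grew 3.4x; potential coordination paths grew roughly 12x. Process exists precisely to prune those paths, which is why process feels heavier the bigger you get.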
The muscle memory for building systems that survive failure.
When you build systems that run on hundreds of millions of devices, failure is not a surprising event. It's a design parameter. The question is not whether the system will fail — it will — but whether the failure is recoverable and whether the system can continue to function in a degraded state.
The system I think about most often from that period was a build configuration management layer that I worked on for several years — the system that tracked which combination of software components, in which versions, went into which device variant. Nokia shipped dozens of phone models across dozens of markets, each with carrier-specific software variations, regional regulatory requirements, and language configurations. The combinatorial space was genuinely enormous.

The system I was responsible for had to make it possible to answer, at any point, exactly what software was running on a specific device category in a specific market — and to reproduce any build configuration from years earlier if a support issue required it.

The design decision that made it work was immutability: every build configuration was recorded as an immutable snapshot. No updates, no overwrites. If something changed, you created a new configuration record. This sounds simple and it is, but the alternative — mutable configuration records with a change history — produced audit trail problems we had watched cause failures in earlier systems. Immutable configuration was boring to implement and completely reliable in production. That pattern is still how I think about systems where correctness matters more than flexibility.
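The immutable-snapshot idea fits in a few lines. This is a toy sketch in Python, nothing like the actual Nokia system, and the component names are invented:

```python
# Append-only store of build configurations: records are never mutated.
# "Changing" a configuration means recording a new immutable snapshot,
# so any historical build can be reproduced exactly by its id.

class ConfigStore:
    def __init__(self):
        self._snapshots = []  # the list index doubles as the snapshot id

    def record(self, components: dict) -> int:
        """Freeze a build configuration and return its immutable id."""
        snapshot = tuple(sorted(components.items()))  # frozen copy, not a reference
        self._snapshots.append(snapshot)
        return len(self._snapshots) - 1

    def reproduce(self, snapshot_id: int) -> dict:
        """Recover the exact configuration, even years later."""
        return dict(self._snapshots[snapshot_id])

store = ConfigStore()
v1 = store.record({"kernel": "2.4.18", "ui": "series60-1.2"})
v2 = store.record({"kernel": "2.4.18", "ui": "series60-1.3"})  # new record, no overwrite
```

Notice there is no update method at all. The question "what exactly shipped in that build?" becomes a lookup instead of an archaeology project through a mutable change history.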
When we built Realm's storage engine in C++, I was drawing on muscle memory from Nokia. The specific design choices — the zero-copy memory mapping, the MVCC implementation that let reads and writes proceed concurrently without blocking — were shaped by a decade of watching systems fail in production and understanding what made failures recoverable.
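The MVCC idea — readers and writers proceeding concurrently without blocking each other — can be illustrated with versioned snapshots. This is a toy in Python, not a description of Realm's actual C++ engine: a reader pins a version and keeps a consistent view no matter what writers commit afterward.

```python
# Minimal MVCC flavor: every commit produces a new immutable version;
# a reader holds the version it started with and is never blocked
# (or affected) by later writes.

class VersionedStore:
    def __init__(self):
        self._versions = [{}]  # version 0: empty database

    def begin_read(self) -> int:
        """Pin the current version as a consistent read snapshot."""
        return len(self._versions) - 1

    def read(self, version: int, key):
        return self._versions[version].get(key)

    def commit(self, updates: dict) -> int:
        """Copy-on-write commit: build a new version, leave old ones intact."""
        new = dict(self._versions[-1])
        new.update(updates)
        self._versions.append(new)
        return len(self._versions) - 1

db = VersionedStore()
db.commit({"user": "alice"})
snapshot = db.begin_read()         # reader pins version 1
db.commit({"user": "bob"})         # writer commits without blocking the reader
stale = db.read(snapshot, "user")  # still "alice": the pinned snapshot is stable
```

The real engine does this with memory-mapped pages and reference-counted versions rather than copied dictionaries, but the invariant is the same: a reader's view never changes underneath it.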
Why the unglamorous decade matters
The startup mythology has a specific shape: young founder, no previous experience, builds a unicorn through sheer force of will and brilliant insight. The narrative is appealing. It's also, in my experience, not descriptive of how the most durable companies actually get built.
The founders I've seen build the most technically sophisticated companies tend to have a decade of pre-founder experience doing something that was not directly glamorous — working on infrastructure, building at scale inside a large organization, doing the kind of technical work that gets you deep knowledge of how systems fail rather than how they perform in demos.
This is not a universal rule. There are genuine exceptions. But when I look at the specific things that made Realm's architecture right — the choices that held up at 2 billion device installs — most of them trace back to patterns I learned at Nokia.
The 80x performance gains that developers described as magic weren't magic. They were the application of muscle memory from Nokia to a problem that most mobile database builders hadn't encountered because they hadn't worked at that scale.
The foundational decade is not a consolation prize for not founding a company in your twenties. For a specific kind of builder, it's the education that makes the company possible at all.
I should write about it more.