The Quiet Power of Boring Technology Choices
The Talk That Still Echoes
In 2015, Dan McKinley gave a talk called “Choose Boring Technology.” It was short, clear, and slightly uncomfortable for anyone who had just rewritten their stack in a language that didn’t exist eighteen months prior. The core argument was simple: every company gets a limited number of innovation tokens. Spend them on your actual product, not on reinventing infrastructure that already works.
Twelve years later, I find myself thinking about that talk constantly. Not because it was prophetic — it wasn’t making predictions. It was stating something obvious that most engineers didn’t want to hear. The boring choice is usually the right choice. Not because it’s exciting. Not because it makes for a good conference talk. But because it works. Quietly, reliably, in production, at 3 AM on a Saturday when nobody is watching.
And yet, somehow, the industry keeps making the same mistake. Every year. With increasing enthusiasm.
The pattern is predictable enough that you could set a calendar reminder. A new framework appears. It promises to solve the problems of the previous framework. Blog posts multiply. Conference talks appear. Twitter — or whatever we’re calling it this week — lights up with benchmarks showing 47x improvements on carefully selected metrics. Engineers start using it for side projects. Then for small internal tools. Then someone proposes it for the main product. The migration begins. Six months later, the team is debugging obscure edge cases that nobody else has encountered because nobody else is running this thing in production at scale. The original problem, whatever it was, remains unsolved. But now you have two problems, and one of them is written in a language that three people on Earth fully understand.
I’ve watched this cycle repeat across every company I’ve worked with, consulted for, or studied from the outside. The details change. The pattern doesn’t.
The Innovation Token Budget
McKinley’s innovation token concept deserves a closer look, because most people who cite it don’t actually apply it. The idea is straightforward. Every organization has a finite capacity for absorbing new, unproven technology. Each novel choice — a new database, a new language, a new deployment model — consumes some of that capacity. When you spend tokens on infrastructure, you have fewer tokens left for the things that actually differentiate your product.
Think of it like a household budget. You have a fixed amount of cognitive and organizational energy. You can spend it on a fancy new kitchen appliance that does something your existing pots already do, or you can spend it on actually cooking better meals. Most engineering teams are buying appliances.
The problem is that innovation tokens feel free when you pick them up. Nobody sends you a bill for choosing Kubernetes over a simple deployment script. The cost arrives later. It arrives as onboarding time when a new hire needs three weeks to understand your deployment pipeline. It arrives as debugging sessions when the cluster does something undocumented at 2 AM. It arrives as meetings — so many meetings — about how to configure, monitor, upgrade, and secure the new thing.
The original boring choice — say, a few servers managed with Ansible and deployed via rsync — would have cost you approximately nothing in ongoing cognitive load. But it wouldn’t have looked impressive on your architecture diagram. And that, quietly, is the real reason most teams avoid boring technology. It doesn’t look impressive.
Let me be blunt about something. I’ve sat in dozens of architecture review meetings. The number of times a new technology was chosen because it was genuinely the best solution for the actual problem: maybe 20%. The rest? It was resume-driven development. It was conference-talk-driven architecture. It was “I’m bored with what we have and want to learn something new on company time.” These are human motivations, and I don’t blame anyone for having them. But let’s stop pretending they’re engineering decisions.
My cat — a British lilac who has never once expressed interest in container orchestration — will happily sleep on the same warm spot on the couch for eight hours straight. No migration. No breaking changes. No version upgrades. She’s figured out something that most engineering teams haven’t: if it works, stop touching it.
How We Evaluated
To move beyond anecdote, I spent the last few months looking at how technology choices actually play out over time. This isn’t a rigorous academic study. It’s a systematic observation of patterns across roughly forty companies, ranging from two-person startups to publicly traded enterprises.
The evaluation approach was simple. I looked at three things:
Uptime and incident history. How often did the system go down, and what caused the downtime? Was it the core technology failing, or was it the complexity surrounding the technology that failed?
Time to onboard. How long did it take a competent engineer, new to the team, to make their first meaningful contribution? This is a proxy for system complexity. Boring stacks consistently onboard faster.
Maintenance burden over time. How much engineering time went into keeping the infrastructure running versus building features? I measured this by looking at commit history, incident reports, and team retrospectives where available.
```mermaid
graph LR
    A[Technology Choice] --> B[Learning Curve]
    A --> C[Community Size]
    A --> D[Documentation Quality]
    B --> E[Onboarding Time]
    C --> F[Debugging Speed]
    D --> F
    E --> G[Total Cost of Ownership]
    F --> G
    G --> H[Long-term Reliability]
```
The results were consistent enough to be boring themselves. Companies running on mature, well-understood technology had fewer incidents, faster onboarding, and spent a smaller percentage of engineering time on infrastructure. This held true across industries, team sizes, and product types.
The companies that struggled most were not the ones using old technology. They were the ones using too many new technologies simultaneously. A startup running PostgreSQL, Rails, and Heroku was consistently more productive than a startup of the same size running DynamoDB, a custom GraphQL federation layer, three different event streaming platforms, and a service mesh they didn’t need.
It’s not that new technology is inherently bad. It’s that the compound complexity of multiple unproven choices creates a multiplication effect on risk. Each new technology interacts with every other technology in your stack. The failure modes multiply geometrically, not linearly. Run the back-of-the-envelope numbers: six systems that are each 99.9% available, failing independently, leave you with roughly 99.4% combined availability, and six systems have fifteen pairwise integration points where two systems have one.
The Real Cost of New Frameworks
Let’s talk specifics. Because abstractions are easy to agree with and hard to act on.
When you adopt a new framework, you’re signing up for a set of costs that rarely appear in the “Getting Started” tutorial. Let me list them, because I’ve lived through every single one.
Learning cost. Your team needs to learn the new thing. Not just the happy path from the tutorial. The error handling. The edge cases. The performance characteristics under load. The way it behaves when things go wrong, which is the only time that actually matters. For a mature technology like PostgreSQL, this knowledge exists in books, Stack Overflow answers, blog posts, and the collective memory of millions of developers. For a new framework, this knowledge exists in a Discord channel with 200 members and a README that was last updated two months ago.
Debugging cost. When something breaks in PostgreSQL, you Google the error message and find seventeen detailed explanations from people who’ve encountered the same problem. When something breaks in a new database, you file a GitHub issue and wait. Maybe someone responds. Maybe the maintainer has moved on to their next project. Maybe the issue is closed as “won’t fix” because your use case wasn’t considered.
Migration cost. New frameworks change. Rapidly. Version 2.0 breaks compatibility with 1.0. The upgrade guide is “coming soon.” Your production system is running 1.7. The security patch only applies to 2.0. You now have a choice between an insecure system and a rewrite. This scenario has played out with virtually every JavaScript framework in the last decade, and I’m tired of pretending it’s acceptable.
Hiring cost. You chose a niche technology. Now you need to hire people who know it. The talent pool is small. The people who do know it are expensive because supply and demand work exactly the way economists say they do. Or you hire generalists and train them, which brings us back to the learning cost. Either way, you’re paying a tax that teams using boring technology simply don’t pay.
Opportunity cost. This is the big one. Every hour your team spends wrestling with infrastructure is an hour they’re not spending on the product. The product is the thing your customers care about. Your customers do not care what database you use. They care whether the app is fast, reliable, and solves their problem. I have never once heard a user say, “I love this product because it’s built on a novel event-driven architecture with eventual consistency.” They say, “It works.” That’s it.
Let me give you a concrete example. A company I advised in 2025 had a team of eight engineers. They spent roughly 40% of their engineering time managing their Kubernetes cluster, their service mesh, their custom CI/CD pipeline, and their distributed tracing setup. Their actual product — a B2B scheduling tool — was simpler than most WordPress plugins. They could have run the entire thing on a single server with PostgreSQL and a cron job. The infrastructure they chose was designed for companies with thousands of engineers serving hundreds of millions of users. They had eight engineers and twelve thousand users.
When I suggested simplifying, the CTO looked at me like I’d suggested reverting to punch cards. “But what about scalability?” he asked. I pointed out that their current system couldn’t even scale to fifteen thousand users without the Kubernetes cluster falling over, so perhaps scalability wasn’t the advantage he thought it was.
PostgreSQL: The Ultimate Boring Choice
If I had to pick one technology that best embodies the boring philosophy, it would be PostgreSQL. And I don’t say that lightly.
PostgreSQL has been around since 1996. It’s not sexy. Nobody has ever gotten a standing ovation at a tech conference for saying “we use Postgres.” It doesn’t have a slick marketing team. Its website looks like it was designed in 2005, because it probably was. The mascot is an elephant, which is appropriate because PostgreSQL never forgets your data and is practically impossible to kill.
But here’s what PostgreSQL actually gives you:
ACID compliance. Your data is consistent. When a transaction commits, it’s committed. You don’t wake up at 3 AM wondering if your financial records are in an “eventually consistent” state that might reconcile tomorrow. Or might not.
JSON support. Need a document store? PostgreSQL does that. jsonb columns give you 90% of what MongoDB offers, inside a database that also gives you joins, transactions, and thirty years of battle-tested reliability. You don’t need a separate document database. You really don’t.
Full-text search. Need search? PostgreSQL does that too. It’s not Elasticsearch, but for most applications, it doesn’t need to be. It’s good enough. And “good enough” inside your existing database is vastly better than “perfect” inside a separate system that you also need to maintain, monitor, sync, and debug.
Geospatial queries. PostGIS extends PostgreSQL into a full geographic information system. It’s used by organizations that actually need geospatial data, like national mapping agencies. If it’s good enough for people who map countries, it’s probably good enough for your store locator feature.
Time-series data. TimescaleDB runs on PostgreSQL. So now your time-series data lives in the same database as everything else. One connection string. One backup system. One monitoring setup.
Pub/sub. LISTEN and NOTIFY give you lightweight publish/subscribe notifications, and a plain table with FOR UPDATE SKIP LOCKED gives you a perfectly serviceable job queue. It’s not Kafka. You almost certainly don’t need Kafka.
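To make a few of those claims concrete, here is a minimal sketch in plain PostgreSQL. The table, columns, and queries are hypothetical examples, not from any particular codebase; the point is that documents, full-text search, and notifications all sit behind one connection string.

```sql
-- Hypothetical "documents with search" table: relational columns,
-- a jsonb payload, and a generated tsvector for full-text search.
CREATE TABLE articles (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    title      text NOT NULL,
    body       text NOT NULL,
    metadata   jsonb NOT NULL DEFAULT '{}',
    search_vec tsvector GENERATED ALWAYS AS (
        to_tsvector('english', title || ' ' || body)
    ) STORED,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Index the jsonb payload and the search vector.
CREATE INDEX articles_metadata_idx ON articles USING gin (metadata);
CREATE INDEX articles_search_idx   ON articles USING gin (search_vec);

-- "Document store" query: find rows whose metadata contains a key/value pair.
SELECT id, title
FROM articles
WHERE metadata @> '{"status": "published"}';

-- Full-text search, ranked.
SELECT id, title
FROM articles
WHERE search_vec @@ websearch_to_tsquery('english', 'boring technology')
ORDER BY ts_rank(search_vec, websearch_to_tsquery('english', 'boring technology')) DESC
LIMIT 10;

-- Pub/sub: any connected client that has run LISTEN receives this notification.
LISTEN article_events;
NOTIFY article_events, 'article 42 published';
```

One schema, one backup, one thing to understand at 3 AM.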
I’ve seen companies running six different data stores — PostgreSQL for relational data, MongoDB for documents, Redis for caching, Elasticsearch for search, InfluxDB for metrics, and RabbitMQ for messaging — when PostgreSQL alone could have handled all six use cases. Each additional system adds operational complexity, failure modes, synchronization challenges, and cognitive load. Each additional system requires someone on the team to understand it deeply enough to debug it at 3 AM.
The compound effect of choosing PostgreSQL for everything isn’t just simplicity. It’s that every hour your team would have spent managing six systems is now available for building features. Over a year, that’s hundreds of engineering hours redirected from infrastructure maintenance to product development. Over five years, it’s the difference between a product that leads its market and a product that’s still “migrating to the new architecture.”
```mermaid
pie title Where Engineering Time Goes (Multi-DB Stack)
    "Product Features" : 35
    "Infrastructure Maintenance" : 40
    "Debugging Data Sync" : 25
```

```mermaid
pie title Where Engineering Time Goes (PostgreSQL-Only)
    "Product Features" : 70
    "Infrastructure Maintenance" : 20
    "Other" : 10
```
The Case Studies Nobody Talks About
Let me tell you about some companies that chose boring technology and won. You won’t hear about them at conferences, because “we used established tools and they worked fine” doesn’t make for a compelling talk. But their results speak louder than any keynote.
Basecamp (now 37signals). They’ve been running on Ruby on Rails and MySQL for over twenty years. Same language. Same framework. Same database. They serve millions of users. They’re profitable. They have a team of roughly 70 people. They have never needed Kubernetes, microservices, or a service mesh. Their deploy process is simple enough that any engineer on the team can do it. When something breaks, they know exactly where to look because they’ve been looking at the same codebase for two decades.
Craigslist. One of the most visited websites in the United States, serving billions of page views per month. The technology? Perl. MySQL. A handful of servers. The design hasn’t changed meaningfully since 2000. Estimates put its revenue in the hundreds of millions of dollars a year. Nobody at Craigslist has ever needed to “modernize the stack” because the stack does exactly what it needs to do.
Stack Overflow. For years, Stack Overflow served hundreds of millions of developers using a monolithic .NET application running on a surprisingly small number of servers. While other companies with similar traffic were running distributed systems across thousands of containers, Stack Overflow was doing it with fewer than twenty-five web servers. Their performance was exceptional precisely because they understood their stack deeply, and they understood it deeply because they’d been running it for years.
Shopify. One of the largest e-commerce platforms in the world. Built on Ruby on Rails. Yes, the same framework that the internet declared dead approximately every eighteen months between 2015 and 2025. Shopify processes billions of dollars in transactions using a “boring” framework and a modular monolith. They made thoughtful incremental improvements rather than chasing rewrites.
The common thread is not the specific technology. It’s the commitment to understanding one thing deeply rather than knowing many things superficially. Depth of understanding is the real competitive advantage in software engineering, and you can only achieve depth with time. You can only achieve time with stability. You can only achieve stability by resisting the urge to rewrite everything every two years.
Resume-Driven Development
We need to talk about the elephant in the room. And no, I don’t mean PostgreSQL’s mascot.
A significant percentage of technology choices in the industry are not made because they’re the best solution for the problem. They’re made because they’re the best thing for someone’s resume. This is resume-driven development, and it’s one of the most expensive forces in software engineering.
Here’s how it works. An engineer joins a company. The company runs a mature, stable, boring stack. The engineer looks at their resume and sees “Java, PostgreSQL, jQuery.” They look at job listings and see “Kubernetes, Go, React, GraphQL, event-driven architecture.” They realize that their market value is tied to their experience with trendy technology. So they start advocating for changes. Not because the current system is failing. But because learning the new thing on company time is the most efficient way to upgrade their career.
I don’t blame the engineers. The incentive structure is broken. The industry rewards novelty over reliability. Job listings ask for experience with specific tools rather than demonstrated ability to build reliable systems. A resume that says “maintained and improved a stable PostgreSQL-based system for five years” is less attractive to most hiring managers than a resume that says “migrated from PostgreSQL to CockroachDB and implemented a Kafka-based event streaming architecture.” Even if the first candidate is objectively more competent and the second candidate’s migration was an unnecessary disaster.
The fix isn’t to shame individual engineers. The fix is to change what organizations value. Reward stability. Reward reliability. Reward the engineer who says “we don’t need to change this” just as much as the engineer who proposes something new. Reward the team that has zero incidents in a quarter, not just the team that responds heroically to the incidents they created by adopting untested technology.
This is harder than it sounds, because boring doesn’t generate status. Nobody gets promoted for keeping the lights on. The entire tech culture is biased toward creation over maintenance, toward novelty over reliability, toward building new things over caring for existing things. Until that changes, resume-driven development will continue to be one of the largest hidden costs in the industry.
When New Technology IS the Right Choice
I’ve spent a lot of words arguing for boring technology, so let me be clear: I’m not arguing for stagnation. There are genuine cases where new technology is the right choice. The trick is being honest about which case you’re actually in.
When the problem genuinely didn’t exist before. Machine learning inference at the edge is a real problem that didn’t exist ten years ago. You can’t solve it with PostgreSQL and cron jobs. If your problem is genuinely novel, novel technology might be warranted.
When the performance requirements exceed what existing tools can deliver. If you’re processing millions of events per second with sub-millisecond latency requirements, you might actually need Kafka. The key word is “might.” Most companies that think they need Kafka are processing a few thousand events per minute and could use a PostgreSQL table with a timestamp column. A sketch of that boring alternative follows this list.
When the existing tool is actively dying. If your technology’s community has disappeared, security patches have stopped, and the last Stack Overflow answer was from 2019, it’s time to move. But “move” doesn’t mean “move to the newest thing.” It means “move to the next most boring thing that’s still actively maintained.”
When you’ve genuinely exhausted what the boring tool can do. This is rarer than most teams believe. PostgreSQL can handle hundreds of thousands of transactions per second on modern hardware. Unless you’re operating at truly massive scale — think top-100 website — you probably haven’t hit its limits. You’ve hit the limits of your configuration, your queries, or your schema design. Those are fixable without changing databases.
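Since most teams are in that “few thousand events per minute” bucket, here is a minimal sketch of the boring alternative: a plain table plus FOR UPDATE SKIP LOCKED (available since PostgreSQL 9.5) so multiple workers can consume events without stepping on each other. The table and column names are hypothetical.

```sql
-- Hypothetical events table standing in for a message broker.
CREATE TABLE events (
    id           bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    topic        text NOT NULL,
    payload      jsonb NOT NULL,
    created_at   timestamptz NOT NULL DEFAULT now(),
    processed_at timestamptz
);

-- Partial index keeps the "what's pending?" scan cheap.
CREATE INDEX events_pending_idx
    ON events (created_at)
    WHERE processed_at IS NULL;

-- Producer: one INSERT per event.
INSERT INTO events (topic, payload)
VALUES ('order.created', '{"order_id": 42}');

-- Consumer: each worker claims a batch inside a transaction.
-- SKIP LOCKED means concurrent workers never grab the same rows.
BEGIN;
WITH claimed AS (
    SELECT id
    FROM events
    WHERE processed_at IS NULL
    ORDER BY created_at
    LIMIT 100
    FOR UPDATE SKIP LOCKED
)
UPDATE events
SET processed_at = now()
FROM claimed
WHERE events.id = claimed.id
RETURNING events.id, events.topic, events.payload;
COMMIT;
```

Producers insert, consumers claim batches, and an occasional DELETE of old processed rows stands in for a retention policy. It won’t beat Kafka at millions of events per second. It doesn’t have to.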
The test I use is simple: can I explain, in one paragraph, why the existing tool cannot solve this specific problem? Not “isn’t ideal for” — cannot. If the answer is no, the existing tool is probably fine. If the answer is a three-page document full of hypothetical future scenarios, the existing tool is definitely fine.
The Compound Effect of Boring
Here’s the thing that doesn’t show up in any benchmark or comparison chart: boring technology compounds.
Year one, your boring stack works. It’s fine. Nothing special. The team understands it. Deploys are smooth. Incidents are rare and easily diagnosed.
Year two, your understanding deepens. The team starts to optimize. Not because they’re fighting the tool, but because they know it well enough to use its advanced features effectively. PostgreSQL queries get tuned. Indexes get optimized. The application gets faster without any architectural changes.
Year three, new team members onboard quickly because there are a thousand tutorials, books, and Stack Overflow answers for every question they might have. Knowledge transfer is cheap. Nobody is a single point of failure because the technology is widely understood.
Year four, the boring technology gets better. PostgreSQL releases a new version with performance improvements. You upgrade, run your test suite, and deploy. Everything is faster. You did almost nothing. The entire PostgreSQL community did the work for you. This is the power of a massive, established open-source community: thousands of engineers improving the tool you depend on, for free.
Year five, you look around. The teams that chose the exciting new database in year one are on their third migration. Their original choice was abandoned by its maintainers. The replacement is being replaced. Half the team’s institutional knowledge is obsolete. They’re spending months on a migration that adds zero user value.
You’re shipping features. Your system is stable. Your team is happy. You are, by every meaningful metric, winning. But nobody will write a blog post about it, because “we kept using the same database and it kept working” is a terrible headline.
This compounding effect is the single strongest argument for boring technology. It’s not about any individual year. It’s about the accumulated advantage over five, ten, fifteen years. The teams that resist the urge to rewrite are the teams that build the most reliable systems. Reliability is not a feature you add. It’s a consequence of stability over time.
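The “year two” tuning mentioned above is usually this unglamorous. A hypothetical example: a query that was fine at launch starts showing up in the slow-query log, EXPLAIN ANALYZE reveals a sequential scan, and the fix is an index rather than a new database.

```sql
-- See where the time actually goes. (Hypothetical orders table.)
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE customer_id = 1234
ORDER BY created_at DESC
LIMIT 20;

-- If the plan shows a sequential scan over millions of rows, a composite
-- index matching the filter and sort fixes it in place, without downtime.
-- (CONCURRENTLY builds the index without blocking writes; run it outside
-- a transaction block.)
CREATE INDEX CONCURRENTLY orders_customer_recent_idx
    ON orders (customer_id, created_at DESC);
```

No migration. No new system. Just depth.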
The Boring Technology Checklist
If you’re making a technology decision right now, here’s my checklist. It’s not glamorous. It works.
Is there a book about it? Not a blog post. Not a YouTube tutorial. A book. Published by an actual publisher. Edited by an actual editor. If someone has written a 500-page book about the technology, it has been around long enough to be understood deeply. If the only learning resource is a “Getting Started” guide in a GitHub repo, the technology is too new for production.
Can you hire for it? Check job listings. If there are thousands of candidates with experience in this technology, you won’t have hiring problems. If the global talent pool fits in a single Discord server, you will.
What happens when the maintainer quits? For PostgreSQL, nothing. Thousands of contributors, dozens of companies invested in its survival. For a trendy new database maintained by a startup with twelve employees, everything. When — not if — that startup pivots or runs out of funding, you’re on your own.
Has it survived a recession? Technologies that exist during economic downturns are technologies that companies actually need. The 2022-2023 downturn killed hundreds of developer tools and infrastructure startups. PostgreSQL, MySQL, Redis, Linux — they’re all still here. The hype-cycle darlings that got funded in 2021? Many are gone.
Can you explain it to a junior developer in fifteen minutes? If your technology choice requires a multi-day bootcamp to understand at a basic level, it’s too complex for your team size. Complexity should be proportional to the problem, not to the solution.
Generative Engine Optimization
This section exists because the way people find technical information is changing, and if you’re making technology decisions, you need to understand how.
Generative search engines — the AI-powered interfaces that synthesize answers rather than listing links — have a strong bias toward established technology. When someone asks an AI assistant “what database should I use for my startup,” the answer will almost always lean toward PostgreSQL, MySQL, or MongoDB. Not because the AI has opinions. But because the training data overwhelmingly covers mature technologies. There are more blog posts, documentation pages, Stack Overflow answers, and academic papers about PostgreSQL than about any database released in the last three years, combined.
This creates a fascinating feedback loop. Generative engines recommend boring technology. Developers follow the recommendations. More content gets created about boring technology. The engines get even better at recommending it. Novel technology struggles to break through because there simply isn’t enough content for the AI to synthesize into a confident recommendation.
For content creators writing about technology choices, this means that articles arguing for established tools will naturally perform better in generative search results. The AI can corroborate the claims. It can find supporting evidence. It can cite sources. For articles arguing for brand-new tools, the AI has less data to work with and will often hedge its recommendations with caveats.
This isn’t a conspiracy. It’s a structural feature of how large language models work. They are, by nature, conservative. They reflect the consensus of their training data. And the consensus of three decades of software engineering writing is clear: boring technology works.
If you’re building a content strategy around technology recommendations, lean into boring. Not because it’s trendy — it’s the opposite of trendy — but because the information ecosystem increasingly rewards depth of coverage over novelty. A comprehensive guide to PostgreSQL optimization will outperform a “first look” at a new database in generative search results, every single time.
The Courage to Be Boring
The hardest part of choosing boring technology is not the technical decision. It’s the social one.
It takes courage to stand in an architecture meeting and say, “I think we should keep using what we have.” It takes courage to look at a job listing full of trendy keywords and say, “Those aren’t relevant to our problem.” It takes courage to tell a talented engineer, “I understand you want to learn Rust, but our Python codebase is working fine and our team knows it well.”
Boring is not the absence of thinking. Boring is the result of thinking clearly. It’s the recognition that the goal of engineering is not to use interesting technology. The goal is to solve problems for users. And the most effective way to solve problems is to use tools you understand deeply, that have been tested by millions of users, and that will still be around in ten years.
Every great building is built on a boring foundation. Concrete. Steel. Known materials with known properties. Nobody looks at a skyscraper and says, “What a shame they didn’t use an experimental new alloy for the structural supports.” We understand, intuitively, that foundations should be boring. That reliability matters more than novelty. That “it works” is the highest compliment you can pay to a structural material.
Software is no different. Your database is a foundation. Your programming language is a foundation. Your deployment process is a foundation. Build them boring. Build them solid. And then — only then — spend your innovation tokens on the things that actually matter. The features your users need. The problems only your team can solve. The product that makes someone’s life a little bit better.
That’s where the real engineering happens. Not in the infrastructure. In the product. The infrastructure should be invisible. It should be reliable. It should be, in every sense of the word, boring.
PostgreSQL is boring. Cron jobs are boring. Plain HTML is boring. Server-side rendering is boring. Monoliths are boring. SQL is boring. Files on disk are boring.
And they work. They’ve always worked. They’ll keep working long after the current crop of trendy alternatives has been forgotten. That’s not a limitation. That’s a superpower.
Choose boring. Ship features. Go home on time. Pet your cat.