The Great Comeback of the Generalist
For most of modern history, being a generalist felt like a disadvantage. A nice personality trait, maybe. Useful for dinner conversations, but not exactly a strategy.
And yet, something is shifting. Not because specialists suddenly stopped mattering. They did not. But because a new tool changes the cost of getting started, learning the basics and producing a result that is good enough to be useful.
Of course, I'm talking about AI (artificial intelligence).
But this is not a post to hype AI, nor is it an argument that everyone should use AI or that specialists matter less. It's an argument that, for some goals (shipping, learning, experimenting), AI changes the economics of getting started.
Before we talk about AI, we need a quick history lesson.
Act I: When Generalists Ruled
Early humans did not thrive because they were the best at one thing. They thrived because they were decent at many things.
We could hunt a bit, gather a bit, build shelter, read social situations, learn from others, adapt to seasons and switch strategies when something stopped working. In other words, we were flexible. We were generalists by necessity.
That flexibility is one of the most underrated advantages in nature. Specialists are incredible when the environment stays stable. Generalists win when the environment changes. Humans got lucky, then smart, then dominant. A lot of that story is really just adaptability at scale.
If you look at the early history of knowledge, it has the same flavor. What we now call science started out as a generalist pursuit. Philosophy, mathematics, biology, politics, ethics. All of it used to live in the same mental room. People like Aristotle wrote about pretty much everything. Not because they were superhuman, but because the knowledge graph was small enough that one mind could still roam it.
The same pattern repeats later. Ibn Sina wrote about medicine, philosophy and logic. Da Vinci moved between art, anatomy and engineering as if those were just different dialects of the same language. Polymaths were not a quirky exception–they were a product of their time. When the world is less specialized, breadth is normal.
Then the world changed.
Act II: Why Specialists Took Over
Knowledge didn't just grow–it exploded.
We industrialized. We built institutions. We developed tools that unlocked more tools. Every new layer of technology created new subfields. Every solved problem revealed ten new problems. At some point, no single person could keep up.
So we did the only sensible thing. We split the world into domains and assigned people to them. Specialization is not a mistake–it is a response to complexity. It is also one of the reasons modern life works at all. Depth creates rigor, rigor creates reliability, and in many domains, reliability is not optional.
But specialization comes with a bargain. You gain efficiency and quality, but you often lose end-to-end agency. You become a small, highly optimized piece inside a larger machine. You know your slice, but you cannot always move the outcome. You rely on other specialists and have to coordinate with them, and that coordination comes with budgets, approvals, handoffs, meetings and timelines.
Modern career design reflects that reality. You pick a lane and become the person who knows that one thing deeply. For example, you're either a food chemist or a cloud platform engineer, but rarely both.
This was the dominant strategy for a long time—until we put a new technology into everyone's pocket.
Act III: The New Twist
AI looks like a paradox. It's clearly not a specialist and often gets things wrong. It hallucinates with perplexing confidence and amplifies whatever biases its training data carries. And still, it feels like an all-knowing jack of all trades.
That feeling matters, because it changes behavior. For the first time in a long time, an average person can ask questions across many domains and get usable answers instantly. They can get a draft, a checklist, a debugging idea, a plan, a comparison, a tutorial, a second opinion. Not perfect, but helpful. This opens up new opportunities, but also real risks. I'll come back to that later.
This is unprecedented. For most of human history, expert knowledge was hard to access. The internet helped, but you still had to hunt, filter and teach yourself. Now a usable starting point is one question away.
AI definitely doesn't replace experts (at least for now–and who knows if this will ever change), but it raises the floor for everyone else.
The question is not "can AI do everything?" It cannot. The question is "can it get me far enough to start?" Oftentimes, it can.
And that is exactly where generalism becomes interesting again.
Leverage, Not Mastery
A useful way to think about AI is not as a replacement for knowledge, but as a lever. It helps you bridge skill gaps.
You do not need to become a specialist in everything. You can borrow competence when you need it and move forward instead of getting stuck. And in many areas, you can do that without compromising on what the project is supposed to be. You're not aiming for mastery–you're aiming for motion.
This maps beautifully to the 80/20 rule. In many areas of life, the first 80 percent of a result comes from roughly 20 percent of the effort. The remaining 20 percent takes forever. That last stretch is where specialists shine. It is where craft lives and where quality becomes world-class. Think of Japanese pottery, for example.
But for most people, most of the time, the goal is not world-class. The goal is utility. You want a prototype, a first version, a working repair. Something you can test, validate and iterate on.
In business terms, this can mean shipping faster. A solo founder can get surprisingly far alone: draft landing page copy, generate design ideas, write a first version of a newsletter sequence, brainstorm offers, outline a content strategy and build a rough prototype. Will it be the best possible version? Definitely not. Will it exist? Yes. And oftentimes, that's the real differentiator.
But this is not just business. It is also side projects, hobbies and everyday life.
Maybe you want to learn piano, but you do not know how to practice effectively. AI can suggest practice routines, break down concepts and help you diagnose what you are doing wrong. You might still sign up for piano lessons further down the line (and you should definitely do that!), but AI can help you get going in the first place.
Or maybe your dishwasher breaks. In the past, your options were often frustrating: either buy a new one or have a specialist come over to fix it for a premium. Now you can troubleshoot the issue, identify what's actually broken and at least figure out whether this is a simple, safe 10€ part you can swap yourself or the kind of problem that should go straight to a professional. Even if you end up calling someone, understanding what's going on already changes how helpless the situation feels. By the way, this actually happened to me last year.
I like thinking about this through a music analogy:
A specialist musician might master one instrument, like the violin. That depth is beautiful. It produces a kind of precision you cannot fake. Put them in an orchestra, and they'll do things a solo musician can only dream of. The richness, the nuance, the dynamics.
But a solo generalist musician can iterate faster. If your goal is virtuoso-level violin, breadth is a distraction and depth is the path; but if your goal is making music end-to-end on your own, being "good enough" on many instruments becomes leverage. It allows you to experiment freely, without coordinating twenty people. It can help you move from idea to sound in an afternoon.
AI brings that kind of leverage to many domains.
Not unlimited power. Not mastery. But momentum.
Which brings us to the concept that ties everything together.
Minimum Viable Competence
What I call "minimum viable competence" is essentially the level of skill required to produce a useful result. Not a perfect result, not the most elegant solution and not the version that wins awards. But a version that's good enough. Not a replacement for mastery, but a shortcut to agency when mastery is not the goal.
This idea matters because it changes your timeline.
If the only acceptable outcome is expert-level quality, you will delay starting. You will wait for credentials. You will wait for the right moment and perhaps never start at all.
AI makes minimum viable competence easier to reach in many areas. It compresses the time between curiosity and action. It can show you the steps, draft the first attempt, give you feedback and suggest what to try next.
Again, none of this replaces expertise. But it often replaces the long, painful gap between "I want to do this" and "I can finally do something about it".
That is why generalism is coming back.
Breadth is no longer an automatic disadvantage, because you can fill gaps when you need to. You can be the person who connects dots and ships something real.
Where It Breaks
That said, there are domains where "good enough" is simply not good enough.
Think about medicine, law, aviation, structural engineering or security. In those areas, errors are not just annoying–they are dangerous.
And even in low-stakes domains, AI is not reliable enough to be trusted blindly. It can be subtly wrong in ways that are hard to notice if you do not already know the topic.
So here is the clean rule of thumb:
Generalism shines when the work is exploratory, reversible and end-to-end.
Specialization shines when reliability, regulation and frontier depth matter.
AI does not change that. If anything, it makes it more important to be honest about which side of that line you are on. The more powerful the tool, the more tempting it becomes to overestimate your competence. Which brings us to the most realistic approach.
The Barbell Generalist
If you want a practical pattern that holds up in the real world, it's probably this: Be broad enough to connect dots, communicate across domains and ship end-to-end. Have one or two deep spikes where you are genuinely hard to replace.
This is the barbell model. A wide surface area plus depth where it matters. It is also the antidote to shallow AI-driven confidence. Your deep spike gives you taste and judgment. It gives you a place where you can reliably evaluate quality.
Because the biggest risks of the "AI generalist" are predictable:
- Overconfidence: "AI said so" is not a reasoning process
- Hidden quality debt: things that work until they don't and fail in ways you can't debug
- Ethical slippage: questions of originality, attribution and IP; use the tools responsibly, not as a plagiarism machine
All of it points back to one meta-skill: critical thinking.
You cannot outsource responsibility.
You can use AI to generate options, but you must verify.
You can use AI to draft, but you must review.
You can use AI to explore a space, but you must stay skeptical.
Maybe now more than ever.
Conclusion
Generalism is not replacing specialization, but it is becoming a real option once again.
AI lowers the cost of reaching minimum viable competence across many domains, which gives you leverage, agency and momentum.
If your goal is world-class craft, choose depth and commit. If your goal is momentum, learning and shipping end-to-end, breadth plus AI leverage can help. Most of us will do some of both in different seasons.
Just do not confuse "good enough to move forward" with "good enough to trust blindly".