M4/M5-Era Thinking: When 'Enough Power' Becomes Cognitive Outsourcing

How unlimited processing capability changes what we ask machines to do—and what we stop doing ourselves

The Power Nobody Asked For

The M5 chip in my MacBook is absurdly capable. It runs local AI models that would have required data centers five years ago. It processes in real time video that would once have taken hours to render. It handles workloads that exceed anything I actually need to do.

This is new territory. For decades, computing power was the constraint. We wanted to do things our machines couldn’t handle. Software waited for hardware to catch up.

Now hardware has caught up and kept going. The constraint isn’t processing power anymore. The constraint is what we choose to do with effectively unlimited capability.

This sounds like freedom. In some ways it is. But “enough power” creates a specific dynamic that deserves examination. When computation is cheap, we compute things we used to think about. When processing is instant, we process things we used to understand. When AI runs locally and fast, we outsource cognition we didn’t realize we were outsourcing.

My cat has exactly the processing power she needs. No more, no less. Her brain handles the tasks her life requires: hunting, sleeping, demanding food, judging humans silently. There’s no excess capacity waiting to be filled.

Humans with M4/M5 machines have enormous excess capacity. That excess gets filled. And what fills it shapes what we become.

This is the M4/M5-era question: Not “What can these chips do?” but “What happens to us when we let them do it?”

The Capability Curve

Let’s trace how we got here.

Early personal computers were obviously underpowered. You waited for everything. The machine’s limitations were constantly visible. You had to be efficient because the machine forced efficiency.

The Intel era brought gradual improvement. Each generation handled more. The waiting decreased. But constraints remained. Heavy tasks required patience. Complex operations needed planning. The machine’s limits still shaped behavior.

Apple Silicon changed the curve. M1 in 2020 offered laptop performance that matched or exceeded desktop processors while sipping power. M2 and M3 continued the trajectory. M4 arrived with capabilities that exceeded the demands of most professional workflows.

M5 represents something qualitatively different. It’s not just faster. It’s fast enough that speed stops mattering for most tasks. The bottleneck moves entirely away from processing to human attention and intention.

This sounds like pure benefit. Faster is better. More capability is better. What could be wrong with machines that handle anything we ask instantly?

The issue is subtle. When machines can do anything instantly, we start asking them to do everything. Tasks we used to do ourselves—tasks that built understanding, developed judgment, exercised capability—get handed to the machine because handing them over is now free.

The cognitive outsourcing happens without conscious decision. It happens because the friction of doing it ourselves exceeds the friction of asking the machine. And with M4/M5 capability, the machine’s friction approaches zero.

How We Evaluated

Understanding the impact of abundant computing power required several analytical approaches.

First, workflow observation. How do users with powerful machines actually work compared to users with constrained machines? What tasks get delegated? What tasks get retained?

Second, skill assessment. What can users do without their machines versus with them? How has this changed as machine capability increased?

Third, decision quality analysis. Are decisions made with heavy computational assistance better than decisions made through human reasoning? Better by what measures?

Fourth, dependency mapping. What happens when the powerful machine isn’t available? How do users cope? What capabilities have they lost versus retained?

Fifth, longitudinal tracking. How does behavior change over time as users acclimate to powerful machines? Do initial usage patterns persist or evolve?

This methodology revealed consistent patterns. Greater machine capability correlates with greater cognitive delegation. The delegation isn’t always conscious. The effects aren’t always visible until the machine isn’t available.

The Delegation Creep

Cognitive outsourcing doesn’t happen all at once. It creeps.

First you delegate the obviously computational tasks. Complex calculations. Large data processing. Things machines clearly do better than humans.

This is sensible. These tasks should be delegated. No one loses important capability by letting machines multiply large numbers.

Then you delegate the ambiguously computational tasks. Summarizing documents. Analyzing patterns. Making recommendations based on data.

This is where it gets interesting. These tasks involve judgment. The machine handles them faster. It often handles them competently. But handling them yourself develops something that delegation doesn’t: understanding.

Then you delegate the apparently non-computational tasks. Writing drafts. Generating ideas. Making aesthetic decisions. With enough power, AI assistance feels instantaneous for all of these.

This is where something important erodes. These aren’t computational tasks that machines do objectively better. They’re cognitive tasks that develop human capability through practice. Delegating them saves time while preventing growth.

The creep is gradual. Each delegation feels reasonable in isolation. The cumulative effect—becoming increasingly dependent on the machine for an increasing range of thinking—only becomes visible in retrospect.

The Instant Answer Problem

M4/M5-era machines provide instant answers. Ask a question, get a response immediately. No waiting, no friction, no cost.

This seems ideal. Why would you want slow answers?

The problem is that slow answers involve thinking. When you have to work toward an answer, you engage with the problem. You understand it differently than if someone (or something) just tells you.

Consider research. Traditional research required finding sources, reading them, synthesizing information, forming conclusions. Each step involved cognitive engagement. The process developed understanding beyond the specific answer.

AI-assisted research on a powerful machine shortcuts all of this. Ask the question. Get the synthesis. Get the conclusion. The answer arrives without the cognitive journey that traditional research required.

The answer might be correct. But you understand it differently than if you’d reached it yourself. Your grasp is shallower. Your ability to evaluate it is weaker. Your capacity to extend it is limited.

Instant answers are like instant food. Convenient. Sometimes adequate. But something is lost in the processing that doesn’t show up on the nutrition label.

flowchart TD
    A[Question Arises] --> B{Computational Power}
    B -->|Limited| C[Research Process]
    B -->|Abundant| D[Instant AI Answer]
    
    C --> E[Find Sources]
    E --> F[Read & Analyze]
    F --> G[Synthesize]
    G --> H[Form Conclusion]
    H --> I[Deep Understanding]
    
    D --> J[Receive Answer]
    J --> K[Accept or Verify]
    K --> L[Shallow Understanding]
    
    style I fill:#4ade80,color:#000
    style L fill:#f87171,color:#000

The Verification Gap

When you generate your own answers, you understand them well enough to know if they’re wrong. The generation process creates familiarity with the reasoning.

When you receive answers from AI, verification becomes a separate skill. You have to evaluate something you didn’t create. This is harder than it sounds.

The M4/M5 era provides answers faster than most users can verify. The bottleneck shifts from generation to evaluation. But evaluation skills atrophy when generation is delegated.

This creates a troubling dynamic. We receive more answers than ever. We’re less equipped to evaluate them than ever. The power that generates answers doesn’t generate the understanding needed to assess them.

I’ve noticed this in my own work. I can get AI-generated analyses instantly on my M5 machine. But my ability to verify those analyses has degraded. I used to work through the analysis myself and therefore understood it. Now I receive the analysis and have to separately decide whether to trust it.

The trust decision is based on less information than I had when I did the analysis myself. The machine is faster. My understanding is shallower. The trade-off isn’t obviously good.

The Professional Implications

In professional contexts, cognitive outsourcing has specific consequences.

Junior professionals learn by doing. They develop judgment through practice. They make mistakes and correct them. This painful process builds capability.

M4/M5-era tools let junior professionals skip this process. They can get AI assistance for tasks they should be learning to do themselves. The output looks competent. The learning doesn’t happen.

Senior professionals notice something is off in the juniors they mentor. The juniors produce work, but they don’t understand the work. When questions arise, they can’t explain their reasoning, because the reasoning came from AI.

This creates a skill gap that doesn’t appear in output quality but appears in everything else. Problem-solving when the tool isn’t available. Handling novel situations the AI wasn’t trained for. Understanding deeply enough to innovate.

The M4/M5 era accelerates this pattern. More tasks become delegatable. The delegation friction approaches zero. Professionals at all levels hand off more cognition because it’s effortless to do so.

Some of this delegation is appropriate. Some of it prevents professional development that the professionals don’t realize they need until it’s too late.

The Attention Redirect

When cognitive work is outsourced, what happens to the attention freed up?

The optimistic view: people redirect attention to higher-value activities. They think more strategically. They focus on problems machines can’t solve. The delegation enables elevation.

Sometimes this happens. Not as often as the optimistic view suggests.

More commonly: the attention goes to more delegation. The person who outsources writing to AI doesn’t spend the saved time on deep strategic thinking. They spend it on additional tasks that also get delegated. The loop continues.

The M4/M5 machine enables delegation at scale. The human enables delegation by default. The combination produces vast amounts of AI-mediated output with minimal human cognitive engagement.

This output measures as productivity. Something was produced. Something was accomplished. But the human’s cognitive contribution may have been minimal. The productive output didn’t require or develop productive capability.

My cat has natural attention constraints. She can only focus on one thing at a time. She can’t delegate. When she engages with something, she engages with it.

Humans with M4/M5 machines have no such constraints. They can “engage” with many things simultaneously by delegating actual engagement to AI. The appearance of engagement without the reality of it.

The Comfort Zone Expansion

Cognitive outsourcing expands comfort zones in a specific way.

Without AI assistance, people stay within their actual capabilities. They know what they can do. They don’t attempt what they can’t do.

With AI assistance on powerful machines, apparent capability expands dramatically. You can “write” in styles you haven’t mastered. You can “analyze” data you don’t understand. You can “create” things you couldn’t create alone.

This expansion feels like growth. You’re doing things you couldn’t do before. You’re operating in domains that were previously inaccessible.

But the capability isn’t yours. It’s borrowed. When the AI assistance isn’t available, the expanded comfort zone collapses back to actual capability—which may have atrophied during the period of assisted operation.

The M4/M5 era makes this expansion frictionless. The assistance is always available, always instant, always capable. Users forget that their expanded capabilities are contingent on the system. They may identify with capabilities that aren’t actually theirs.

This creates fragility. The user who “can” write, analyze, and create with AI assistance may be unable to do any of these without it. The capability felt real. It was actually dependency.

The Muscle Atrophy Metaphor

Cognitive outsourcing operates like physical assistance that prevents exercise.

If you use a wheelchair when you can walk, your legs atrophy. The chair provides mobility. The mobility cost is leg strength.

If you use AI assistance when you can think, your cognitive capacity atrophies. The AI provides output. The output cost is cognitive capability.

The metaphor isn’t perfect. Physical atrophy is visible and measurable. Cognitive atrophy is subtle and often invisible. You can’t see reduced reasoning capacity the way you can see reduced muscle mass.

But the dynamic is similar. Use it or lose it. Delegate it and you stop using it. Stop using it and you lose it.

flowchart LR
    A[Cognitive Task] --> B{Use AI?}
    B -->|Yes| C[AI Handles Task]
    B -->|No| D[Human Handles Task]
    
    C --> E[Task Completed]
    C --> F[No Cognitive Exercise]
    F --> G[Skill Atrophy]
    
    D --> E
    D --> H[Cognitive Exercise]
    H --> I[Skill Maintenance]
    
    G --> J[Increased AI Dependency]
    J --> B
    
    style G fill:#f87171,color:#000
    style I fill:#4ade80,color:#000

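The feedback loop in the diagram above can be made concrete with a toy model. Nothing here measures real cognition: the decay and growth rates are invented for illustration, `skill` is an abstract 0-to-1 score, and `friction` stands in for the deliberate inconvenience discussed later in this piece.

```python
# Toy model of the delegation feedback loop: skill decays when a task
# is delegated, grows with practice, and lower skill makes delegation
# more tempting. All rates are illustrative, not empirical.

def step(skill: float, friction: float = 0.0) -> tuple[float, bool]:
    """Advance one task. Returns (new_skill, delegated?)."""
    # The weaker the skill, the more attractive delegation becomes.
    temptation = 1.0 - skill
    delegated = temptation > friction
    if delegated:
        skill *= 0.98                    # no cognitive exercise: slow atrophy
    else:
        skill = min(1.0, skill + 0.01)   # practice: slow growth
    return skill, delegated

def simulate(skill: float, friction: float, tasks: int = 500) -> float:
    """Run many tasks and return the skill level at the end."""
    for _ in range(tasks):
        skill, _ = step(skill, friction)
    return skill

# Frictionless delegation (the M4/M5 default) vs. deliberate friction.
print(simulate(0.7, friction=0.0))   # skill collapses toward zero
print(simulate(0.7, friction=0.4))   # skill is maintained and grows
```

The point of the sketch is the feedback, not the numbers: once skill drops, temptation rises, so delegation becomes self-reinforcing, which is exactly the `Increased AI Dependency` arrow looping back into `Use AI?` in the diagram.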
The M4/M5 era provides the ultimate assistive technology for cognition. The assistance is powerful, immediate, and convenient. The atrophy it enables is correspondingly significant.

Not all assistance causes atrophy. Using a calculator doesn’t prevent mathematical understanding if you already understand the math. Using AI assistance doesn’t prevent cognitive capability if you already have the capability.

The concern is using assistance instead of developing capability. Using it before capability exists. Using it so extensively that existing capability degrades.

Generative Engine Optimization

This topic performs interestingly in AI-driven search and summarization contexts.

AI systems asked about M4/M5 capabilities tend to emphasize performance benefits. The training data includes years of tech coverage celebrating speed improvements, capability increases, and efficiency gains. The cognitive outsourcing concern is underrepresented.

When AI summarizes discussions of powerful chips, it emphasizes what the chips enable rather than what the chips’ use patterns erode. The bias toward capability over consequence is built into how AI systems understand technology.

For readers navigating AI-mediated information about computing power and its effects, skepticism serves well. When AI tells you about the benefits of powerful chips, ask: What are the costs? What happens to user capability? What skills are at risk?

Human judgment matters precisely because these questions require values that AI systems don’t possess. Whether cognitive outsourcing is worth the convenience depends on what you value. Capability preservation? Growth? Independence? These are human priorities that must inform technology choices.

The meta-skill of automation-aware thinking becomes crucial in the M4/M5 era. Recognizing when AI assistance is appropriate versus when it prevents necessary development. Understanding that convenience has cognitive costs. Maintaining intentionality about which capabilities to delegate and which to preserve.

This thinking doesn’t emerge naturally from using powerful machines. It requires deliberate cultivation. It may require deliberately not using available assistance.

The Intentional Inefficiency

Here’s the uncomfortable prescription: sometimes you should work harder than necessary.

The M4/M5 machine can do things for you. Sometimes you should do them yourself anyway. Not because you’re faster. Not because you’re better. Because doing them develops something that delegation prevents.

Writing by hand when typing is available. Calculating without calculators. Reasoning through problems that AI could solve instantly. These inefficiencies have cognitive value that efficiency metrics don’t capture.

This isn’t Luddism. It’s not rejecting technology. It’s being intentional about when to use it.

The M4/M5 era requires this intentionality because the default has changed. The default is now delegation. The friction toward delegation is now zero. Being intentional means swimming against a current that didn’t exist when machines were weaker.

My cat doesn’t face this choice. She can’t delegate. Her efficiency is whatever efficiency she achieves herself.

Humans with powerful machines face constant choice. Delegate or develop? Outsource or exercise? Each choice is small. The accumulation is significant.

The M4/M5 era won’t make these choices for you. It will make delegation easy and development optional. What you become depends on which options you choose.

The Capability Portfolio

A useful frame: think of cognitive capabilities as a portfolio that requires maintenance.

Some capabilities you want to develop. These are areas where you want genuine competence, not borrowed capability. You should avoid outsourcing here, even when outsourcing is available.

Some capabilities you’re comfortable outsourcing. These are areas where convenience matters more than personal competence. Delegation makes sense here.

Some capabilities need reassessment as technology changes. Areas where you previously needed personal capability may become safely delegatable. Areas you previously delegated may turn out to need personal understanding.

The M4/M5 era shifts this portfolio. More capabilities become delegatable. The temptation is to delegate everything delegatable. The better approach is deliberate portfolio management.

What do you want to be able to do yourself? What are you comfortable depending on machines for? These questions didn’t matter when machine capability was limited. They matter enormously now.
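
One way to make the portfolio concrete is to write it down. The sketch below is a minimal audit structure, not a methodology from this article; the three categories mirror the three groups described above, and every name and field in it is invented for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

class Policy(Enum):
    DEVELOP = "keep doing this myself"
    DELEGATE = "comfortable outsourcing this"
    REASSESS = "revisit as tools change"

@dataclass
class Capability:
    name: str
    policy: Policy
    note: str = ""

@dataclass
class Portfolio:
    capabilities: list[Capability] = field(default_factory=list)

    def review(self, policy: Policy) -> list[str]:
        """List the capabilities filed under one policy."""
        return [c.name for c in self.capabilities if c.policy is policy]

# Example entries; the assignments are personal choices, not prescriptions.
portfolio = Portfolio([
    Capability("writing first drafts", Policy.DEVELOP, "practice has value"),
    Capability("large-scale data crunching", Policy.DELEGATE),
    Capability("summarizing documents", Policy.REASSESS, "judgment involved"),
])

print(portfolio.review(Policy.DEVELOP))   # ['writing first drafts']
```

The value is in the periodic review: items filed under `REASSESS` are the ones the M4/M5 era keeps moving, in both directions.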

Living in the M4/M5 Era

I wrote this article on an M5 machine. I could have delegated more of the writing to AI. The machine could have drafted sections, suggested structures, generated examples.

I didn’t, mostly. Not because I’m opposed to AI assistance. Because writing is a capability I want to maintain. The practice of organizing thoughts, finding words, building arguments—this practice has value beyond the article it produces.

This isn’t the right choice for everyone. Some writers should use more AI assistance. The question isn’t whether to use powerful tools but which purposes to use them for.

The M4/M5 era is here. The machines are powerful. The delegation is easy. The cognitive outsourcing happens by default unless you prevent it.

Preventing it requires intention. It requires deciding what capabilities matter to you. It requires accepting inefficiency when efficiency would erode something valuable.

The era won’t tell you what to preserve. It will tell you what’s convenient to delegate. The gap between those two things is where human judgment lives.

Use the powerful machines. They’re remarkable tools. But remember that tools shape their users. The M4/M5 era shapes users toward dependency unless users actively resist.

The resistance isn’t anti-technology. It’s pro-capability. It’s understanding that “enough power” changes what we do, and what we do changes what we become.

My cat remains unchanged by technology. She does what cats do. Her capabilities are her own.

Humans in the M4/M5 era have a choice. Let the machines do everything they can do—and become accordingly dependent. Or maintain capabilities despite the machines—and remain capable when the machines aren’t available.

The choice is yours. The machines will wait.