Apple Intelligence One Year Later: What Actually Shipped and What Quietly Died
The Keynote vs. the Kitchen
There is a particular kind of dishonesty that only Apple can pull off with a straight face. It looks like a demo. It sounds like a promise. And it ships, eventually, in a form that makes you wonder if you imagined the original presentation.
One year ago, Apple Intelligence was the centerpiece of WWDC 2026. Craig Federighi walked us through a vision of seamless, private, intelligent computing. Siri would finally understand context. Writing tools would transform your prose. Image generation would let you create personalized visuals without leaving Messages. On-device processing would keep your data safe while cloud-based “Private Cloud Compute” would handle the heavy lifting transparently.
It was, by Apple’s standards, an ambitious pitch. By anyone else’s standards, it was table stakes — Google and Samsung had shipped comparable features months earlier. But Apple’s argument was different. They weren’t first. They were going to be better. More private. More integrated. More Apple.
So here we are, twelve months later. I’ve been using Apple Intelligence daily across an iPhone 16 Pro, an M4 MacBook Air, and an iPad Pro. My British lilac cat, who lounges next to my keyboard during every writing session, has been an unwilling participant in dozens of Image Playground experiments. The results have been… instructive.
This is not a review of what Apple announced. This is a review of what Apple delivered. The gap between those two things is where the real story lives.
Let me walk through what actually happened.
How We Evaluated
Before diving into individual features, let me explain how I approached this assessment. It would be easy to cherry-pick examples — a brilliant Siri response here, a terrible image generation there — and construct whatever narrative I wanted. I’ve seen plenty of that from both sides.
Instead, I tracked my usage systematically over the past six months. Every day, I noted which Apple Intelligence features I used, whether they worked as expected, and whether I would have been better off doing the task manually. I also compared results directly with equivalent features on a Pixel 9 Pro and a Samsung Galaxy S27 Ultra, both of which I keep as secondary devices.
The methodology is simple but honest. I’m not running benchmark suites or conducting laboratory experiments. I’m a writer and technologist who uses these tools in real work. My evaluation criteria are straightforward:
- Does it work reliably? Not once, not in a demo, but consistently over weeks of daily use.
- Is it faster than the alternative? If an AI feature takes longer than doing the task manually, it has failed.
- Is the output quality acceptable? Not perfect — acceptable. Good enough that I wouldn’t redo it.
- Does it respect privacy as promised? This one is harder to verify, but I’ve done what I can.
I scored each major feature on these four criteria. The results are sobering.
```mermaid
graph LR
    A[Apple Intelligence Features] --> B[Writing Tools]
    A --> C[Siri Improvements]
    A --> D[Image Playground]
    A --> E[Notification Summaries]
    A --> F[Visual Intelligence]
    A --> G[Private Cloud Compute]
    B --> B1["✅ Shipped & Works"]
    C --> C1["⚠️ Partially Shipped"]
    D --> D1["⚠️ Shipped but Limited"]
    E --> E1["✅ Shipped & Works"]
    F --> F1["❌ Barely Shipped"]
    G --> G1["✅ Shipped & Works"]
```
Writing Tools: The Quiet Success Story
Let’s start with what actually works. Writing Tools is the feature Apple talked about least during the keynote and the one that delivers the most value in daily use.
The proofreading function is genuinely good. I write between three and five thousand words a day, and Writing Tools catches things that Grammarly misses. Not grammatical errors — I rarely make those — but stylistic problems. Passive constructions. Unnecessarily complex sentences. Paragraphs that bury the point. It’s not revolutionary. But it’s useful.
The rewrite feature is more interesting. You can select a block of text and ask Writing Tools to make it more concise, more professional, or friendlier. The “concise” option is the one I use most. It consistently cuts word count by 20-30% while preserving meaning. That’s a real productivity gain for someone who writes for a living.
The summarization function works well for emails and long articles. I use it daily to process my inbox. It correctly identifies the key points in messages about 85% of the time. In the remaining 15%, it either misses a crucial detail or overemphasizes something minor. That’s not perfect, but it’s good enough that I check summaries first and only read the full message when something seems off.
Where Writing Tools falls short is creative writing. If you’re drafting fiction, poetry, or anything that requires a distinctive voice, the suggestions are actively harmful. They flatten everything into corporate-pleasant prose. This isn’t surprising — it’s how language models work — but Apple never mentioned this limitation.
The real victory of Writing Tools is integration. It’s everywhere. Every text field, every app, every context. You don’t need to copy text to a separate application. You don’t need to switch contexts. It’s just there, in the right-click menu, waiting. This is what Apple does best — not inventing the technology, but making it frictionless.
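That frictionlessness extends to third-party apps largely by default, though developers can tune how much of it they get. Here is a minimal sketch of the opt-in, assuming the UIKit control Apple added in the iOS 18 era (the writingToolsBehavior property on text views; treat the exact naming as my recollection of the API rather than a definitive reference):

```swift
import UIKit

final class DraftViewController: UIViewController {
    private let editor = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        editor.frame = view.bounds
        view.addSubview(editor)

        // Opt this text view in to the full Writing Tools experience
        // (proofread, rewrite, summarize). .limited keeps suggestions in a
        // panel without letting the system rewrite the text directly, and
        // .none opts out entirely. Names as I recall them from the iOS 18-era
        // SDK; verify against current documentation.
        editor.writingToolsBehavior = .complete
    }
}
```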
I’ll give them credit where it’s due. Writing Tools is the most polished implementation of AI writing assistance on any platform. Google’s equivalent on Android is faster but less accurate. Samsung’s is more capable in some ways but requires too many taps to access. Apple nailed the interaction design.
Siri: The Elephant in the Room
Now for the uncomfortable part.
Siri was supposed to be the marquee feature of Apple Intelligence. The keynote showed Siri understanding complex, multi-step requests. Finding a photo from last Tuesday’s dinner and texting it to the person who was sitting across from you. Pulling information from emails and calendar events to answer contextual questions. Actually being useful.
One year later, Siri is better. I want to be clear about that. The natural language understanding has improved measurably. You can speak more conversationally and Siri will usually parse your intent correctly. The response time is faster. The voice sounds more natural. These are real improvements.
But the promise was transformational, and what shipped was incremental.
The contextual awareness that was demoed so impressively at WWDC is spotty at best. Siri can sometimes reference something you mentioned in a previous query, but only within the same conversation. Close the session, and context evaporates. The “personal context” feature exists in a limited form — it knows your family members and can usually figure out which “meeting” you mean. But it cannot piece together information from different apps to answer complex questions.
I tested this repeatedly. “When is my next meeting with Sarah about the Q3 budget?” The keynote showed exactly this kind of query working flawlessly. In reality, Siri responds with a list of all upcoming meetings and suggests I look through them.
The App Intents framework, which enables third-party Siri integration, remains limited. First-party apps work reasonably well — reminders, messages, timers, HomeKit. But most developers haven’t adopted the new framework, and those who have report that the review process is slow and the API is restrictive.
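To make that concrete, here is roughly what exposing one action to Siri looks like under App Intents. The intent itself is hypothetical (AddToReadingListIntent and its little store are my invention for illustration), but the protocol conformance, the parameter wrapper, and the perform method are the shape of what developers are writing:

```swift
import AppIntents
import Foundation

// Hypothetical intent: lets Siri file a URL into an app's reading list.
struct AddToReadingListIntent: AppIntent {
    static var title: LocalizedStringResource = "Add to Reading List"

    @Parameter(title: "Article URL")
    var url: URL

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Real apps would persist this; the store below is a stand-in.
        ReadingListStore.shared.add(url)
        return .result(dialog: "Added to your reading list.")
    }
}

// Minimal stand-in store so the sketch is self-contained.
final class ReadingListStore {
    static let shared = ReadingListStore()
    private(set) var urls: [URL] = []
    func add(_ url: URL) { urls.append(url) }
}
```

None of this is difficult to write. The friction developers describe is everything around it: which actions Apple will actually surface, and the separate review that gates them.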
Here’s what genuinely frustrates me. Google Assistant has been doing most of this for years. Not perfectly, not privately, but functionally. When I ask my Pixel to find photos from a specific event and share them, it works. When I ask Siri, it usually doesn’t.
The charitable interpretation is that Apple is prioritizing privacy over capability, and that’s a legitimate trade-off. The uncharitable interpretation is that Siri’s architecture has been fundamentally broken for years and Apple Intelligence was a coat of paint on a crumbling wall. The truth is probably somewhere in between, but closer to the uncharitable end than Apple would like to admit.
Image Playground and Genmoji: Fun but Shallow
Image Playground was the feature that got the most attention at launch. The idea was compelling: generate images from text prompts, directly integrated into Messages, Notes, and other apps. Apple positioned it as accessible creativity — you don’t need to be an artist to create expressive visuals.
The reality is more limited than the pitch. Image Playground generates images in three styles: Animation, Illustration, and Sketch. There is no photorealistic option. Apple was deliberate about this — they wanted to avoid deepfakes and the ethical minefield of photorealistic AI generation. I respect the decision, even if it limits the feature’s utility.
The quality of the generated images is… fine. They’re recognizable. They’re sometimes charming. My cat appeared in at least thirty Image Playground creations during testing and the likeness was occasionally close enough to be amusing. But everything looks like it came from the same artist — Apple’s model has a very specific aesthetic, and you can’t escape it.
Genmoji is the better implementation. Creating custom emoji from text descriptions is genuinely delightful. I use them in group chats regularly. They’re conversation starters. They’re inside jokes made visual. The quality is consistent, and the turnaround is fast. If Apple had positioned Image Playground as “Genmoji but bigger,” expectations would have been calibrated correctly and the reception would have been warmer.
The deeper problem is that it exists in a world where Midjourney, DALL-E, and Stable Diffusion offer vastly more powerful generation. Apple’s version is safer and more integrated — but dramatically less capable. For anyone who has used a serious image generation tool, Image Playground feels like a toy.
Comparison is inevitable, and it’s not kind to Apple.
Notification Summaries: Underrated and Underappreciated
This is the feature nobody talks about, and it might be the most valuable one Apple shipped.
Notification summaries use on-device intelligence to condense groups of notifications into a single, readable summary. Instead of seventeen separate Slack messages, you get: “Three threads active: design review feedback, sprint planning questions, and a lunch poll.” Instead of a dozen email notifications, you get: “Four messages require responses. Two are meeting invitations.”
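Mechanically, you can think of it as a grouping pass over the notification backlog followed by a condensing pass. The sketch below is purely my own mental model, not Apple's implementation, and it skips the on-device language model that does the actual condensing:

```swift
import Foundation

// Toy model of the idea: collapse a pile of notifications into one line per app.
// Illustrative only; the real feature condenses content with an on-device
// model rather than just counting threads.
struct IncomingNotification {
    let app: String
    let thread: String
    let body: String
}

func summarize(_ notifications: [IncomingNotification]) -> [String] {
    let byApp = Dictionary(grouping: notifications, by: \.app)
    return byApp.map { app, items in
        let threadCount = Set(items.map(\.thread)).count
        return "\(app): \(items.count) notifications across \(threadCount) thread(s)."
    }
}
```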
It works. Not perfectly — I’ve seen it miscategorize urgent messages as routine, and it occasionally merges unrelated threads — but well enough that it has fundamentally changed how I interact with my phone. I check notifications less frequently because I trust the summaries to flag what matters. That’s a meaningful quality-of-life improvement.
The accuracy has improved over the past year. Early versions had a tendency to summarize news notifications in misleading ways, which created some genuinely embarrassing moments. Apple patched this quickly, and the current version is much better at preserving the tone and intent of the original notification.
Samsung’s equivalent feature, introduced with One UI 7, is comparable in quality. Google’s notification management on the Pixel is slightly better at prioritization but worse at summarization. Apple wins on integration and consistency, which is the pattern you’ll notice throughout this article.
On-Device vs. Cloud: The Privacy Reality
Apple’s pitch for Apple Intelligence leaned heavily on privacy. Most processing happens on-device. When tasks require more compute power, they’re handled by Private Cloud Compute — Apple silicon servers running in Apple data centers, with cryptographic guarantees that your data isn’t stored or accessible to Apple.
This is genuinely impressive from a technical standpoint. Apple published detailed documentation of the Private Cloud Compute architecture, invited security researchers to audit it, and the consensus among the cryptography community is that the design is sound. Your data is processed in a secure enclave, the results are returned to your device, and the server retains nothing.
But there’s a catch that Apple doesn’t emphasize. The on-device models are significantly less capable than the cloud models. When Writing Tools runs on-device, the suggestions are noticeably worse than when it uses Private Cloud Compute. When Siri processes a complex query on-device, it’s slower and less accurate than when it offloads to the cloud.
This creates an invisible trade-off. You can keep everything on-device and get worse results, or allow cloud processing and get better results while trusting Apple’s privacy guarantees. Most users don’t know this choice exists. The system makes it automatically based on task complexity, and there’s no clear indicator telling you which path was taken.
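Apple has not published how that routing decision is made, so the following is only a conceptual sketch of the trade-off as I understand it, not their logic: a request either stays local or gets escalated based on some notion of task complexity, and the user never sees which path was taken.

```swift
import Foundation

// Conceptual sketch only. Apple has not documented its routing heuristics;
// this just illustrates the shape of the on-device vs. cloud decision.
enum ExecutionPath { case onDevice, privateCloudCompute }

struct IntelligenceRequest {
    let promptTokens: Int
    let needsLongContext: Bool
    let userAllowsCloud: Bool
}

func route(_ request: IntelligenceRequest, onDeviceTokenBudget: Int = 4_096) -> ExecutionPath {
    // If cloud processing is disabled, everything stays local and takes
    // the capability hit described above.
    guard request.userAllowsCloud else { return .onDevice }
    // Otherwise escalate when the task looks too heavy for the local model.
    if request.promptTokens > onDeviceTokenBudget || request.needsLongContext {
        return .privateCloudCompute
    }
    return .onDevice
}
```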
I tested this by disabling cloud processing in Settings and using Apple Intelligence for a week with on-device models only. The difference was substantial. Writing Tools caught fewer errors. Siri understood fewer queries. Image generation was slower and lower quality. The on-device experience is, being generous, about 60% as capable as the full experience.
This matters because Apple’s privacy argument is their primary differentiator. If the private experience is meaningfully worse than the non-private experience, users face a choice Apple pretends doesn’t exist. Every competitor ships cloud-first AI that’s more capable, and Apple’s response is “but ours is private” — while quietly routing your complex queries to the cloud anyway.
The privacy architecture is still better than anything Google or Samsung offers. Google processes everything in their cloud and trains on your data unless you opt out. Samsung’s approach is similarly cloud-dependent. Apple is meaningfully more private.
But “more private than Google” is a low bar. And the gap between on-device and cloud capability raises legitimate questions about whether true on-device AI is a viable long-term strategy or a marketing position that will become increasingly difficult to maintain as models grow larger.
The Competitive Landscape
Let’s put Apple Intelligence in context. Here’s where each platform stands after a year of AI features:
```mermaid
graph TD
    subgraph Apple
        A1[Writing Tools ★★★★☆]
        A2[Siri ★★☆☆☆]
        A3[Image Generation ★★★☆☆]
        A4[Privacy ★★★★★]
        A5[Notification Summaries ★★★★☆]
    end
    subgraph Google
        G1[Writing Assistance ★★★☆☆]
        G2[Google Assistant ★★★★☆]
        G3[Image Generation ★★★★★]
        G4[Privacy ★★☆☆☆]
        G5[Smart Notifications ★★★★☆]
    end
    subgraph Samsung
        S1[Writing Tools ★★★☆☆]
        S2[Bixby/Galaxy AI ★★★☆☆]
        S3[Image Generation ★★★★☆]
        S4[Privacy ★★★☆☆]
        S5[Notification Management ★★★☆☆]
    end
```
Google remains the capability leader. Gemini integration across Android is deeper and more capable than anything Apple offers. Google Assistant understands context better, generates better images, and handles complex multi-step tasks that Siri simply cannot. The trade-off is privacy — Google’s AI runs on their servers, processes your data, and feeds their advertising business.
Samsung occupies an interesting middle ground. Galaxy AI features are powered by a mix of on-device processing and partnerships with Google. The result is a patchwork — some features are excellent, others feel bolted on. Samsung’s Live Translate for phone calls remains the single most impressive AI feature on any smartphone.
Apple’s advantage is coherence. When Apple Intelligence features work, they work seamlessly across the ecosystem. The same writing tools on your phone, tablet, and laptop. The same Siri improvements everywhere. The same privacy guarantees throughout. No other company offers this level of integration.
But coherence without capability is just consistent mediocrity. And that’s the uncomfortable position Apple finds itself in. They’ve built a beautiful frame and hung a painting that isn’t finished yet.
What Quietly Died
Every product launch involves features that are announced and never mentioned again. Apple Intelligence is no exception. Let me document what has quietly disappeared or been indefinitely delayed.
Personalized Memory System. The WWDC demo showed Apple Intelligence building a model of your preferences, habits, and relationships over time. This would power more intelligent suggestions and more contextual Siri responses. It exists in a rudimentary form — Siri knows your contacts and can reference recent conversations — but the comprehensive “personal intelligence” system that was demoed hasn’t materialized.
Cross-App Actions. The ability for Siri to chain actions across multiple apps was a headline feature. “Order my usual from that coffee shop and text Sarah I’ll be ten minutes late.” This works for a very limited set of app combinations. Apple-to-Apple apps can chain reasonably well. But the broad third-party integration that was demonstrated remains largely unavailable.
Advanced Photo Editing. Apple showed AI-powered photo editing that could remove objects, change backgrounds, and enhance images intelligently. The Clean Up tool shipped and works for simple object removal. The more advanced capabilities — background replacement, intelligent enhancement, style transfer — haven’t appeared.
Smart Reply Suggestions. Mail and Messages were supposed to offer contextually aware reply suggestions. Mail has basic smart replies that are slightly better than the pre-AI autocomplete. Messages has nothing beyond existing quick reactions. The demo showed nuanced, context-aware responses. What shipped is generic.
Proactive Intelligence. The vision of Apple Intelligence anticipating your needs — surfacing relevant documents before meetings, suggesting contacts based on events — was the most ambitious promise. Almost none of this has shipped. Siri Suggestions remains essentially unchanged.
The pattern is clear. Apple announced a comprehensive, interconnected AI system. What they shipped was a collection of individual features, some good, some mediocre, that don’t yet form the cohesive intelligence layer that was promised.
Why the Gap Exists
Understanding why there’s such a large gap between the announcement and the delivery requires understanding Apple’s organizational reality.
Apple is not an AI company. They are a hardware company that makes software to sell hardware. Their culture and incentives are optimized for shipping physical products with tight software integration. AI deployment requires a fundamentally different approach — rapid iteration, tolerance for imperfection, willingness to ship unfinished products and improve them publicly.
Google can ship an AI feature that’s 70% ready because their users expect iteration. Apple’s users expect polish. This creates a genuine dilemma: ship early and disappoint, or ship late and fall behind. Apple tried to thread the needle by announcing early and shipping incrementally. The result satisfies nobody.
There’s also a talent problem. Apple has been hiring aggressively in AI, but they’re competing with Google, OpenAI, Anthropic, and dozens of startups for the same researchers. Apple’s culture of secrecy is a disadvantage — top AI researchers want to publish papers and build public reputations. Apple’s approach is a harder sell than it used to be.
The privacy commitment, while admirable, also imposes real constraints. Training on user data — which Google does extensively — produces better models. Apple’s refusal to do this means their models start from a weaker position. Synthetic data and public datasets can only partially compensate. The privacy tax is real, and it’s measured in capability.
None of this excuses the gap between promise and delivery. But it explains it. Apple is fighting a structural disadvantage with organizational strengths that don’t directly address the core challenge.
The Developer Perspective
I’ve spoken with several iOS developers about their experience implementing Apple Intelligence features in their apps. The consensus is… mixed.
The App Intents framework, which enables Siri integration, is well-designed but poorly documented. Developers report spending significant time reverse-engineering behavior that should be clearly specified. The review process for App Intents is separate from the normal App Review process and adds weeks to release timelines.
The on-device model APIs are more positively received. Developers can use Apple’s foundation models for text processing, classification, and summarization within their apps. The performance is good, the APIs are clean, and the privacy guarantees extend to third-party usage. Several developers told me this is the most valuable part of Apple Intelligence from their perspective — not the consumer-facing features, but the underlying infrastructure.
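For a sense of scale, calling the on-device model is only a few lines. A minimal sketch, assuming an API along the lines of Apple's Foundation Models framework as developers have described it to me (a session object, a prompt in, a response out); check the current SDK before trusting any of the names here:

```swift
import FoundationModels

// Minimal sketch: ask the on-device foundation model to summarize text.
// Type and method names are my best recollection of the Foundation Models
// framework; verify against the current SDK before relying on them.
func summarizeOnDevice(_ text: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in two sentences."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```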
The frustration is with scope. Apple controls what actions Siri can perform in third-party apps, and the allowed actions are narrow. A restaurant app can let Siri place an order, but not browse the menu or ask about ingredients. The rigidity prevents bad experiences but also prevents great ones.
What This Tells Us About Apple’s Strategy
Strip away the marketing and the keynote theatrics, and Apple’s actual AI strategy becomes visible. It’s not what they said it was.
Apple is not trying to build the best AI. They’re trying to build the most integrated AI. Their bet is that a less capable but more seamlessly embedded intelligence will win over a more powerful but fragmented one. This is the same bet they made with every product category they’ve entered — not first, not most powerful, but most cohesive.
The problem is that this strategy has a capability threshold. The iPod didn’t need to be the most feature-rich MP3 player because the core task — playing music — was simple. The iPhone didn’t need the best camera because “good enough” was genuinely good enough. But AI is different. The gap between a voice assistant that understands you and one that doesn’t isn’t a matter of degree — it’s a matter of kind. A 90% accurate assistant is useful. A 70% accurate assistant is infuriating. There’s a cliff, and Apple is dangerously close to the edge.
My prediction is that Apple will make a significant acquisition or partnership in the next twelve months. The organic approach isn’t working fast enough. Apple has the cash to buy their way to competitiveness, and the strategic pressure to do so is mounting.
The alternative is that WWDC 2027 reveals a dramatic leap forward. Perhaps Apple has been sandbagging. I’d like to believe that, but companies about to reveal transformational products don’t typically spend a year shipping mediocre updates.
The User Experience Reality
Let me describe what using Apple Intelligence actually feels like in daily life, because the gap between feature lists and lived experience is enormous.
Most mornings, I pick up my iPhone and check notification summaries. They’re helpful. I respond to a few messages using Writing Tools to tighten my prose. That works well. I might ask Siri about the weather or to set a timer. That works too, as it always has.
Occasionally, I try something more ambitious. “Siri, find the article I was reading yesterday about urban planning and add it to my Reading List.” This never works. Not once in a year of trying. Siri either finds the wrong article, doesn’t understand the request, or simply apologizes and suggests I search manually.
The Image Playground icon sits in my Messages app bar, and I use it maybe once a week, usually to amuse friends. It’s entertainment, not utility. My cat has been rendered as a cartoon, an illustration, and a sketch so many times that I’ve started to feel guilty about it. She doesn’t seem to mind, though she’s more interested in the warm laptop than whatever I’m generating on its screen.
The writing assistance is the only feature I’d genuinely miss if it disappeared. Everything else is either too unreliable to depend on (Siri), too limited to be useful (Image Playground), or too invisible to notice (on-device processing).
This is the honest user experience of Apple Intelligence after one year. It’s not bad. It’s not transformational. It’s mostly fine. And for a company that built its reputation on “insanely great,” mostly fine is damning.
Generative Engine Optimization
There’s a dimension to Apple Intelligence that gets almost no attention but has significant implications: how it interacts with search and content discovery.
Apple Intelligence summarizes web pages, condenses search results, and generates answers from multiple sources. This is what creates the need for generative engine optimization: when AI systems mediate between users and content, the rules of discovery change.
For publishers and content creators, Apple Intelligence presents a subtle threat. If Siri can summarize an article, users don’t need to visit the article. If notification summaries condense newsletter content, users don’t need to open the newsletter. Every summarization feature is, implicitly, a traffic reduction feature.
I’ve noticed this in my own behavior. I read fewer full articles now. Not because the summaries are perfect, but because they’re good enough to tell me whether the full article is worth reading. For maybe 60% of the content I encounter, the summary suffices. Multiply that across readers and it translates into dramatically fewer page views for publishers.
The SEO implications are significant. Traditional search engine optimization targets Google’s ranking algorithm. But as AI assistants increasingly mediate search — Siri, Google’s AI Overviews, Samsung’s browsing assistant — content needs to be optimized for extraction, not just ranking. Clear structure, explicit key points, front-loaded conclusions. The inverted pyramid isn’t just good journalism anymore; it’s good generative engine optimization.
Apple hasn’t published guidelines for content creators the way Google has. But the patterns are visible. Content that is well-structured, clearly written, and factually dense gets better summaries. Content that buries the lede or pads word count gets mangled.
The irony is not lost on me. I’m writing a long-form article — exactly the kind that AI summarization threatens — about the AI system that threatens it. If Apple Intelligence summarizes this piece, it will probably do a decent job. And a reader might never scroll past the summary.
That’s the world we’re building. I’m not sure how I feel about it. But I think we should be honest about what’s happening.
What Comes Next
WWDC 2027 is days away as I write this. The rumor mill suggests significant updates to Apple Intelligence, including improved Siri capabilities, expanded App Intents, and possibly a new conversational interface. I’ll believe it when I see it shipping on my devices.
The broader question is whether Apple can close the capability gap while maintaining its privacy advantage. Better AI requires more data and more compute. More privacy means less data and constrained compute. Apple’s bet on efficient on-device models and Private Cloud Compute is elegant, but it may not scale as fast as the competition’s approach.
There’s also the question of user expectations. After a year of “mostly fine,” are Apple users willing to wait another year for “actually great”? The iPhone upgrade cycle suggests yes — Apple customers are loyal and patient. But loyalty has limits, and the Android AI experience is improving faster than the iOS one.
My honest assessment, twelve months in: Apple Intelligence is a B-minus product from an A-plus company. The infrastructure is solid. The privacy architecture is genuinely best-in-class. The integration is, as always, impeccable. But the intelligence itself — the actual AI capability — is mediocre by the standards of mid-2027.
That’s not a disaster. It’s not even necessarily a failure. It might just be Apple being Apple — slow, deliberate, and eventually excellent. The first iPhone was missing copy-and-paste and an app store. The first Apple Watch was slow with limited third-party support. Apple has a history of shipping incomplete products and refining them into category leaders.
But AI moves faster than hardware. The competitive landscape isn’t waiting for Apple to catch up. And the gap between the keynote and the kitchen — between what Apple shows and what Apple ships — is wider than it’s been in years.
I’ll keep using Apple Intelligence. Not because it’s the best AI available to me, but because it’s the most integrated into the ecosystem I’ve chosen. That’s Apple’s real moat — not technology, but lock-in. And until someone builds a bridge across that moat, Apple can afford to be behind on AI capability.
Whether that’s a sustainable strategy is a question for next year’s retrospective. Apple Intelligence is a promise partially kept, a vision partially realized, and a reminder that the most important word in “artificial intelligence” is still the second one.
My cat, for what it’s worth, remains unimpressed by all of it. She’s been staring at me from the other side of the desk for the past hour, waiting for her dinner. No amount of artificial intelligence will convince her that my writing schedule should take priority over her feeding schedule.
She’s probably right.