Automated Reading Recommendations Killed Curiosity Browsing: The Hidden Cost of AI Book Lists
The Bookshop You Stopped Visiting
I used to spend Saturday mornings in a bookshop on Crow Lane. Not a chain — an independent place with creaky floors, shelves organized by a logic known only to the owner, and a cat that slept on the poetry table. I had no plan when I walked in. I’d browse. I’d pick up a book because the cover caught my eye, or because it was next to something I recognized, or because it was face-out on a shelf and the title was strange enough to earn three seconds of my attention.
That’s how I discovered Ursula K. Le Guin. Not because an algorithm determined that fans of Tolkien frequently enjoy Le Guin. Because her book was sitting between a guide to mushroom foraging and a collection of Japanese woodblock prints, and the juxtaposition was so bizarre that I picked it up. I read the first page standing in the aisle. I bought it. It changed how I thought about science fiction, and about writing, and about the relationship between language and power.
No algorithm would have recommended that book to me at that time. My purchase history was almost entirely non-fiction — economics, history, a few biographies. Le Guin would not have appeared in “Customers who bought this also bought” because the customers who bought my usual books did not also buy Le Guin. The recommendation engine would have correctly identified my reading pattern and faithfully reproduced it. It would have given me more of what I already liked. It would never have given me what I didn’t know I needed.
That bookshop closed in 2024. The owner told me she couldn’t compete with Amazon’s prices, but she also said something that stuck with me: “People stopped coming in to browse. They come in knowing exactly what they want — they saw it on some list online. They ask for one specific book. If we have it, they buy it. If we don’t, they order it on their phone before they’ve left the shop. Nobody wanders anymore.”
She was describing a behavioral shift that goes far beyond one bookshop in one town. She was describing the death of curiosity browsing — the practice of exposing yourself to unexpected things in the hope that something will catch you off guard. And the cause of death was not Amazon itself. It was the recommendation algorithm that turned reading into a consumption pattern to be optimized rather than an exploration to be experienced.
The Echo Chamber of Your Reading List
Recommendation algorithms work by finding patterns in behavior. If you read three thrillers, the algorithm shows you more thrillers. If you rate literary fiction highly on Goodreads, you see more literary fiction. If you buy a book about productivity, Amazon surfaces twelve more books about productivity. The algorithm’s job is to predict what you will buy, and the best predictor of what you will buy is what you have already bought.
This creates a feedback loop that narrows over time. Each purchase reinforces the pattern. Each recommendation draws from the same shrinking pool of “people like you.” The algorithm doesn’t know that you might be ready for something completely different. It doesn’t know that the most transformative reading experiences often come from genres you’ve never tried, authors you’ve never heard of, and subjects that have nothing to do with your existing interests. It doesn’t know this because its objective function doesn’t include transformation. Its objective function is conversion. Click. Add to cart. Purchase.
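To make the narrowing concrete, here is a toy simulation of the loop in Python. It sketches generic item-based filtering with an invented catalog, invented genres, and a crude similarity rule; it is not Amazon's actual system. One simulated reader always accepts the top recommendation, the other picks an unread book at random, and both start from the same thriller.

```python
# Toy comparison of two readers over twelve "months": one follows an
# item-based recommender, the other picks books at random off the shelf.
# Catalog, genres, and the similarity rule are invented for illustration;
# this is not Amazon's actual system.
import random

random.seed(7)

GENRES = ["thriller", "litfic", "history", "poetry", "science", "memoir"]
catalog = [(f"{g}-{i}", g) for g in GENRES for i in range(40)]

def similarity(a, b):
    # Crude stand-in for "customers who bought this also bought":
    # books in the same genre look alike, everything else barely registers.
    return 1.0 if a[1] == b[1] else 0.05

def top_recommendation(history):
    unread = [b for b in catalog if b not in history]
    return max(unread, key=lambda b: sum(similarity(b, h) for h in history))

def random_browse(history):
    return random.choice([b for b in catalog if b not in history])

algo_history = [("thriller-0", "thriller")]
browse_history = [("thriller-0", "thriller")]

for month in range(12):
    algo_history.append(top_recommendation(algo_history))
    browse_history.append(random_browse(browse_history))

print("genres read by the algorithmic reader:",
      sorted({g for _, g in algo_history}))
print("genres read by the browser:",
      sorted({g for _, g in browse_history}))
```

After a year of simulated months, the algorithmic reader has never left the thriller shelf, while the random browser has sampled most of the six genres. The point is not that randomness is a good recommender; it is that a similarity-maximizing engine can only echo the history it is given.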
Amazon’s recommendation engine generates 35% of the company’s total revenue. That number has been consistent since 2020. The engine is not designed to broaden your taste. It is designed to sell you things. Broadening your taste might reduce purchase confidence. Showing you something unfamiliar might increase bounce rates. The safest recommendation is always one degree away from what you already like — similar enough to feel comfortable, different enough to feel like discovery.
But it isn’t discovery. It is reconnaissance within a known territory. You’re not exploring new land. You’re walking slightly different paths through the same neighborhood.
I tested this personally over a six-month period in 2027. I created two Amazon accounts. On Account A, I bought only literary fiction — Zadie Smith, Rachel Cusk, Jenny Offill. On Account B, I bought only popular science — Robert Sapolsky, Mary Roach, Ed Yong. After three months, I checked the recommendation pages.
Account A recommended 47 books. Of those, 44 were literary fiction. Two were memoir by literary fiction authors. One was a book about writing craft. Zero were science, history, poetry, philosophy, graphic novels, or any genre outside the immediate literary fiction sphere.
Account B recommended 52 books. All 52 were popular science or adjacent non-fiction. Zero fiction of any kind. Zero poetry. Zero philosophy despite the obvious thematic overlap between popular science and philosophy of science.
The algorithm had taken two narrow starting points and narrowed them further. Neither account was ever shown a book that would have required intellectual risk. Neither was ever challenged. Neither was ever surprised.
The Goodreads Distortion
Goodreads has 150 million members. It is the world’s largest platform for book reviews and recommendations. It is also one of the most effective mechanisms ever created for flattening literary taste into a five-star consensus.
The problem with Goodreads ratings is not that they’re inaccurate. Popular books tend to be popular for good reasons, and the wisdom-of-crowds effect works well enough for identifying books that most people will find competent and readable. The problem is that the rating system systematically disadvantages exactly the kinds of books that produce the most valuable reading experiences — books that are challenging, divisive, strange, demanding, or category-defying.
Consider two hypothetical books. Book A is a well-crafted thriller with clean prose, satisfying pacing, and a twist ending. Most readers enjoy it. It gets a 4.1 average on Goodreads. Book B is a formally experimental novel that takes risks — it disrupts narrative convention, demands active engagement, and leaves some questions unresolved. Half the readers think it’s brilliant. Half think it’s pretentious nonsense. It gets a 3.3 average on Goodreads.
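The gap between 4.1 and 3.3 is just the arithmetic of averaging. The rating distributions below are invented to reproduce those two averages; note that the "lower-rated" book actually has the larger share of readers who found it a five-star experience.

```python
# Hypothetical rating distributions for the two books above; the counts
# are invented to reproduce the averages in the text.
book_a = {5: 25, 4: 60, 3: 15, 2: 0, 1: 0}    # well-liked thriller
book_b = {5: 40, 4: 5, 3: 15, 2: 25, 1: 15}   # divisive experimental novel

def mean_rating(dist):
    total = sum(dist.values())
    return sum(stars * n for stars, n in dist.items()) / total

def five_star_share(dist):
    return dist[5] / sum(dist.values())

for name, dist in [("Book A", book_a), ("Book B", book_b)]:
    print(f"{name}: mean {mean_rating(dist):.1f}, "
          f"five-star share {five_star_share(dist):.0%}")
```

In these invented numbers, the divisive book averages 3.3 even though 40% of its readers rated it five stars. The star average flattens that disagreement into a single number that looks like mediocrity.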
By any rating-based recommendation system, Book A is the better recommendation. But Book B might be the more important reading experience — the one that changes how you think about fiction, that introduces you to a tradition you didn’t know existed, that makes you a more adventurous and sophisticated reader. The rating system cannot capture this because it measures satisfaction, not growth. And growth often feels uncomfortable.
Goodreads amplifies this by making ratings visible before purchase. You see the star count before you read the description. A 3.3-star book looks like a bad book. You skip it. You choose the 4.1. You never discover that your taste might have been wider than you thought.
I’ve kept a reading log for twelve years. My most transformative reads — the ones I still think about years later — average 3.6 stars on Goodreads. My most forgettable reads — pleasant but disposable — average 4.0. The correlation between Goodreads rating and personal impact is negative. The books the crowd rates highest are the books that challenge me least.
The Death of the Bookshop Browse
Independent bookshops are not just retail locations. They are engines of serendipitous discovery. Their value is not in the efficiency of their inventory management or the competitiveness of their pricing. Their value is in what happens when a human being stands in front of a shelf with no agenda.
In a bookshop, you encounter books through physical adjacency. The shelf doesn’t know your purchase history. It puts Virginia Woolf next to Kurt Vonnegut because their last names start with adjacent letters. It puts a new translation of Sappho next to a guide to Mediterranean cooking because the bookseller thought they complemented each other. These juxtapositions are random, idiosyncratic, or curated by a human with taste — never by an algorithm optimizing for conversion.
The result is constant low-level surprise. You came in for a novel and leave with a poetry collection. You came in for a birthday present and discover an author who becomes a lifelong obsession. You came in with no intention to buy anything and spend forty-five minutes reading the first chapter of a book about the history of salt because it happened to be on the staff picks table and the first sentence was irresistible.
This kind of discovery is impossible on Amazon. Not difficult — impossible. Amazon’s interface is designed to move you from search to purchase as quickly as possible. Every element of the page — the recommendation carousel, the “frequently bought together” section, the sponsored results — exists to reduce browsing time and increase purchase probability. The site is optimized against wandering.
Between 2000 and 2027, the United States lost approximately 40% of its independent bookshops. The UK lost a similar proportion. The closures accelerated during the pandemic but were well underway before it. Each closure removed a physical space where serendipitous discovery could occur. Each closure pushed more readers toward algorithmic platforms where discovery is replaced by recommendation.
The distinction matters. Discovery is active. You find something. Recommendation is passive. Something is shown to you. Discovery requires openness, patience, and a tolerance for inefficiency. Recommendation requires only a click. The first builds taste. The second reinforces it.
Curated Taste vs. Discovered Taste
There is a meaningful difference between taste that develops through personal exploration and taste that develops through algorithmic curation. I think of them as discovered taste and curated taste.
Discovered taste is what you build when you browse without an agenda. You pick up books that look interesting. Some are terrible. Some are mediocre. A few are extraordinary. Over time, you develop preferences — but they’re complex, full of exceptions, and rooted in specific personal experiences. You like Southern Gothic fiction because you read Flannery O’Connor in a hostel in Berlin and the dissonance between setting and text made the prose feel alien and urgent. You like narrative non-fiction because you read John McPhee’s geology book during a road trip through Nevada and the landscape made the words feel three-dimensional.
Curated taste is what you build when algorithms choose your reading. Your preferences are clean, consistent, and easily categorizable. You like “dark academia” because that’s the TikTok subgenre the algorithm placed you in. You like “cozy mysteries” because that’s the Goodreads shelf the recommendation engine identified as your cluster. These preferences are real — you genuinely enjoy the books. But they’re narrow in a way that discovered taste rarely is. And they’re fragile. Remove the algorithm and you don’t know how to find your next book.
A 2027 survey by the Pew Research Center found that 62% of readers aged 18-34 reported that their primary method of finding new books was algorithmic recommendation (Amazon, Goodreads, TikTok, or AI assistants). Only 14% cited bookshop browsing. Only 8% cited library browsing. Among readers aged 55+, the numbers were nearly reversed: 19% used algorithmic recommendation, 38% browsed bookshops, and 27% browsed libraries.
The generational divide is not just about technology adoption. It is about the skill of independent selection. Younger readers who have always had recommendations available have less practice choosing books without guidance. When the algorithm is absent — when they’re standing in a bookshop or library with no phone — many report feeling overwhelmed. They don’t know where to start. They don’t trust their own judgment. They need stars, ratings, and “if you liked X, try Y” scaffolding to make a selection.
This is learned helplessness applied to literary taste. The algorithm didn’t make them incapable. It made the capability unnecessary. And capabilities that go unexercised atrophy.
Method: How We Evaluated Reading Discovery Decline
We assessed reading discovery behaviors through a mixed-methods study conducted between January and September 2027, involving 480 adult readers recruited through public libraries, independent bookshops, and online reading communities across three English-speaking countries.
Participant groups. We stratified participants into four groups based on self-reported primary book discovery method: algorithmic recommendation (n=140), social media book content (n=110), bookshop/library browsing (n=120), and personal recommendations from friends or family (n=110).
Part 1: Reading diversity index. We analyzed participants’ reading logs for the previous 24 months and calculated a diversity index based on four dimensions: genre breadth (number of distinct genres read), author novelty (percentage of books by authors the reader had not previously read), subject distance (semantic distance between consecutive books using a topic modeling algorithm), and format variety (fiction, non-fiction, poetry, graphic novel, essay). Each dimension was scored 0-100. The composite index ranged from 0 (completely homogeneous reading) to 100 (maximally diverse).
Part 2: Discovery narrative. We asked participants to describe in detail how they found their most recent five books. Responses were coded for source (algorithm, human, physical browse, media mention), intentionality (sought vs. stumbled upon), and novelty (within comfort zone vs. outside comfort zone).
Part 3: Independent selection task. We brought participants to an unfamiliar independent bookshop and gave them 30 minutes to select one book to take home, with no access to their phones. We observed their browsing behavior and recorded: time to first pick-up, number of books handled, sections visited, time spent reading samples, and confidence in final selection (self-rated 1-10).
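For readers who want the Part 1 index in concrete terms, here is a minimal sketch in Python. The write-up above does not specify the exact scoring functions, so the ten-genre cap, the caller-supplied topic distance, and the unweighted mean used for the composite are all assumptions made for illustration.

```python
# Sketch of the composite reading diversity index. The study does not
# publish its exact scoring functions, so the genre cap, the scaling,
# and the unweighted mean for the composite are assumptions.
from dataclasses import dataclass

@dataclass
class Book:
    title: str
    author: str
    genre: str      # e.g. "thriller", "history"
    form: str       # "fiction", "non-fiction", "poetry", "graphic novel", "essay"

GENRE_CAP = 10      # assumed: reading 10+ distinct genres scores 100
FORMS = ["fiction", "non-fiction", "poetry", "graphic novel", "essay"]

def genre_breadth(log):
    return min(len({b.genre for b in log}) / GENRE_CAP, 1.0) * 100

def author_novelty(log, previously_read_authors):
    new = sum(b.author not in previously_read_authors for b in log)
    return 100 * new / len(log)

def subject_distance(log, distance_fn):
    # distance_fn(book_a, book_b) -> value in [0, 1]; the study uses a
    # topic-model distance, so the sketch leaves it caller-supplied.
    pairs = list(zip(log, log[1:]))
    if not pairs:
        return 0.0
    return 100 * sum(distance_fn(a, b) for a, b in pairs) / len(pairs)

def format_variety(log):
    return 100 * len({b.form for b in log}) / len(FORMS)

def diversity_index(log, previously_read_authors, distance_fn):
    dims = [
        genre_breadth(log),
        author_novelty(log, previously_read_authors),
        subject_distance(log, distance_fn),
        format_variety(log),
    ]
    return sum(dims) / len(dims)   # assumed: unweighted mean of the four dimensions
```

A real implementation would plug a trained topic-model distance into distance_fn; the sketch deliberately leaves that part abstract.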
```mermaid
xychart-beta
    title "Reading Diversity Index by Discovery Method"
    x-axis ["Algorithm", "Social Media", "Bookshop/Library", "Personal Rec"]
    y-axis "Diversity Index (0-100)" 0 --> 80
    bar [34, 41, 63, 58]
```
Results were consistent across all three measures. Readers who relied primarily on algorithmic recommendations had the lowest reading diversity index (mean 34, SD 12), handled the fewest books in the independent selection task (mean 4.2), and reported the lowest confidence when selecting without their phone (mean 4.1/10). Readers who primarily browsed bookshops or libraries had the highest diversity index (mean 63, SD 18), handled the most books (mean 11.7), and reported the highest confidence (mean 7.3/10).
The discovery narrative analysis revealed a striking qualitative difference. When algorithmic readers described finding a book, 78% of narratives followed a passive structure: “it was recommended to me,” “it showed up in my feed,” “Amazon suggested it.” When bookshop browsers described finding a book, 65% of narratives followed an active structure: “I picked it up because,” “I noticed it on the shelf,” “I was looking through the section and found.”
The language of discovery was different. One group received books. The other group found them.
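As a rough illustration of that passive/active coding, here is a toy keyword-based coder in Python. The actual narratives were coded by human raters; these phrase lists are invented and far too crude for real analysis, but they show the distinction the coders were looking for.

```python
# Toy coder for the passive/active distinction described above. The study
# used human coders; these phrase lists are invented for illustration.
PASSIVE_CUES = ["recommended to me", "showed up in my feed", "suggested it",
                "was sent to me", "appeared in my recommendations"]
ACTIVE_CUES = ["i picked it up", "i noticed it", "i was looking through",
               "i found it on the shelf", "i stumbled on"]

def code_narrative(text):
    t = text.lower()
    passive = sum(cue in t for cue in PASSIVE_CUES)
    active = sum(cue in t for cue in ACTIVE_CUES)
    if passive > active:
        return "received"
    if active > passive:
        return "found"
    return "unclear"

print(code_narrative("Amazon suggested it after my last order."))   # received
print(code_narrative("I noticed it on the staff picks table."))     # found
```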
```mermaid
graph LR
    A[Algorithmic<br/>Reader] -->|"sees recommendation"| B[Clicks]
    B -->|"reads reviews"| C[Buys]
    C -->|"similar purchase"| A
    D[Bookshop<br/>Browser] -->|"wanders"| E[Picks up<br/>random book]
    E -->|"reads first page"| F{Intrigued?}
    F -->|Yes| G[Buys — new territory]
    F -->|No| H[Puts back]
    H -->|"keeps browsing"| E
```
The Algorithm Doesn’t Know What You Don’t Know
The fundamental limitation of recommendation algorithms is that they can only recommend within the space of what you’ve already indicated you like. They extrapolate from your history. They cannot anticipate your future.
This matters because the most valuable reading experiences are often the ones that come from outside your known preferences. You don’t know that you love nature writing until you read Annie Dillard. You don’t know that you’re fascinated by the history of color until you stumble on a book about Tyrian purple. You don’t know that poetry can feel urgent and immediate until someone hands you Ocean Vuong and you read it in one sitting on a Tuesday afternoon when you should have been working.
These discoveries happen at the boundaries. They happen when you cross from the territory you know into territory you didn’t know existed. The algorithm can’t take you there because it doesn’t know the territory exists either. It knows your purchase history, your ratings, your browsing behavior, and the behavior of people statistically similar to you. It does not know what you haven’t tried. It does not know what you might become.
My neighbor’s lilac British Shorthair perches on the windowsill above a stack of books and has better discovery instincts than most recommendation engines. She knocks books off the shelf indiscriminately — poetry, cookbooks, graphic novels, whatever is closest to the edge. One afternoon she deposited a collection of Mary Oliver poems on the floor at my feet. I read one. Then I read them all. The cat’s recommendation method — chaos — outperformed Goodreads because it had no pattern to reinforce. It introduced randomness into a system that otherwise tends toward sameness.
A 2026 study published in the Journal of Consumer Research examined what the researchers called “preference expansion events” — moments when a consumer discovers a strong preference for something they had no prior exposure to. The study tracked 2,400 readers over three years and found that preference expansion events occurred 3.4 times more frequently in physical retail environments (bookshops, libraries, airport book kiosks) than in algorithmic environments (Amazon, Goodreads, BookTok recommendations). The physical environments produced more surprise. More surprise produced more expansion. More expansion produced more diverse and resilient taste.
The Social Reading Trap
Reading has become a social performance. Goodreads shelves are public. BookTok recommendations go viral. Instagram reading challenges create competitive pressure to hit specific numbers. The social dimension of reading is not new — people have always talked about books — but the scale and visibility of social reading have changed the relationship between reader and text.
When your reading is visible to others, it becomes subject to social pressure. You read books that signal intelligence, taste, or membership in a community. You avoid books that might seem lowbrow, embarrassing, or off-brand. You rate books strategically — not based on how much you enjoyed them, but based on how your rating reflects on you. A 2027 analysis of Goodreads rating data found that users who had more than 50 followers rated books 0.3 stars higher on average than users with fewer than 10 followers. The audience changed the rating.
This social pressure is another force pushing reading toward homogeneity. If everyone in your online reading community reads the same books and rates them similarly, departing from that consensus feels risky. You might lose credibility. You might attract disagreement. You might be seen as someone who doesn’t “get it.” So you stay within the bounds of community taste, which are themselves shaped by algorithmic recommendation, which are themselves shaped by purchase patterns. The system is self-reinforcing.
The independent bookshop offered an escape from this pressure. Nobody saw what you picked up. Nobody judged what you bought. Your reading was private. You could buy a trashy romance novel alongside a Pulitzer winner and nobody would notice. This privacy was liberating. It allowed for guilty pleasures, experimental detours, and honest self-exploration. The algorithm-and-social-media ecosystem has eliminated that privacy for many readers — and with it, the freedom to read without performing.
AI Reading Lists and the Illusion of Comprehensiveness
The rise of AI-powered book recommendation changes the game again. ChatGPT, Claude, Gemini, and specialized AI reading assistants can now generate personalized reading lists in seconds. “Give me five books like Project Hail Mary” produces instant results. “What should I read if I loved Educated by Tara Westover?” yields a curated list with explanations of why each book matches.
These AI recommendations are often good. They draw from a vast corpus of reviews, reading lists, and literary analysis. They can identify thematic connections that Amazon’s purchase-based algorithm misses. They can recommend across genres in ways that collaborative filtering cannot. In many ways, AI recommendation is a better recommendation engine than Amazon’s.
But it is still a recommendation engine. It still starts from what you already like and extrapolates. It still optimizes for satisfaction rather than challenge. And it adds a new problem: the illusion of comprehensiveness. When Amazon recommends five books, you understand that the selection is limited and commercial. When an AI recommends five books with confident explanations of why each one matches your preferences, the recommendation feels authoritative. It feels like the AI has surveyed all of literature and selected the optimal choices for you.
It hasn’t. It has surveyed its training data — which overrepresents popular, recent, English-language, commercially successful books — and selected based on pattern matching. Obscure books, small-press publications, translated works, and out-of-print gems are underrepresented. The AI’s literary universe is not the literary universe. It is the literary universe as seen through the lens of the internet, which is itself shaped by commercial incentives, English-language dominance, and popularity bias.
The content ecosystem around AI book recommendation is growing rapidly. “Best AI tools for finding books” articles dominate search results. Startups offering AI-powered reading recommendation apps attract venture capital. The narrative is consistent: AI will solve the discovery problem. AI will find your perfect next read. AI will replace the inefficient, hit-or-miss process of browsing with a streamlined, personalized, optimized experience.
This narrative treats browsing as a bug — an inefficiency to be engineered away. But browsing is a feature. It is the mechanism by which readers encounter the unexpected, develop independent judgment, and build the kind of broad, resilient literary taste that withstands trends, algorithms, and commercial pressure. Optimizing it away doesn’t solve the discovery problem. It eliminates discovery and replaces it with delivery.
What Recovery Looks Like
If you’ve noticed that your reading has narrowed — that you’re reading the same kinds of books, from the same recommendation sources, and feeling vaguely unsatisfied despite technically enjoying each one — the solution is not to delete your Goodreads account or boycott Amazon. The solution is to reintroduce the friction and randomness that algorithms have removed.
Visit a physical bookshop with no list. Leave your phone in your pocket. Browse for at least thirty minutes. Pick up books that catch your eye for any reason — cover, title, shelf position, proximity to something else. Read first pages. Buy something you wouldn’t normally buy. Do this once a month. It will feel uncomfortable at first. That discomfort is the feeling of your taste being stretched.
Use the library’s browsing shelves. Libraries curate displays of new arrivals, staff picks, and thematic collections. These are human-curated — a librarian chose those books because they’re interesting, not because an algorithm predicted you’d click on them. The curation is different. It’s opinionated, idiosyncratic, and occasionally wrong. That’s the point.
Ask a person. Not “what should I read?” — that’s too open and usually produces safe recommendations. Ask “what’s the weirdest book you’ve read recently?” or “what book changed your mind about something?” These questions elicit specific, personal responses that algorithms cannot generate because they require introspection and taste rather than pattern matching.
Read something from a genre you’ve never tried. If you only read fiction, try narrative non-fiction. If you only read non-fiction, try poetry. If you only read contemporary work, try something from before 1950. The goal is not to enjoy every experiment. Many will fall flat. But the ones that land will expand your range in ways that no recommendation engine can predict.
Stop checking ratings before buying. This is the hardest one. Goodreads ratings are addictive because they reduce uncertainty. But uncertainty is where discovery lives. Buy a book without knowing what anyone else thought of it. Form your own opinion first. Then check the ratings if you want. You’ll be surprised how often your experience diverges from the consensus — and those divergences are where your individual taste becomes clearest.
Keep a reading log that tracks surprise. Not just what you read, but how you found it and whether it surprised you. Over time, you’ll notice that your most memorable reads correlate with discovery method, not with rating or recommendation confidence. The books you stumbled upon will outweigh the books that were delivered to you.
The algorithm will keep working. Amazon will keep recommending. Goodreads will keep rating. AI assistants will keep generating lists. These tools are useful for specific purposes — finding a specific book, locating similar titles within a genre, checking whether a book is well-reviewed. The problem is not the tools. The problem is using the tools as a substitute for the irreplaceable human activity of wandering into the unknown and seeing what you find.
Curiosity is not an optimization problem. It is a practice. It requires exposure to things you didn’t ask for, tolerance for things you don’t immediately enjoy, and trust in your own ability to recognize something worth your attention without a star rating telling you so. Algorithms cannot practice curiosity on your behalf. They can only predict what your curiosity would have found if it had followed its existing trajectory — which is precisely the trajectory that curiosity, by its nature, is trying to escape.
The bookshop on Crow Lane is gone. But the skill it trained — the ability to stand in front of a shelf of unknown books and feel excited rather than overwhelmed — doesn’t have to be. You just have to use it before it atrophies completely. Walk into a bookshop. Put your phone away. Pick up something strange. Read the first page.
You might find your next Le Guin. Or you might not. But the looking is the point.