
Case Study: Rethinking Book Reviews & Recommendations

Overview
Goodreads is a widely used platform for tracking reading progress, finding new books, and sharing reviews. However, many users skip leaving reviews and express dissatisfaction with book recommendations. This case study explores why that happens and how improving the review process can enhance book discovery and user engagement.
Objective
Investigate how users interact with book review features and recommendations on Goodreads (or similar platforms), identify friction points, and propose user-centered improvements that make leaving reviews easier and more rewarding—ultimately enhancing personalized book discovery.
Research Methods
- Survey: Targeted current or past users of Goodreads and similar platforms. Focused on reading habits, review behavior, satisfaction with recommendations, and platform frustrations.
- Affinity Mapping: Used to group and synthesize pain points, patterns, and motivations across responses.
💡 Key Findings
1. Reviews Aren’t Part of the User Habit
Most users don’t leave reviews after finishing a book—primarily due to:
- Forgetting
- Not valuing their opinion
- Confusion about how to write or format a review
- Unclear purpose or benefit of reviewing
“I don’t feel like my opinion really matters.”
“I forgot. It didn’t seem that important.”
2. Book Recommendations Feel Generic
Although users rely on the platform to discover new reads, 70% of survey respondents were neutral or dissatisfied with their recommendations. They often didn't understand how these suggestions were generated.
“I’m not sure what they’re basing it on.”
“I mostly just use Goodreads to track what I read.”
3. The Platform’s Design and Flow Are Frustrating
Pain points included:
- Clunky or outdated interface
- Limited social features
- Bugs and slow loading
- Lack of clear calls to action around reviews
Insights & Opportunities
1. Users Don't Value Their Opinions… Yet
We need to communicate the purpose of reviews more clearly, and demonstrate how they contribute to better recommendations.
2. The Review Process Is Intimidating
People skip reviews not because they don’t care—but because the experience doesn’t support them.
3. Discovery Feels Random
Without feedback loops, users don’t feel invested in their book recommendations. They don’t see how to influence or improve them.
🔄 Proposed Solutions
✅ 1. Tag-Based Review Interface
Replace the intimidating blank review box with clickable buttons that let users quickly describe what they liked.
Examples:
- “Strong Characters”
- “Unexpected Plot Twist”
- “Emotional”
- “Beautiful Writing”
This lowers the barrier to entry and gives the platform richer data to improve recommendations.
✅ 2. Personalized Taste Profile (Pie or Radar Chart)
Give users a visual snapshot of their preferences based on past reviews.
“You tend to read books with: Complex Characters (40%), Mystery (30%), Romance (20%).”
This builds insight and connection—and encourages continued engagement.
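The profile above can be computed directly from tag-based reviews. A minimal sketch, assuming each review is stored as a list of selected tags (the function name and data shapes are hypothetical, not an existing Goodreads API): count how often each tag appears across a user's reviews and normalize the counts into percentage shares.

```python
from collections import Counter

def taste_profile(reviews):
    """Turn a user's tag selections into percentage shares for a pie/radar chart."""
    counts = Counter(tag for review in reviews for tag in review)
    total = sum(counts.values())
    if total == 0:
        return {}  # no reviews yet, nothing to chart
    return {tag: round(100 * n / total) for tag, n in counts.most_common()}

# Example: three reviews, each a list of tags the user tapped
reviews = [
    ["Strong Characters", "Mystery"],
    ["Strong Characters", "Romance"],
    ["Strong Characters", "Mystery", "Beautiful Writing"],
]
print(taste_profile(reviews))
# → {'Strong Characters': 43, 'Mystery': 29, 'Romance': 14, 'Beautiful Writing': 14}
```

Because every review is just a handful of tags, the profile stays cheap to recompute and updates immediately after each new review, which reinforces the feedback loop.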
✅ 3. Recommendation Shelf Based on Review Tags
Create a new section called:
"Books You’ll Love (Based on Your Reviews)"
These recommendations are clearly tied to the review tags users have selected. Each recommendation comes with a short explainer:
“Because you liked books with strong female leads and plot twists.”
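One simple way this shelf could be driven (a sketch under assumed data shapes, not Goodreads' actual recommendation algorithm): score each catalog book by how many of its tags overlap with the tags the user has selected in past reviews, and return the matched tags alongside each pick so the UI can render the "Because you liked…" explainer.

```python
def recommend(profile_tags, catalog, limit=3):
    """Rank books by overlap with the user's review tags.

    profile_tags: set of tags from the user's past reviews.
    catalog: list of (title, tags) pairs, where tags is a set.
    Returns (title, matched_tags) pairs for the top matches.
    """
    scored = []
    for title, tags in catalog:
        matched = profile_tags & tags
        if matched:  # skip books with nothing in common
            scored.append((title, matched))
    scored.sort(key=lambda item: len(item[1]), reverse=True)
    return scored[:limit]

profile = {"Strong Characters", "Unexpected Plot Twist"}
catalog = [
    ("Book A", {"Strong Characters", "Unexpected Plot Twist", "Mystery"}),
    ("Book B", {"Romance"}),
    ("Book C", {"Strong Characters"}),
]
for title, matched in recommend(profile, catalog):
    print(f"{title}: because you liked books with {', '.join(sorted(matched))}")
```

Returning the matched tags, rather than just a ranked list of titles, is what makes each recommendation explainable to the user.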
✅ 4. Optional Swipeable Quiz for Taste Discovery
Borrowing from the Tinder model, new users could take a visual quiz by swiping through genres, themes, or book blurbs to help build an initial “taste profile.”
🧪 Prototype Sneak Peek
Mockups were created to illustrate the mobile and desktop experience of:
- The tag-based review screen
- Taste profile visualization
- Personalized review-based recommendation shelf
(See visuals above)
✨ Impact
These changes would encourage more users to leave reviews by:
- Making the process easier and more enjoyable
- Reinforcing the value of their input
- Building trust in book recommendations
By connecting the dots between reviews and discovery, the platform can increase both user satisfaction and engagement.