Research Tools & Development

UX Feedback Analyzer

A tool that turns messy user feedback into clear priorities — built for researchers who don't have time to read 1,000 reviews.

Role: Designer & Developer
Timeline: October 2025
Stack: Python, Streamlit
Status: Live on GitHub

User feedback is messy. Analyzing it takes forever.

If you're a PM or researcher, you've probably faced this: hundreds (or thousands) of user reviews, survey responses, or support tickets, and no easy way to see what actually matters.

Manual thematic coding is thorough but slow. AI summaries are fast but shallow. I wanted something in between — a tool that surfaces patterns, prioritizes issues, and explains *why* something is flagged as high-risk.

"Who has time to read 1,200 app reviews? But if you don't, you miss the signal in the noise."

What the tool does

Upload a CSV of user feedback. The tool processes it and outputs:

Churn risk scores

Each response gets a "frustration likelihood" score, so you can prioritize the users most at risk.

Segment hotspots

See which user segments (by feature used, plan type, etc.) are having the worst experience. A small sketch of this roll-up follows the list.

Top frustration keywords

The model surfaces the phrases most associated with negative feedback — so you know *what* people are complaining about.

Downloadable outputs

Export a visual report and a CSV of risk scores — ready to drop into a sprint planning doc.
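
To make the segment hotspots and downloadable outputs concrete, here is a minimal pandas sketch of the roll-up. It assumes each row already carries the risk_score produced by the model described under "How it works"; the file name and segment column are illustrative, not the app's actual code.

```python
# Minimal sketch: turning a scored feedback table into the hotspot view and
# the downloadable CSV. Column and file names are illustrative assumptions.
import pandas as pd

scored = pd.read_csv("scored_feedback.csv")  # hypothetical: rows already carry risk_score

# Segment hotspots: average frustration likelihood per segment, worst first.
hotspots = (
    scored.groupby("feature_used")["risk_score"]
    .agg(["mean", "count"])
    .sort_values("mean", ascending=False)
)
print(hotspots)

# Downloadable output: the same table, ranked by risk, ready for sprint planning.
scored.sort_values("risk_score", ascending=False).to_csv("risk_scores.csv", index=False)
```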

How it works

The tool cleans incoming data, converts comments into numerical signals using TF-IDF, encodes user segments, and runs a regularized logistic regression to predict churn risk. It's not magic — it's a reproducible, explainable model that tells you *why* it's flagging something.

A sample of the expected input format:

user_id, timestamp, nps, sus, feature_used, churned, comment
u_001, 2024-01-15, 3, 45, export, 1, "Export keeps failing. Very frustrated."
u_002, 2024-01-16, 8, 78, dashboard, 0, "Love the new dashboard updates!"

Built with: Python, Streamlit, scikit-learn, pandas, TF-IDF
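
A minimal sketch of that pipeline, using the column names from the sample above. The preprocessing details and hyperparameters here are assumptions for illustration, not necessarily what the deployed app uses.

```python
# Sketch of the scoring pipeline: TF-IDF on comments, one-hot encoded segments,
# and an L2-regularized logistic regression. Settings are illustrative.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("feedback.csv")  # hypothetical file in the format shown above

features = ColumnTransformer([
    # Free-text comments become sparse TF-IDF weights (unigrams and bigrams).
    ("text", TfidfVectorizer(ngram_range=(1, 2), min_df=2), "comment"),
    # Categorical segment columns become one-hot indicators.
    ("segments", OneHotEncoder(handle_unknown="ignore"), ["feature_used"]),
])

model = Pipeline([
    ("features", features),
    # L2 regularization keeps the linear model stable and its weights inspectable.
    ("clf", LogisticRegression(C=1.0, max_iter=1000, random_state=42)),
])

model.fit(df, df["churned"])

# "Frustration likelihood" per response = predicted churn probability.
df["risk_score"] = model.predict_proba(df)[:, 1]

# Top frustration keywords: text features with the largest positive weights.
vec = model.named_steps["features"].named_transformers_["text"]
terms = vec.get_feature_names_out()
weights = model.named_steps["clf"].coef_[0][: len(terms)]
print(sorted(zip(terms, weights), key=lambda t: t[1], reverse=True)[:15])
```

Because the classifier is linear, explainability comes cheaply: the TF-IDF terms with the largest positive weights are a natural basis for the "top frustration keywords" view, and a flagged response can be traced back to the phrases that drove its score.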

Built-in guardrails

Privacy-first: All processing happens in-memory. Files stay on your device. Nothing gets sent to a server.

Small sample warning: Yellow banner when n < 200 — results are directional, not definitive.

Class imbalance notice: Alerts if churn < 5% or > 95% — model may not be reliable.

Reproducible runs: Deterministic seed for consistent results every time.
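
Expressed as Streamlit code, these checks can be as small as the sketch below. The thresholds mirror the ones listed above; the widget labels and column names are illustrative, and the deterministic seed is the fixed random_state shown in the pipeline sketch earlier.

```python
# Sketch of the guardrails as Streamlit checks; names and layout are illustrative.
import pandas as pd
import streamlit as st

uploaded = st.file_uploader("Upload feedback CSV", type="csv")
if uploaded is not None:
    # In-memory processing: the upload is read into a DataFrame; nothing is
    # persisted by this code.
    df = pd.read_csv(uploaded)

    # Small-sample warning: under 200 rows, results are directional only.
    if len(df) < 200:
        st.warning(f"Only {len(df)} responses: treat results as directional, not definitive.")

    # Class-imbalance notice: a churn rate near 0% or 100% gives the model
    # little signal to learn from.
    churn_rate = df["churned"].mean()
    if churn_rate < 0.05 or churn_rate > 0.95:
        st.warning(f"Churn rate is {churn_rate:.0%}: model estimates may be unreliable.")
```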

The result

In testing with Adobe Mobile app reviews (~1,200 responses), the tool surfaced Performance/Reliability and Export/Save as the top issues, cutting PM triage time by 40-60% compared with reading reviews manually. It doesn't replace qualitative research, but it tells you where to focus.

The tool is live

Deployed on Streamlit Cloud. Upload your own data or try the demo dataset.

View on GitHub
