A productized paper recommendation tool that helps researchers decide what to read first, why it matters, and whether it is worth reproducing.
Paper Reader is built around a set of very practical research questions:
- Which paper should I read first?
- Which one is actually worth deeper attention?
- Which one is more realistic to reproduce or track?
Instead of dumping a list of papers on the screen, it turns a topic into:
- ranked recommendation cards
- one-sentence takeaways
- clean detail pages
- comparison views for decision making
- lightweight follow-up lists
- It feels like a product, not just a crawler.
- It turns paper discovery into reading decisions.
- It is easy to try with Docker and your own API key.
- It supports both lightweight public usage and local-model workflows.
- search by research topic such as `LLM`, `RAG`, `Multimodal`, or `Reasoning`
- get ranked paper cards instead of a raw feed
- each card shows a one-sentence takeaway and compact tags
- each detail page includes:
  - recommendation conclusion
  - background and goal
  - summary and method notes
  - reproducibility evidence
  - normalized PDF download naming
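The "normalized PDF download naming" above could work along these lines. This is an illustrative sketch, not the project's actual implementation; the function name and the ID-prefix convention are assumptions.

```python
import re

def normalize_pdf_name(title: str, paper_id: str = "") -> str:
    # Hypothetical helper: lowercase the title, replace unsafe characters
    # with hyphens, cap the length, and optionally prefix a paper ID.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    slug = slug[:80]  # keep filenames reasonably short
    prefix = f"{paper_id}-" if paper_id else ""
    return f"{prefix}{slug}.pdf"

print(normalize_pdf_name("RAG: Retrieval-Augmented Generation", "2005.11401"))
# 2005.11401-rag-retrieval-augmented-generation.pdf
```

The idea is simply that every downloaded PDF gets a predictable, filesystem-safe name instead of whatever the source server suggests.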
- compare 2 to 3 papers under the same topic
- quickly judge which one deserves time first
- useful for reading, reproduction, inspiration, and related work decisions
- save papers into reading list
- save papers into reproduction list
- save papers into topic candidate list
- persist follow-up items in the backend database
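Persisting follow-up items in the backend's SQLite database could look roughly like this. The table and column names here are hypothetical and may differ from the actual schema in `paper-reader-v1`.

```python
import sqlite3

# Hypothetical schema sketch for the three follow-up lists.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE follow_up (
        id INTEGER PRIMARY KEY,
        paper_title TEXT NOT NULL,
        list_name TEXT NOT NULL
            CHECK (list_name IN ('reading', 'reproduction', 'topic_candidate'))
    )"""
)
conn.execute(
    "INSERT INTO follow_up (paper_title, list_name) VALUES (?, ?)",
    ("Attention Is All You Need", "reading"),
)
conn.commit()

rows = conn.execute("SELECT paper_title, list_name FROM follow_up").fetchall()
print(rows)  # [('Attention Is All You Need', 'reading')]
```

A single table with a constrained `list_name` column is enough at this scale; swapping SQLite for an external database later only changes the connection, not the model.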
This repo is optimized for the lightest onboarding path first.
You do not need Ollama, a local model download, Node.js, or Python just to try the product.
Run:
```bash
docker compose up --build
```

Then open:
- Frontend: http://localhost:3000
- Backend docs: http://localhost:8000/docs
Inside the app:
- Open `Model Settings`
- Choose `DeepSeek`, `Kimi`, `Qwen`, or another OpenAI-compatible API
- Paste your own API key
- Enter a topic like `LLM`, `RAG`, `Reasoning`, or `Multimodal`
- Start using the product
This is the default public onboarding path.
Best for first-time users.
- Docker
- your own API key
- no Ollama required
- no local model download required
Best for users who explicitly want local inference.
Optional:
```bash
ollama pull qwen2.5:7b
```

Then use:
- provider: `ollama`
- model: `qwen2.5:7b`
- base URL: `http://localhost:11434`
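With those three settings, the app can talk to Ollama through its OpenAI-compatible endpoint. As a rough sketch of what the backend might assemble (the function name is hypothetical; no request is actually sent here):

```python
def build_chat_request(base_url: str, model: str, prompt: str):
    # Ollama exposes an OpenAI-compatible API under /v1, so the same
    # request shape works for Ollama and for hosted providers.
    url = base_url.rstrip("/") + "/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

url, payload = build_chat_request(
    "http://localhost:11434", "qwen2.5:7b", "Summarize this paper."
)
print(url)  # http://localhost:11434/v1/chat/completions
```

Because only the base URL and model name change, switching between a hosted provider and local inference requires no code changes.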
Use this only if you want to edit the code.
Frontend:

```bash
cd paper-reader-ui
npm install
npm run dev
```

Backend:

```bash
cd paper-reader-v1
py -3.11 -m venv .venv
.venv\Scripts\activate
pip install -r requirements.txt
python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

For a stronger GitHub page, add these screenshots to the README later:
- Homepage recommendations
- Paper detail page
- Compare page
- Research toolbox / follow-up view
If you add screenshots, place them under docs/ or docs/assets/ and link them here.
- Next.js
- TypeScript
- FastAPI
- SQLite
- OpenAI-compatible API providers
- optional Ollama local inference
This project is intentionally kept lightweight for:
- demos
- GitHub sharing
- local product showcase
- fast first deployment
Advanced users can later replace it with an external database if they want larger-scale persistence.
See:

```text
paper-project/
|- README.md
|- LICENSE
|- .gitignore
|- docker-compose.yml
|- docs/
|- paper-reader-ui/
|- paper-reader-v1/
`- paper-reader skill/
```
This repo is not just a code dump. It is intended to show product thinking:
- ranking instead of raw listing
- decision support instead of paper collection
- follow-up workflow instead of one-time browsing
- deployability instead of environment-heavy prototypes
MIT