A lightweight web application for interactive visualization and region-of-interest (ROI) analysis of spatial transcriptomics data.
The app enables exploration of precomputed 10x Genomics Visium datasets directly in tissue context, without running computationally heavy pipelines online. It is designed for fast, intuitive gene- and region-level exploration.
AI integration:
- Gene summary: powered by Ollama (Mistral 7B by default).
- Chatbox: same LLM backend as gene summary.
- Spatial domains: ChatSpatial MCP with SpaGCN when available, plus a Scanpy fallback.
This application was developed as part of the AI Dev Tools Zoomcamp by DataTalks.Club, a free course focused on building AI-powered applications with modern development tools and best practices.
Backend:
📦 50% vibe-coded with GitHub Copilot: Django • Python 3.12 • Django REST Framework • SQLite
Bioinformatics & Analysis:
🧬 Scanpy • Squidpy • Matplotlib
DevOps & Deployment:
🐳 Docker • Docker Compose
CI/CD:
🔄 GitHub Actions
LLM:
🧠 Ollama (Mistral 7B by default)
Frontend:
✨ 99% vibe-coded with Lovable: React 18 • TypeScript • Vite • shadcn-ui • Tailwind CSS
- 🧬 Preloaded Datasets: Squidpy demo datasets (Mouse Brain Visium) loaded at startup
- 🧬 Interactive Visualization: Tissue H&E images with spatial gene expression overlays
- 🧬 Gene Exploration: Query individual genes, view statistics, and spatial patterns
- 🧬 ROI Analysis: Draw custom regions, compute expression statistics and cluster composition
- 🧬 Gene Metadata (Ensembl): Fetches Ensembl ID, species, and description via the Ensembl REST API
- 🧬 AI Integration: Ollama-powered gene summaries, chat, ROI suggestions, and optional ChatSpatial MCP spatial domains
```bash
# Clone and start
git clone https://github.com/katwre/Spatial_explorer_api.git
cd Spatial_explorer_api
docker-compose up --build

# Access the application
# Frontend: http://localhost:5173
# Backend API: http://localhost:8000/api
```

Backend:
```bash
cd backend
pip install -r ../requirements.txt
python manage.py migrate
python manage.py runserver 8000
```

Frontend:
```bash
cd frontend
npm install
npm run dev
# Runs on http://localhost:5173
```

```
backend/
├── manage.py
├── api/
│   ├── views.py              # REST endpoints
│   ├── urls.py               # URL routing
│   └── services/
│       ├── squidpy.py        # Dataset loading & analysis
│       ├── chatspatial.py    # Local SpaGCN/Scanpy helper
│       └── mcp.py            # Ollama + ChatSpatial MCP integration
└── spatial_explorer/         # Django settings
datasets/                     # Visium data (mounted)
frontend/
├── src/
│   ├── components/
│   │   ├── SpatialViewer.tsx # Interactive canvas
│   │   ├── ControlPanel.tsx  # Gene selection UI
│   │   └── ResultsPanel.tsx  # Stats display
│   └── lib/api.ts            # Backend integration
└── vite.config.ts
integration/                  # Integration tests
├── test_gene_workflow.py
├── test_roi_workflow.py
├── test_api_contract.py
├── test_database_persistence.py
└── test_end_to_end_workflow.py
```
| Endpoint | Method | Description |
|---|---|---|
| `/api/genes/` | GET | List all gene names |
| `/api/gene/<name>/stats/` | GET | Gene statistics (mean, % expressing) |
| `/api/gene/<name>/summary/` | GET | AI-generated summary (Ollama) |
| `/api/spots/` | GET | Spatial coordinates + expression |
| `/api/tissue-image/` | GET | H&E tissue image (base64) |
| `/api/roi/stats/` | POST | ROI statistics & cluster composition |
| `/api/spatial-domains/` | POST | Spatial domain identification (ChatSpatial MCP + fallback) |
| `/api/chat/` | POST | Chat with LLM (Ollama) |
Full API specification: openapi.yaml
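For quick scripting against these endpoints, the standard library is enough. A minimal client sketch (paths taken from the table above; the live call is commented out and assumes the quick-start stack is running):

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000/api"

def api_url(path: str) -> str:
    """Join the API base with an endpoint path from the table above."""
    return BASE_URL.rstrip("/") + "/" + path.lstrip("/")

def roi_stats_body(x_min: int, x_max: int,
                   y_min: int, y_max: int, gene_name: str) -> dict:
    """Request body for POST /api/roi/stats/ (fields per the example below)."""
    return {"x_min": x_min, "x_max": x_max,
            "y_min": y_min, "y_max": y_max,
            "gene_name": gene_name}

# With the backend running, listing genes would look like:
# with request.urlopen(api_url("/genes/")) as resp:
#     genes = json.load(resp)
```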
Backend:
- `DATASET_PATH` – Path to Visium dataset (default: `/app/datasets/mouse_brain`)
- `DATABASE_URL` – Postgres connection string (optional; defaults to SQLite)
- `OLLAMA_URL` – Ollama server URL (default: `http://ollama:11434` in Docker)
- `OLLAMA_MODEL` – Model name (default: `mistral`)
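In Django settings these variables are typically read with `os.environ` fallbacks; a sketch using the names and defaults listed above (the project's actual `settings.py` may differ):

```python
import os

# Fall back to the documented defaults when a variable is unset.
DATASET_PATH = os.environ.get("DATASET_PATH", "/app/datasets/mouse_brain")
DATABASE_URL = os.environ.get("DATABASE_URL")  # None means "use SQLite"
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://ollama:11434")
OLLAMA_MODEL = os.environ.get("OLLAMA_MODEL", "mistral")
```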
If you run the backend outside Docker with a local Ollama instance, set `OLLAMA_URL=http://localhost:11434`.
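The gene-summary and chat endpoints forward prompts to this Ollama server. A sketch of the request shape, assuming Ollama's standard `/api/generate` endpoint (the prompt below is illustrative, not the app's actual prompt):

```python
import json
import os
from urllib import request

OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")
OLLAMA_MODEL = os.environ.get("OLLAMA_MODEL", "mistral")

def summary_request(gene: str) -> dict:
    """Build a non-streaming generate request for a short gene summary."""
    return {
        "model": OLLAMA_MODEL,
        "prompt": f"In two sentences, summarize the function of the gene {gene}.",
        "stream": False,
    }

# With Ollama running, the call would look like:
# data = json.dumps(summary_request("Gad1")).encode()
# req = request.Request(f"{OLLAMA_URL}/api/generate", data=data,
#                       headers={"Content-Type": "application/json"})
# with request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```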
Frontend:
- `VITE_API_URL` – Backend API endpoint (default: `http://localhost:8000/api`)
```bash
# Use custom dataset
DATASET_PATH=/path/to/dataset docker-compose up

# View logs
docker-compose logs -f backend
docker-compose logs -f frontend

# Stop services
docker-compose down
```

This project ships as Docker images, so you can deploy the backend and frontend with managed container services such as AWS ECS/Fargate or GCP Cloud Run. Build and push the images (from `Dockerfile.backend` and `Dockerfile.frontend`) to ECR or Artifact Registry, then run two services (frontend and backend) with the environment variables set (`DATASET_PATH`, `OLLAMA_URL`, `OLLAMA_MODEL`, `VITE_API_URL`). If you want LLM features in the cloud, run Ollama on a GPU-capable VM or managed GPU service and point `OLLAMA_URL` at it; otherwise the app still works, just without AI features.
Comprehensive test suite covering key workflows, database interactions, and API contracts.
```bash
# Run all integration tests
python backend/manage.py test integration

# Run specific test module
python backend/manage.py test integration.test_gene_workflow
python backend/manage.py test integration.test_roi_workflow
python backend/manage.py test integration.test_api_contract
```

Test Coverage:
- Gene Workflows: List retrieval, statistics, spot expression, AI summaries
- API Contract: Endpoint accessibility, CORS, JSON format, error handling
- Database Persistence: Dataset loading, data consistency, SQLite/Postgres compatibility
- End-to-End: Complete user workflows from gene selection to visualization
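Conceptually, the statistics returned by `/api/roi/stats/` reduce to filtering spots by a bounding box and averaging expression. A pure-Python sketch of that idea (the actual service works on Scanpy/Squidpy data structures, not tuples):

```python
def roi_stats(spots, x_min, x_max, y_min, y_max):
    """spots: iterable of (x, y, expression_value), one entry per Visium spot."""
    inside = [e for (x, y, e) in spots
              if x_min <= x <= x_max and y_min <= y <= y_max]
    n = len(inside)
    expressing = sum(1 for e in inside if e > 0)
    return {
        "spot_count": n,
        "mean_expression": round(sum(inside) / n, 3) if n else 0.0,
        "percent_expressing": round(100.0 * expressing / n, 1) if n else 0.0,
    }

# Two of three spots fall inside the box; one of those expresses the gene.
print(roi_stats([(1, 1, 2.0), (3, 3, 0.0), (10, 10, 5.0)], 0, 5, 0, 5))
# → {'spot_count': 2, 'mean_expression': 1.0, 'percent_expressing': 50.0}
```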
Example: ROI Analysis Request
```bash
curl -X POST http://localhost:8000/api/roi/stats/ \
  -H "Content-Type: application/json" \
  -d '{
    "x_min": 2000, "x_max": 6000,
    "y_min": 2000, "y_max": 6000,
    "gene_name": "Gad1"
  }'
```

Response:
```json
{
  "roi_statistics": {
    "spot_count": 42,
    "area": 105.0,
    "mean_expression": 0.440,
    "percent_expressing": 100.0
  },
  "gene": "Gad1",
  "predicted_region": "7",
  "cluster_composition": {"7": 85.5, "3": 14.5}
}
```

Frontend:

```bash
cd frontend
npm test   # Vitest unit tests
```

The project includes an automated CI/CD pipeline that runs on every push and pull request to the main branch.
Pipeline Stages:
- Test 🧪
  - Sets up Python 3.12 environment
  - Installs dependencies with caching
  - Runs Django migrations
  - Executes integration tests
  - Reports test results
- Build 🏗️
  - Builds backend Docker image (if tests pass)
  - Builds frontend Docker image (if tests pass)
  - Uses Docker BuildKit caching for faster builds
- Deploy 🚀
  - Triggers deployment after successful tests and builds
  - Ready to integrate with Render, AWS, GCP, or Docker Hub
  - Configuration templates included in workflow file
Workflow File: .github/workflows/ci-cd.yml
To View Pipeline Status:
- Check the "Actions" tab in your GitHub repository
- Look for the ✅ or ❌ badge next to commits
To Enable Cloud Deployment:
- For Render:

  ```bash
  # Add secrets to GitHub repository settings:
  RENDER_API_KEY=<your-render-api-key>
  RENDER_SERVICE_ID=<your-service-id>
  ```

  Then uncomment the Render deployment step in the workflow.

- For Docker Hub:

  ```bash
  # Add secrets:
  DOCKER_USERNAME=<your-username>
  DOCKER_PASSWORD=<your-token>
  ```

  Then uncomment the Docker Hub push steps in the workflow.
Manual Trigger:
```bash
# Push to main branch triggers the full pipeline
git push origin main
```

See AGENTS.md for detailed AI-assisted development process, MCP integration architecture, and dataset loading pipeline.
MIT License - see LICENSE for details.
- Built with Squidpy and Scanpy
- Frontend scaffolded with Lovable (Claude 3.5 Sonnet)
- Backend developed with assistance from GPT-4.1
- Part of DataTalks.Club AI Dev Tools Zoomcamp
