Secure and Explainable Multi-Cancer Classification Platform

Live demo: https://oncosecure-ai.vercel.app/
An educational AI decision-support MVP that combines deep-learning-style medical image classification with Grad-CAM explainability and privacy-first design. Built as a portfolio-ready capstone demonstration.
OncoSecure AI is a web application that demonstrates how a modern, explainable, privacy-conscious medical-AI interface can be structured. It lets a user pick one of three cancer categories, upload a medical image, and receive a predicted class, a confidence score, a short model interpretation, and a simulated Grad-CAM heatmap showing the regions that influenced the prediction.
The inference layer ships as deterministic mock logic so the project runs instantly with no model weights, but the architecture is designed so a real model endpoint can be wired in with a single function swap.
Medical imaging AI has advanced rapidly, but two frictions remain in real-world adoption:
- Opacity. Clinicians and users cannot easily verify why a model predicted what it did.
- Privacy exposure. Medical images are sensitive; many demo applications casually persist them to disk or send them to third-party services.
OncoSecure AI is a small, focused demonstration of how to address both concerns in a clean product flow — suitable for academic presentation and portfolio use.
- Three specialized modules for brain tumor, breast cancer, and lung cancer classification
- Drag-and-drop image upload with client-side and server-side validation
- Real-time analysis flow with loading states and reset controls
- Predicted class + per-class probability distribution
- Confidence scoring with visual indicators
- Human-readable model interpretation tailored to each cancer × class combination
- Simulated Grad-CAM heatmap overlaid on the uploaded scan
- Analytics dashboard with weekly trends, category distribution, and confidence histograms
- Privacy & security messaging surfaced throughout the UI
- Clean, production-grade architecture with reusable components
| Module | Modality | Output classes |
|---|---|---|
| Brain Tumor | MRI | Glioma, Meningioma, Pituitary Tumor, No Tumor |
| Breast Cancer | Histopathology / Ultrasound | Benign, Malignant |
| Lung Cancer | CT | Normal, Benign, Malignant |
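The class schemas in the table above could be encoded in `lib/categories.ts` roughly as follows. This is an illustrative sketch: the type names and field names (`CancerCategory`, `modality`, `classes`) are assumptions, not the repo's exact shape.

```typescript
// Hypothetical sketch of lib/categories.ts — names are illustrative.
export type CancerCategoryId = "brain" | "breast" | "lung";

export interface CancerCategory {
  id: CancerCategoryId;
  label: string;
  modality: string;
  classes: string[]; // output classes, in display order
}

export const CATEGORIES: Record<CancerCategoryId, CancerCategory> = {
  brain: {
    id: "brain",
    label: "Brain Tumor",
    modality: "MRI",
    classes: ["Glioma", "Meningioma", "Pituitary Tumor", "No Tumor"],
  },
  breast: {
    id: "breast",
    label: "Breast Cancer",
    modality: "Histopathology / Ultrasound",
    classes: ["Benign", "Malignant"],
  },
  lung: {
    id: "lung",
    label: "Lung Cancer",
    modality: "CT",
    classes: ["Normal", "Benign", "Malignant"],
  },
};
```

Keeping the schema in one typed record means the category picker, the API route, and the result card all agree on the class list without duplication.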
- Next.js 14 (App Router) — React framework
- TypeScript — type safety
- Tailwind CSS — styling
- shadcn/ui patterns — accessible primitives (Button, Card, Badge, Progress)
- Recharts — dashboard charts
- Lucide — icons
- Next.js API Routes — mock inference endpoint
```text
oncosecure-ai/
├── app/
│   ├── layout.tsx                 # Root layout (navbar + footer + fonts)
│   ├── page.tsx                   # Landing page
│   ├── globals.css                # Design tokens + Tailwind
│   ├── analyze/
│   │   └── page.tsx               # Analyze flow (upload + inference + results)
│   ├── dashboard/
│   │   └── page.tsx               # Stats, charts, recent analyses
│   ├── about/
│   │   └── page.tsx               # Motivation, XAI, security, disclaimer
│   └── api/
│       └── analyze/
│           └── route.ts           # Mock inference endpoint (POST)
├── components/
│   ├── navbar.tsx
│   ├── footer.tsx
│   ├── category-selector.tsx      # 3-card cancer picker
│   ├── upload-area.tsx            # Drag-and-drop with validation
│   ├── result-card.tsx            # Prediction + confidence + probabilities
│   ├── explainability-panel.tsx   # Simulated Grad-CAM overlay
│   ├── privacy-notice.tsx         # Privacy & security messaging
│   ├── stat-card.tsx              # Dashboard stat tiles
│   └── ui/                        # shadcn-style primitives
│       ├── button.tsx
│       ├── card.tsx
│       ├── badge.tsx
│       └── progress.tsx
├── lib/
│   ├── utils.ts                   # cn() helper
│   ├── categories.ts              # Cancer category definitions
│   └── mock-data.ts               # Mock dashboard data
├── public/                        # Static assets
├── tailwind.config.js
├── tsconfig.json
├── next.config.js
├── package.json
└── README.md
```
- Node.js 18.17+ or 20+
- npm, pnpm, or yarn
```bash
# 1. Clone the repository
git clone https://github.com/<your-username>/oncosecure-ai.git
cd oncosecure-ai

# 2. Install dependencies
npm install

# 3. Start the dev server
npm run dev

# 4. Open http://localhost:3000
```

```bash
npm run dev     # Start dev server on :3000
npm run build   # Production build
npm run start   # Start production server
npm run lint    # Run Next.js linter
```

The analyze page walks the user through a three-step flow, with each step gated until the previous one is complete:
- Select a cancer category. The user picks brain, breast, or lung. This determines which class schema the analysis will use.
- Upload a medical image. JPG, PNG, or WebP under 8 MB. The file is read as a data URL for preview and validated on the client.
- Run inference. The file is POSTed to `/api/analyze` as multipart form data. The server re-validates the upload, runs the mock inference, and returns a structured result.
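The client side of steps 2–3 can be sketched as a small helper. This is a minimal sketch, not the repo's actual component code; the `AnalyzeResult` field names are assumptions about the response shape.

```typescript
// Illustrative client-side validation + inference call (assumed result shape).
export interface AnalyzeResult {
  predictedClass: string;
  confidence: number; // 0..1
  probabilities: Record<string, number>;
  interpretation: string;
}

// Pure validation rule, mirrored on both client and server.
export function validateUpload(mimeType: string, sizeBytes: number): string | null {
  const allowed = ["image/jpeg", "image/png", "image/webp"];
  if (!allowed.includes(mimeType)) return "Unsupported image type";
  if (sizeBytes > 8 * 1024 * 1024) return "File exceeds the 8 MB limit";
  return null; // valid
}

export async function analyzeImage(category: string, file: File): Promise<AnalyzeResult> {
  const error = validateUpload(file.type, file.size);
  if (error) throw new Error(error);

  // POST the image as multipart form data to the mock inference endpoint.
  const form = new FormData();
  form.append("category", category);
  form.append("image", file);

  const res = await fetch("/api/analyze", { method: "POST", body: form });
  if (!res.ok) throw new Error(`Inference failed: ${res.status}`);
  return res.json();
}
```

Factoring the validation rule into a pure function makes it trivial to reuse verbatim in the API route, which is how the client/server check stays in sync.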
On success, the page renders:
- A ResultCard with the predicted class, confidence bar, per-class probability breakdown, and a short model interpretation.
- An ExplainabilityPanel with the original scan side-by-side against a simulated Grad-CAM heatmap overlay.
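The simulated heatmap can be approximated as a seeded SVG overlay of translucent radial hotspots. The sketch below illustrates the idea only; the real `ExplainabilityPanel` component's parameters and layout differ.

```typescript
// Build a simple deterministic SVG "heatmap": seeded hotspots over the scan.
export function buildHeatmapSvg(seed: number, width = 224, height = 224): string {
  // Cheap deterministic pseudo-random values derived from the seed.
  const rand = (i: number) => {
    const x = Math.sin(seed * 9973 + i * 7919) * 10000;
    return x - Math.floor(x);
  };
  const spots = Array.from({ length: 3 }, (_, i) => ({
    cx: Math.round(rand(i) * width),
    cy: Math.round(rand(i + 10) * height),
    r: Math.round(20 + rand(i + 20) * 40),
  }));
  const grads = spots
    .map(
      (_, i) =>
        `<radialGradient id="hot${i}">` +
        `<stop offset="0%" stop-color="red" stop-opacity="0.7"/>` +
        `<stop offset="100%" stop-color="red" stop-opacity="0"/>` +
        `</radialGradient>`,
    )
    .join("");
  const circles = spots
    .map((s, i) => `<circle cx="${s.cx}" cy="${s.cy}" r="${s.r}" fill="url(#hot${i})"/>`)
    .join("");
  return (
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<defs>${grads}</defs>${circles}</svg>`
  );
}
```

Because the hotspots derive from the same seed as the mock prediction, the overlay is stable across re-runs of the same file.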
This MVP does not ship real model weights. Instead, app/api/analyze/route.ts implements a deterministic mock:
- A 32-bit hash is computed over the first 4 KB of the uploaded file.
- The hash seeds a pseudo-random distribution that selects a predicted class and generates plausible confidence + per-class probabilities.
- The same file always produces the same result (useful for demos and reproducibility).
- A small artificial delay (≈1s) is added to mimic realistic model latency.
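The mock described above can be re-implemented in a few lines. This is a sketch of the idea, not the repo's exact code: the FNV-1a hash and the mulberry32 PRNG are assumptions about how one might seed a reproducible distribution.

```typescript
// Deterministic mock inference: same bytes in, same prediction out.
export interface MockPrediction {
  predictedClass: string;
  confidence: number;
  probabilities: Record<string, number>;
}

// FNV-1a 32-bit hash over (at most) the first 4 KB of the upload.
function hashBytes(bytes: Uint8Array): number {
  let h = 0x811c9dc5;
  const n = Math.min(bytes.length, 4096);
  for (let i = 0; i < n; i++) {
    h ^= bytes[i];
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

// Tiny seeded PRNG (mulberry32) so the distribution is reproducible.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

export function runMockInference(bytes: Uint8Array, classes: string[]): MockPrediction {
  const rand = mulberry32(hashBytes(bytes));
  // Draw a raw weight per class, then normalize into a probability distribution.
  const raw = classes.map(() => 0.05 + rand());
  const total = raw.reduce((s, x) => s + x, 0);
  const probabilities: Record<string, number> = {};
  classes.forEach((c, i) => (probabilities[c] = raw[i] / total));
  const predictedClass = classes.reduce((a, b) =>
    probabilities[a] >= probabilities[b] ? a : b,
  );
  return { predictedClass, confidence: probabilities[predictedClass], probabilities };
}
```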
The API route includes an explicit integration block with the expected shape of a real model server response. To swap in a real model:
- Add a `modelEndpoint` field to each category in `lib/categories.ts`.
- In `route.ts`, replace the `runMockInference(...)` call with a `fetch` to your endpoint.
- Map the server response (predicted class, probabilities, optional Grad-CAM base64 PNG) into the existing result shape.
- In `components/explainability-panel.tsx`, replace the generated SVG heatmap with `<img src={heatmapUrl} />`.
The UI layer requires no changes.
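The swap might look like this inside the route. Everything here is a hedged sketch: the model-server response fields (`predicted_class`, `probabilities`, `gradcam_png_base64`) and the 15-second timeout are assumptions about *your* model server, not part of this repo.

```typescript
// Hypothetical replacement for runMockInference(...) in app/api/analyze/route.ts.
export interface ModelServerResponse {
  predicted_class: string; // assumed field names from your model server
  probabilities: Record<string, number>;
  gradcam_png_base64?: string; // optional Grad-CAM heatmap
}

// Pure mapping from the server response into the result shape the UI consumes.
export function mapModelResponse(data: ModelServerResponse) {
  return {
    predictedClass: data.predicted_class,
    confidence: data.probabilities[data.predicted_class] ?? 0,
    probabilities: data.probabilities,
    heatmapUrl: data.gradcam_png_base64
      ? `data:image/png;base64,${data.gradcam_png_base64}`
      : undefined,
  };
}

export async function runRealInference(
  modelEndpoint: string,
  image: ArrayBuffer,
  mimeType: string,
) {
  const form = new FormData();
  form.append("image", new Blob([image], { type: mimeType }), "scan");

  const res = await fetch(modelEndpoint, {
    method: "POST",
    body: form,
    signal: AbortSignal.timeout(15_000), // fail fast if the model server hangs
  });
  if (!res.ok) throw new Error(`Model server returned ${res.status}`);
  return mapModelResponse(await res.json());
}
```

Keeping the response mapping as a pure function makes it easy to unit-test without a running model server.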
Even in an MVP, the platform demonstrates several defense-in-depth practices:
- Strict upload policy. Only JPG, PNG, and WebP are accepted, with an 8 MB size cap.
- Client + server validation. Validation is enforced both in the upload component and again in the API route, so a malicious client cannot bypass checks.
- In-memory processing. Uploads are handled as `ArrayBuffer` in the request lifecycle; no images are written to disk in this build.
- No PHI retention. The platform does not collect, store, or link any patient identifiers.
- No third-party uploads. Inference stays within your deployment; nothing is shipped to external services.
- Transparent messaging. Privacy notes appear inline on the analyze page and in the about page, so users understand how their data flows.
For a production deployment, additional hardening would include rate limiting, authenticated sessions, audit logging, and magic-byte validation (not just MIME-header trust).
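Magic-byte validation, for instance, could be sketched like this (illustrative only, covering just the three formats the upload policy accepts):

```typescript
// Check real file signatures instead of trusting the client-supplied MIME type.
const SIGNATURES: Array<{ mime: string; check: (b: Uint8Array) => boolean }> = [
  // JPEG: FF D8 FF
  { mime: "image/jpeg", check: (b) => b[0] === 0xff && b[1] === 0xd8 && b[2] === 0xff },
  // PNG: 89 50 4E 47 ("\x89PNG")
  {
    mime: "image/png",
    check: (b) => b[0] === 0x89 && b[1] === 0x50 && b[2] === 0x4e && b[3] === 0x47,
  },
  // WebP: "RIFF" <size> "WEBP"
  {
    mime: "image/webp",
    check: (b) =>
      b[0] === 0x52 && b[1] === 0x49 && b[2] === 0x46 && b[3] === 0x46 &&
      b[8] === 0x57 && b[9] === 0x45 && b[10] === 0x42 && b[11] === 0x50,
  },
];

// Returns the sniffed MIME type, or null if the bytes match no known signature.
export function sniffImageMime(bytes: Uint8Array): string | null {
  if (bytes.length < 12) return null;
  for (const sig of SIGNATURES) if (sig.check(bytes)) return sig.mime;
  return null;
}
```

The API route would call this on the decoded `ArrayBuffer` and reject any upload whose sniffed type disagrees with the declared one.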
OncoSecure AI is an educational AI decision-support project, not a real medical diagnosis tool.
It is intended for coursework, portfolio evaluation, and academic presentation. It is not a registered medical device, has not undergone clinical validation, and must not be used for diagnostic or treatment decisions. All clinical judgments must be made by qualified healthcare professionals using validated tools and complete clinical context.
- Integrate a real convolutional model (e.g., fine-tuned ResNet/EfficientNet) served via FastAPI or ONNX Runtime.
- Replace the simulated Grad-CAM with a genuine gradient-based attention overlay computed server-side.
- Add authentication and per-user analysis history backed by a real database.
- Support multi-image batch analysis.
- Add audit logging and rate limiting on the inference endpoint.
- Add internationalization (i18n) — at minimum, English + Bahasa Indonesia.
- Expand to additional cancer categories (skin, cervical, colorectal).
- Introduce model uncertainty estimation via Monte Carlo Dropout.
Placeholder — replace with real screenshots once deployed.
| Landing | Analyze | Dashboard |
|---|---|---|
| ![Landing](./docs/screenshots/landing.png) | ![Analyze](./docs/screenshots/analyze.png) | ![Dashboard](./docs/screenshots/dashboard.png) |
Muhammad Abrar Rayhan
Information Systems · Telkom University Jakarta
Email: abrarrayhan8@gmail.com · LinkedIn: https://www.linkedin.com/in/Muhammad-abrar-rayhan
Released under the MIT License. See the LICENSE file for details.
Built with care for academic presentation, GitHub portfolio, and internship showcase.