Build AI agents on the Life Programmable Interface.
DEADLINE PASSED: April 20, 2026. Submissions are now being reviewed. L4 PRs still accepted — push yourself.
Results: 53 scored | 47 at L3 | 4 merged L4 | 4 open L4 PRs
Top performers: See scores.json for the full leaderboard. Share your achievement on LinkedIn — tag @WINNIIO AB, @Nicolas Waern, and @LifeAtlas.
The LPI is an open MCP server with 7 tools exposing the SMILE methodology — a digital twin implementation framework. This repo is your sandbox AND the entry point for the LifeAtlas Contributor Program.
┌─────────────────────────────────────────────────────────┐
│ 1. FORK this repo │
│ 2. CLONE your fork locally │
│ 3. RUN: npm install && npm run test-client │
│ 4. DO the challenge for your level (see below) │
│ 5. SUBMIT a Pull Request back to this repo │
│ 6. BOT auto-validates → auto-merges (Level 1) │
│ or auto-scores → posts feedback (Level 2/3) │
│ 7. LEADERBOARD updates automatically │
│ → life-atlas.github.io/lpi-developer-kit │
└─────────────────────────────────────────────────────────┘
- Where do I submit? → A Pull Request to this repo. That's it.
- Who reviews? → A GitHub Actions bot (auto). Team leads review top submissions manually.
- How do I know what to do? → Pick your track below. Each level has exact instructions.
- What if I'm stuck? → Open an issue on this repo or ask in the Teams channel.
- MCP — a standard way for AI agents to call tools. Your agent sends a request, the server sends back data.
- A2A — a protocol for agents to discover each other and describe their capabilities.
- LLM — a Large Language Model (ChatGPT, Claude, or local models via Ollama).
- Ollama — free software to run AI models on your laptop. No cloud, no API key, no cost.
- SMILE — our methodology for building digital twins. The LPI server contains all the knowledge about it.
Requirements: Node.js 18+ and npm.
```bash
git clone https://github.com/Life-Atlas/lpi-developer-kit.git
cd lpi-developer-kit
npm install
npm run build
npm run test-client
```

You should see all 7 tools pass. If they do, your sandbox is ready.
Troubleshooting:
- If `npm install` fails on slow internet, try `npm install --prefer-offline --no-audit`
- If `npm run build` shows TypeScript errors, check your Node version with `node --version` (must be 18+)
- On Windows, use PowerShell or Git Bash, not cmd.exe
| Tool | Description |
|---|---|
| `smile_overview` | Full overview of the SMILE methodology — 6 phases, 3 perspectives, AI journey |
| `smile_phase_detail` | Deep dive into a specific SMILE phase |
| `query_knowledge` | Search the knowledge base (63 entries across 11 topic areas) |
| `get_case_studies` | Browse 10 anonymized case studies across 10 industries |
| `get_insights` | Get scenario-specific implementation advice |
| `list_topics` | Browse all available topics in the knowledge base |
| `get_methodology_step` | Step-by-step guidance for implementing a SMILE phase |
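Want to poke at the tools programmatically before wiring up a full agent? Here is a minimal sketch using the official Python MCP SDK (`pip install mcp`). The `"query"` argument key is an assumption; check each tool's schema from the `list_tools` output first.

```python
# Minimal sketch: call one LPI tool over stdio using the Python MCP SDK.
# Assumptions: `pip install mcp`, the server is built (npm run build), and
# query_knowledge accepts a "query" argument -- verify against list_tools.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="node", args=["dist/src/index.js"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            result = await session.call_tool("query_knowledge", {"query": "digital twins"})
            print(result.content)

asyncio.run(main())
```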
Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "lpi-sandbox": {
      "command": "node",
      "args": ["<full-path-to>/lpi-developer-kit/dist/src/index.js"]
    }
  }
}
```

LifeAtlas is building the world's first edge-native digital twin OS — a platform where AI agents help people understand and optimize their lives using the SMILE methodology (Sustainable Methodology for Impact Lifecycle Enablement).
This is a multi-track contributor program. Not everyone writes code — some people research, design, write, test, or build 3D visualizations. Every track contributes to the same product. Every contribution is real.
Five tracks:
- Agent Builders — code, AI agents, MCP tools, data pipelines
- Content & Research — case studies, course modules, competitive analysis, grant writing
- Design & UX — UI/UX, infographics, dashboards, brand materials
- 3D & Visualization — Unity, Unreal, Three.js, spatial twins, AR/VR
- QA & Security — testing, security audits, bug hunting, accessibility
Pick the track that matches your skills. Details for each track's screening challenge are below. You can switch tracks later — this is a starting point, not a cage.
Read the full program guide: PROGRAM.md — how sprints work, how tasks get assigned, IP ownership, what you get, what we expect.
This isn't a classroom exercise. This is open-source contribution to a live product.
Five metrics. Weighted equally. Assessed continuously.
1. Contribution Quality — Does your code work? Is it clean? Small and working beats large and broken.
2. Discussion & Collaboration — Do you ask good questions? Do you help teammates? Do you engage?
3. Explainability — Can you explain what your agent does and WHY? AI that can't explain itself is AI we don't ship.
4. Initiative — Do you find problems and solve them, or wait to be told?
5. Growth Velocity — Where did you start vs. where are you now? Someone who starts at Level 1 and ships a working agent by week 3 scores higher than someone who started at Level 3 and stayed there. We measure trajectory, not starting point.
Complete as many levels as you can. Each level brings you closer to a guaranteed place on the team.
- Level 1 — You're registered and on the leaderboard
- Level 2 — You've proven you can set up and run the tools
- Level 3 — You're in. Completing Level 3 guarantees your place on the WINNIIO / LifeAtlas contributor team.
Deadline: 7 days from receiving this.
- Fork this repo
- Copy `contributors/TEMPLATE.json` to `contributors/your-name.json`
- Fill in your details:

```json
{
  "name": "Your Full Name",
  "github": "your-github-username",
  "program": "B.Tech CSE / M.Tech AI / etc",
  "campus": "Amity Noida / Lucknow / etc",
  "skills": ["python", "pytorch", "react"],
  "interests": ["agents", "NLP", "3D", "security"],
  "my_twin": "Write something real here — see below"
}
```

About the `my_twin` field: There is no wrong answer. We want to know how YOU think about personal data, not what sounds impressive. Bad answer: "I would track my CGPA." Good answer: "I would track my energy levels throughout the day and correlate them with what I ate and how I slept, because I crash every afternoon and don't know why." Be specific. Be personal. This is the most interesting field in the form.

- Submit a PR. Title: `level-1: Your Name`

That's it. You're registered.
Everyone runs the LPI sandbox first (all tracks):
```bash
git clone <your-fork>
cd lpi-developer-kit
npm install
npm run build
npm run test-client
```

Capture the output showing all 7 tools pass. Then pick your track:
Level 2 (45-90 min): Run the LPI sandbox (above) + install Ollama and run a local LLM. Submit: test client output + LLM output + 3 sentences on what surprised you about SMILE.
Level 3 (2-4 hours): Build an AI agent that connects to the LPI and does something useful. Use any approach: Python, LangGraph, CrewAI, raw API calls. Must query at least 2 LPI tools and cite its sources. Submit as a separate GitHub repo. See examples/agent.py for a working starting point.
Level 2 (30-60 min): Run the LPI sandbox (above). Read the SMILE methodology output from smile_overview and at least 2 case studies. Write a 1-page summary: how does SMILE apply to an industry YOU find interesting? (healthcare, manufacturing, energy, agriculture, smart cities, sports — your choice).
Level 3 (2-4 hours): Find 3 real-world digital twin implementations NOT already in the LPI knowledge base. For each: describe the challenge, the approach, the outcome, and which SMILE phases were applied. Sources must be cited. Submit as a structured document in submissions/your-name/.
Level 2 (30-60 min): Run the LPI sandbox (above). Read the SMILE methodology output. Sketch a visual representation of the 6 SMILE phases that a non-technical person could understand in 30 seconds. Any tool: Figma, Canva, pen and paper (photograph it).
Level 3 (2-4 hours): Design a dashboard mockup for ONE of: (a) a personal health digital twin showing sleep, nutrition, exercise, and stress, (b) a professional's client management view, or (c) a course landing page for "The Future of People: Digital Twins for Life." Submit as images or a Figma link in submissions/your-name/.
Level 2 (30-60 min): Run the LPI sandbox (above). Read about Reality Emulation (Phase 1). Write a short description of what a 3D "Reality Canvas" would look like for: a hospital, a horse stable, a smart building, or a human body.
Level 3 (2-4 hours): Build a simple 3D scene (Unity, Unreal, Godot, Three.js, or Blender) showing a digital twin concept. Must have at least one data overlay (temperature, heart rate, occupancy — whatever fits). Export a video walkthrough or a playable build. Submit a link in submissions/your-name/.
Level 2 (30-60 min): Run the LPI sandbox (above). Try to make it fail. Send unexpected inputs, long queries, special characters. Document everything — what broke, what didn't, what error messages you got. Submit as a bug report.
Level 3 (2-4 hours): We have planted at least 5 intentional security vulnerabilities in `examples/vulnerable-api.py`. Run it locally (`pip install flask && python examples/vulnerable-api.py`) and write a security audit report:
- Find as many vulnerabilities as you can
- Classify each one (OWASP category)
- Explain the impact (what could an attacker do?)
- Propose a fix for each
Also audit the LPI sandbox server itself (src/) for any real issues. Even finding nothing is valuable if your methodology is sound. Submit in submissions/your-name/.
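To calibrate what a finding should look like, here is a generic illustration of the classify/impact/fix shape. It is NOT taken from `examples/vulnerable-api.py`; the planted bugs there may be entirely different.

```python
# Illustration only: a generic Flask SQL injection and its fix.
# Use this shape for each finding: what, OWASP category, impact, fix.
import sqlite3

from flask import Flask, request

app = Flask(__name__)

@app.route("/user")
def get_user():
    name = request.args.get("name", "")
    db = sqlite3.connect("app.db")
    # VULNERABLE (OWASP A03: Injection): user input concatenated into SQL.
    # Impact: attacker reads/modifies arbitrary rows, e.g. name=' OR '1'='1
    # row = db.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchone()
    # FIX: a parameterized query keeps input as data, never as SQL.
    row = db.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchone()
    return {"user": row}
```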
Submit all Level 2/3 work as PRs. Title: `level-2: Your Name` or `level-3: Your Name`
Required for all Level 2/3 submissions: Include a HOW_I_DID_IT.md in your submission folder. Write in your own words:
- What you did, step by step
- What problems you hit and how you solved them
- What you learned that you didn't know before
This is not optional. We read these to understand how you think. AI-generated write-ups are easy to spot and will count against you. Write like you're explaining it to a teammate, not a professor.
Share your achievement on LinkedIn. When you reach Level 2 or 3, use the Share button on the leaderboard to post about it. This is part of the program — we want to see you communicate what you've built, not just build it. Tag @WINNIIO AB, @LifeAtlas, and @Nicolas Waern.
For Track A specifically, your agent must:
- Accept a user question or input
- Query at least 2 LPI tools to get relevant knowledge
- Process the results — summarize, analyze, combine, or visualize
- Cite which LPI tools and data it used — explainable AI means the user can trace every part of the answer back to its source
Use any approach: raw Python, LangGraph, CrewAI, LangChain, any LLM (local or cloud), or no LLM at all. A working example is in examples/agent.py.
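As a compressed sketch of that contract (query two or more tools, then cite them), assuming the Python MCP SDK and empty tool arguments, which you should verify against `list_tools`:

```python
# Sketch of the Level 3 contract: query 2+ LPI tools, then cite them.
# Tool argument shapes here are assumptions; check list_tools for real schemas.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def answer(question: str) -> None:
    params = StdioServerParameters(command="node", args=["dist/src/index.js"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            overview = await session.call_tool("smile_overview", {})
            cases = await session.call_tool("get_case_studies", {})
            # ...summarize/combine with the LLM of your choice here...
            print("Answer based on:")
            print("- smile_overview:", len(overview.content), "content block(s)")
            print("- get_case_studies:", len(cases.content), "content block(s)")

asyncio.run(answer("How do I start a digital twin project?"))
```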
Submit as a separate GitHub repo with a link in submissions/your-name/level3.md.
PR title: `level-3: Your Name`
Optional bonus: Include an A2A Agent Card (agent.json) describing your agent's capabilities. See submissions/AGENT_CARD_TEMPLATE.json for the format. This is not required but demonstrates you understand how agents discover each other.
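If you attempt the bonus, `submissions/AGENT_CARD_TEMPLATE.json` is the authoritative format. For orientation only, here is a hypothetical card written out from Python; the field names follow the public A2A spec and may differ from the template, and the name, URL, and skill values are invented:

```python
# Hypothetical A2A Agent Card sketch. Field names follow the public A2A spec;
# defer to submissions/AGENT_CARD_TEMPLATE.json for the required format.
import json
from pathlib import Path

card = {
    "name": "smile-advisor",
    "description": "Answers SMILE methodology questions using the LPI",
    "url": "http://localhost:8001",
    "version": "0.1.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "smile-qa",
            "name": "SMILE Q&A",
            "description": "Queries smile_overview and query_knowledge, cites sources",
            "tags": ["smile", "digital-twin"],
        }
    ],
}

Path(".well-known").mkdir(exist_ok=True)
Path(".well-known/agent.json").write_text(json.dumps(card, indent=2))
```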
| | Level 1 | Level 2 | Level 3 |
|---|---|---|---|
| Time | 5 min | 45-90 min | 2-4 hours |
| Proves | Git basics, following instructions | Dev setup, local AI, reading comprehension | Independent building, agent design, explainability |
| Key signal | The `my_twin` answer shows how you think | Model choice + SMILE reflection shows curiosity | Does it work? Can it explain itself? |
Already finished Level 3? This is optional. It won't affect your place on the team — but it will affect how we see you.
Level 4 combines two hard problems: agent-to-agent communication and security hardening. You build a system that works AND prove it can't be broken.
Build two (or more) AI agents that:
- Discover each other using A2A Agent Cards — each agent publishes a `.well-known/agent.json` describing its capabilities
- Communicate — Agent A asks Agent B a question, Agent B queries the LPI, returns an answer. The agents must exchange structured data, not just text.
- Do something the LPI alone can't — combine knowledge across agents. Example: one agent specializes in SMILE methodology, another in industry case studies. Together they produce a recommendation neither could alone.
- Be hardened against attack — your system must resist the following (one illustrative guard is sketched after this list):
- Prompt injection (malicious inputs that try to override agent instructions)
- Data exfiltration (inputs designed to leak system prompts or internal data)
- Denial of service (inputs that cause infinite loops or resource exhaustion)
- Privilege escalation (one agent trying to make the other do something it shouldn't)
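No specific defense is prescribed. As one example of the kind of guard rail we mean, here is a deliberately naive input gate; the patterns and limits are assumptions, and real submissions should go well beyond this.

```python
# A deliberately naive input gate -- one example of hardening, not a checklist.
# Patterns and limits below are assumptions; tune them to your threat model.
import re

MAX_INPUT_CHARS = 4000          # crude DoS guard: reject oversized inputs
INJECTION_PATTERNS = [          # crude prompt-injection tripwires
    r"ignore (all |previous )?instructions",
    r"reveal (your )?(system )?prompt",
]

def gate(user_input: str) -> str:
    if len(user_input) > MAX_INPUT_CHARS:
        raise ValueError("input too long (possible resource exhaustion)")
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_input, re.IGNORECASE):
            raise ValueError("input matches a known injection pattern")
    return user_input

# Treat everything crossing the agent boundary as untrusted -- including
# messages from the *other* agent, not just the end user.
```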
In submissions/your-name/level4/:
- Working code — both agents, runnable with clear instructions
- Agent Cards — valid A2A JSON for each agent
- Threat model — what attacks did you consider? What's the attack surface?
- Security audit — test your own system. Try to break it. Document what you found and what you fixed.
- Demo — a recording, screenshot sequence, or transcript showing the agents working together
| Criteria | Points |
|---|---|
| Agents discover each other via A2A | 3 |
| Structured data exchange (not just text) | 3 |
| Combined output adds value beyond single agent | 4 |
| Threat model covers real attack vectors | 3 |
| Self-audit finds real issues | 3 |
| Fixes are implemented, not just documented | 2 |
| Code quality, documentation, explainability | 2 |
PR title: `level-4: Your Name`
This is the deep end. No hand-holding. No templates. If you can do this, you're not an intern — you're a contributor.
The best submissions show real thinking in their HOW_I_DID_IT.md. These are worth reading:
| Contributor | Level | What stands out |
|---|---|---|
| Shubham Kumar | L4 | Problem-solving under constraints, production thinking |
| Daksh Garg | L4 | 10+ attack vector security audit with fixes |
| Sanskriti | L4 | Analogical reasoning across industries, research rigor |
| Shourya Solanki | L4 | External repo + working demo |
Your reflection matters. It shows us how you think — not just what you built. Write it like you're explaining to a teammate who needs to pick up your code tomorrow.
Everything you build is a module. Your own repo, your own code, plugging into the LPI. You don't edit our core platform. You build on top of it. By the end of this program, you will have built a personal backpack of AI agents — tools that connect to real data, use real methodology, and produce explainable results. This is your portfolio. It goes on your GitHub. It's yours.
Every working day. Camera on. Be prepared. Be on time.
This is where you learn the most. You will be in active discussions with your team. Come ready to:
- Show what you built (screen share)
- Explain what you're working on
- Ask for help or offer help to teammates
Don't worry about having something perfect to show — showing a broken build and asking for help is more valuable than sitting silent. The point is engagement, not perfection.
If you prefer written communication, async written updates are accepted — but you must still attend and listen. The learning happens in the discussion.
| Day | Activity |
|---|---|
| Mon-Thu | Build. Daily standup with your team. |
| Friday | Demo + Learning — team demos what shipped that week. We share cutting-edge developments in AI agents, MCP, open-source LLMs, explainable AI, and edge computing. This is your window into what the industry is building right now. |
After the first 1-2 weeks of contribution, we reassess:
- What have you actually shipped?
- What skills are emerging?
- What do YOU want to go deeper on?
You'll move into a focus team based on your performance and your preference. We want you where you're most effective AND most excited.
Teams form after the initial assessment:
- Agent Builders — build AI agents on the LPI using local and cloud LLMs
- AI/ML Pipeline — RAG, embeddings, fine-tuning, model evaluation
- Research & Content — SMILE documentation, case studies, course materials
- QA & Security — break things, find vulnerabilities, write test suites
- Data & Knowledge — knowledge graphs, data pipelines, analytics
- 3D & Visualization — spatial twins, Unreal Engine (separate brief)
- Real AI agent development experience on a production platform
- Published open-source work on YOUR GitHub — your name, your code, your portfolio
- Your personal agent backpack — a collection of working tools you built
- Experience with cutting-edge tooling: MCP protocol, open-source LLMs, explainable AI, edge-native architecture
- Weekly exposure to industry-current AI developments — not textbook material
- Letter of recommendation based on your actual contributions
- Show up. Daily standups, on time, camera on.
- Ship. Working code, every week. Small is fine. Broken is not.
- Explain. If you can't explain what your agent does and why, it's not done.
- Collaborate. This is a team sport. Help others. Ask for help.
- Own your work. Your repos, your commits, your `Signed-off-by`. Every contribution has your name on it.
This developer kit uses two complementary protocols:
- MCP (Model Context Protocol) is a tool-calling protocol. It defines how your agent invokes a specific tool: "call `query_knowledge` with `query='digital twins'`". MCP handles execution — input, output, streaming. Think of it as the USB cable between devices.
- A2A (Agent-to-Agent Protocol) is a discovery and task-routing protocol. It defines how agents find each other and describe what they can do. Think of it as the business card you exchange before plugging in the cable.
In this developer kit:
- The LPI server publishes an A2A Agent Card (`.well-known/agent.json`) describing its 7 tools as skills.
- Your agent can also publish an Agent Card describing what it does.
- Execution happens over MCP (subprocess/stdio locally). A2A tells you what exists; MCP lets you use it.
Discovery: A2A Agent Card → "What can you do?"
Execution: MCP Protocol → "Do this specific thing."
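Put together, a client first fetches the card over HTTP, then executes over MCP. A minimal sketch, assuming the card is served at a local URL of your choosing:

```python
# Sketch: discover via A2A (HTTP GET of the Agent Card), execute via MCP.
# The host/port is an assumption; serve .well-known/agent.json however you like.
import requests

card = requests.get("http://localhost:8001/.well-known/agent.json", timeout=5).json()
print(card["name"], "offers skills:", [s["name"] for s in card.get("skills", [])])
# ...then open an MCP stdio session (see the earlier sketch) to actually
# call the tool you discovered.
```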
An example Python agent is included in `examples/agent.py`. It demonstrates:
- Connecting to the LPI MCP server via subprocess
- Calling 3 LPI tools to gather context
- Passing results to a local LLM (Ollama) with a provenance-tracking prompt
- Returning an explainable answer that cites which tools provided which information
```bash
# Prerequisites: npm run build, ollama serve, ollama pull qwen2.5:1.5b
pip install requests
python examples/agent.py "What is the SMILE methodology and how do I start?"
```

Use this as a starting point for your Level 3 submission. Extend it, replace it, or build something completely different — as long as it uses LPI tools and explains itself.
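The provenance-tracking step boils down to labeling each tool result in the prompt so the model can cite it. A sketch of that pattern against Ollama's HTTP API; the prompt wording is illustrative, not the one in agent.py:

```python
# Sketch of a provenance-tracking prompt against Ollama's /api/generate.
# The [SOURCE: ...] labels push the model to say where each claim came from.
import requests

tool_results = {
    "smile_overview": "...output from the LPI...",
    "get_case_studies": "...output from the LPI...",
}
sources = "\n\n".join(f"[SOURCE: {k}]\n{v}" for k, v in tool_results.items())
prompt = (
    "Answer using ONLY the sources below. After each claim, cite the "
    f"[SOURCE: tool_name] it came from.\n\n{sources}\n\n"
    "Question: What is the SMILE methodology and how do I start?"
)
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5:1.5b", "prompt": prompt, "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```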
- MCP Protocol Specification
- A2A Protocol Specification
- Ollama — Run LLMs locally
- SMILE Methodology
- LifeAtlas on GitHub
MIT — build whatever you want on top of this.