240 changes: 240 additions & 0 deletions submissions/Aryan/HOW_I_DID_IT.md
@@ -0,0 +1,240 @@
# How I Built the LPI Life Agent

## Step-by-Step Process

### Phase 1: Understanding the LPI System

I started by exploring how the LPI (Life Programmable Interface) works. The initial example showed how to connect to tools, but it wasn't clear how real tool execution happens. I realized that:

* The system uses JSON-RPC communication
* Tools are exposed via a Node.js server
* Proper initialization (`notifications/initialized`) is required before calling tools

This phase was mostly about understanding the protocol rather than writing code.
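
For concreteness, a minimal view of the exchange is sketched below. Only the `notifications/initialized` line is confirmed from my setup; the `initialize` and `tools/call` shapes are assumptions based on the standard JSON-RPC/MCP convention that the notification name suggests:

```json
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
{"jsonrpc": "2.0", "method": "notifications/initialized"}
{"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {"name": "smile_overview", "arguments": {}}}
```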

---

### Phase 2: Defining the Use Case

Instead of building a generic agent, I focused on a specific query:

> "How are digital twins used in healthcare?"

This helped me design the agent around:

* Conceptual understanding (methodology)
* Real-world application (case studies)

---

### Phase 3: Tool Selection Strategy

Rather than using many tools, I intentionally selected two:

* `smile_overview` → provides structured methodology
* `get_case_studies` → provides real-world implementations

The idea was:

> Combine theory + application to produce a meaningful answer

---

### Phase 4: Fixing Tool Execution

Initially, I used `test-client.js`, which only runs demo tests.

The key fix was:

* Switching to `dist/src/index.js` (the actual server)
* Sending an initialization message first:

```json
{"jsonrpc": "2.0", "method": "notifications/initialized"}
```

Without this, tool calls returned empty results.
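
Putting this together, here is a minimal sketch of the client side, assuming newline-delimited JSON-RPC over stdio. The server path and the initialization message are the ones above; the `initialize`/`tools/call` shapes are my assumption from standard JSON-RPC usage:

```python
import json
import subprocess

def call_tool(name, arguments):
    """Start the real LPI server, initialize it, and invoke one tool."""
    proc = subprocess.Popen(
        ["node", "dist/src/index.js"],  # the actual server, not test-client.js
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    messages = [
        {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}},
        # Without this notification, tool calls returned empty results.
        {"jsonrpc": "2.0", "method": "notifications/initialized"},
        {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
         "params": {"name": name, "arguments": arguments}},
    ]
    payload = "".join(json.dumps(m) + "\n" for m in messages)
    # communicate() writes stdin, waits for exit, and reads the full output.
    stdout, _ = proc.communicate(payload)
    return stdout
```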

---

### Phase 5: Parsing Tool Output

The biggest challenge was handling tool responses.

The output format was nested:

```json
{"result": {"content": [{"text": "..."}]}}
```

Instead of treating it as plain text, I extracted:

```python
content[0]["text"]
```

This allowed me to access actual usable data.
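
In code, the extraction step reduces to a few lines (assuming one JSON-RPC response object per line of stdout):

```python
import json

def extract_text(raw_response):
    """Unwrap the nested result -> content -> text structure."""
    response = json.loads(raw_response)
    return response["result"]["content"][0]["text"]
```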

---

### Phase 6: Improving Relevance

The `get_case_studies` tool returned multiple industries.

Problem:

* The first case study was often unrelated (e.g., smart buildings)

Solution:

* Modified tool arguments:

```python
{"query": "healthcare digital twin"}
```
* Extracted only the healthcare section from the response

This ensured the answer actually matched the user query.
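
The filtering itself stayed simple. A hypothetical sketch, assuming each industry appears under its own markdown heading (which matched the output I saw):

```python
def filter_healthcare(case_text):
    """Keep only the lines that belong to a healthcare section."""
    kept = []
    in_healthcare = False
    for line in case_text.splitlines():
        if line.lstrip().startswith("#"):  # a heading starts a new section
            in_healthcare = "healthcare" in line.lower()
        if in_healthcare:
            kept.append(line)
    return "\n".join(kept)
```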

---

### Phase 7: Structuring the Output

Instead of dumping raw text, I structured the response into:

* SMILE Framework (Summary)
* Case Study (Summary)
* Analysis
* Conclusion

This made the agent:

* easier to read
* more explainable
* aligned with real-world reasoning
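
Assembling the sections is deliberately mechanical. A sketch, where the `analysis` and `conclusion` strings stand in for text the agent composes elsewhere:

```python
def build_answer(smile_text, case_text, analysis, conclusion):
    """Render the four-part structured response."""
    sections = [
        ("SMILE Framework (Summary)", smile_text[:400]),  # simple truncation
        ("Case Study (Summary)", case_text[:400]),
        ("Analysis", analysis),
        ("Conclusion", conclusion),
    ]
    return "\n\n".join(f"## {title}\n\n{body}" for title, body in sections)
```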

---

## Problems I Faced

### 1. Wrong Execution Path

Using `test-client.js` produced demo logs instead of real tool data.

Fix:

* Switched to actual LPI server (`dist/src/index.js`)

---

### 2. Missing Initialization

Without sending the initialization message, tool calls silently failed.

Fix:

* Added JSON-RPC initialization before requests

---

### 3. Empty or Broken Output

Initially, outputs were empty or incomplete.

Cause:

* Reading a single line with `readline()` instead of the full output

Fix:

* Switched to `process.communicate()`
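
The difference, in sketch form (paths as above; the commented line shows the broken pattern):

```python
import subprocess

proc = subprocess.Popen(
    ["node", "dist/src/index.js"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
init = '{"jsonrpc": "2.0", "method": "notifications/initialized"}\n'

# Broken: returns only whichever single line is buffered first.
# proc.stdin.write(init); first_line = proc.stdout.readline()

# Fixed: send everything, wait for the process, read the complete output.
stdout, _ = proc.communicate(init)
```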

---

### 4. Irrelevant Case Studies

Tool returned multiple industries.

Fix:

* Filtered for healthcare-specific content

---

### 5. Poor Summarization

Splitting text into sentences on periods broke headings like `# S.M.I.L.E.`, whose dots are not sentence boundaries.

Fix:

* Switched to simple truncation (`text[:400]`)
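
The replacement summarizer, as a sketch:

```python
def summarize(text, limit=400):
    """Truncate rather than split on periods, so headings like
    '# S.M.I.L.E.' survive intact."""
    return text if len(text) <= limit else text[:limit].rstrip() + "..."
```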

---

## How I Solved Them

* Read and understood JSON-RPC communication instead of guessing
* Used proper server instead of test client
* Implemented structured parsing for nested responses
* Added domain-specific filtering for relevance
* Simplified summarization instead of overengineering

---

## What I Learned

### Tool Integration Matters More Than Models

The challenge wasn't the AI; it was correctly connecting to and using the tools.

---

### More Data ≠ Better Output

Raw tool output was too large and noisy. Filtering made answers significantly better.

---

### Explainability Improves Quality

Structuring output into sections made the agent more understandable and useful.

---

### Debugging Is the Real Work

Most of the time was spent fixing:

* paths
* protocol issues
* parsing

rather than writing agent logic.

---

### Simplicity Wins

The final agent is simple:

* 2 tools
* basic parsing
* structured output

But it works reliably.

---

## Final Thoughts

This project was less about building a complex AI system and more about:

* understanding how tools communicate
* extracting meaningful information
* presenting it clearly

The biggest takeaway was that a good agent is not defined by complexity, but by:

> how effectively it connects, filters, and explains information.

---
64 changes: 64 additions & 0 deletions submissions/Aryan/level2.md
@@ -0,0 +1,64 @@
# Level 2 Submission — Aryan

## LPI Sandbox Setup

All tools executed successfully, confirming that the LPI sandbox is functioning correctly. The test client demonstrated a modular architecture where each tool exposes a specific capability. Instead of relying on a single LLM response, the system operates through structured tool calls, making the agent behavior more transparent and controllable.

---

## Test Client Output

```
=== LPI Sandbox Test Client ===

[LPI Sandbox] Server started — 7 read-only tools available
Connected to LPI Sandbox

Available tools (7):

- smile_overview
- smile_phase_detail
- query_knowledge
- get_case_studies
- get_insights
- list_topics
- get_methodology_step

[PASS] smile_overview({})
[PASS] smile_phase_detail({"phase":"reality-emulation"})
[PASS] list_topics({})
[PASS] query_knowledge({"query":"explainable AI"})
[PASS] get_case_studies({})
[PASS] get_case_studies({"query":"smart buildings"})
[PASS] get_insights({"scenario":"personal health digital twin","tier":"free"})
[PASS] get_methodology_step({"phase":"concurrent-engineering"})

=== Results ===
Passed: 8/8
Failed: 0/8
```

All tools are operational. The LPI Sandbox is ready for agent development.

---

## Local LLM Setup (Ollama)

**Model:**
qwen2.5:1.5b

**Prompt:**
What is SMILE methodology?

**Response (summary):**
The model described SMILE as a structured approach to managing the data lifecycle (creation, storage, access, deletion), emphasizing automation and governance. However, this interpretation is not grounded in a recognized or standardized framework. The response reflects generic data management concepts rather than a formally defined methodology.
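
For reproducibility, the same prompt can be sent to the local model through Ollama's default HTTP endpoint (a sketch; it assumes Ollama is serving on `localhost:11434`):

```python
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "qwen2.5:1.5b",
        "prompt": "What is SMILE methodology?",
        "stream": False,  # return one JSON object instead of a token stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```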

---

## Observation

Running the model locally provides control over execution factors such as latency, hardware usage, and reproducibility. However, it does not expose internal reasoning, only observable outputs. This reinforces the need for external grounding (e.g., tools) rather than relying solely on model-generated explanations.

---

## Reflection on SMILE Methodology

The model's response aligns with general principles of data lifecycle management and system design. However, attributing these ideas to a defined "SMILE methodology" is unsupported. It is more accurate to interpret this as a generic abstraction rather than a validated framework.