Track C: Mini AI Feature (Frontend + Backend)
So this is Message Coach: a web app that helps you rewrite messages so they land the way you intended. The problem I was trying to solve is that sometimes when you write an email or a DM, it comes across way harsher than you meant it to. Or maybe it's unclear and people misunderstand what you're trying to say.
The app takes your original message and:
- Analyzes its perceived tone (like "aggressive" or "neutral")
- Gives it a clarity score from 0-100
- Finds potential risk flags (things that might be misinterpreted or sound bad)
- Rewrites it in a tone you choose (Professional, Friendly, Concise, or More Polite)
- Explains what changed and why in bullet points
I think it's pretty useful for students, professionals, anyone really. Especially when you're writing something important and want to make sure it sounds right.
Check out the demo video: https://youtu.be/D5mKkuEvuQg
- Frontend: React + Vite + TypeScript
- Backend: Node.js + Express
- LLM: Google Gemini API (using gemini-2.5-flash)
The AI logic lives in the `/api/message` endpoint on the backend. Here's the flow:
- User submits message and selects tone
- Backend validates the input (makes sure they actually sent something)
- Builds a prompt that includes:
- The JSON schema it needs to return
- The original message
- The target tone
- Instructions to analyze tone/clarity/risks, rewrite, and explain
- Calls the Gemini API with the `gemini-2.5-flash` model
  - Uses `responseMimeType: 'application/json'` to force JSON output
  - Temperature set to 0.7
- Parses the JSON response (Gemini wraps it in a nested structure)
- Validates that all required fields are there
- Sends it back to the frontend
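The prompt-building step above can be sketched roughly like this. This is an illustration, not the actual server code: the function name `buildPrompt`, the schema constant, and the exact wording are my own.

```javascript
// Hypothetical sketch of the prompt-building step. The key idea:
// embed the required JSON schema, the target tone, and the raw
// message in a single instruction block for the model.
const RESPONSE_SCHEMA = `{
  "rewritten": "string",
  "analysis": {
    "perceivedTone": "string",
    "clarityScore": 0,
    "riskFlags": ["string"]
  },
  "explanation": ["string"]
}`;

function buildPrompt(message, tone) {
  return [
    'You are a message coach. Analyze the message below, then rewrite it.',
    `Target tone: ${tone}`,
    'Respond ONLY with JSON matching this schema:',
    RESPONSE_SCHEMA,
    'Message:',
    message,
  ].join('\n\n');
}
```

The resulting string would then be passed to the Gemini call along with `responseMimeType: 'application/json'` in the generation config.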
The response always looks like this:
```json
{
  "rewritten": "string",
  "analysis": {
    "perceivedTone": "string",
    "clarityScore": 72,
    "riskFlags": ["string", "..."]
  },
  "explanation": ["string", "..."]
}
```

This schema makes it predictable - the frontend always knows what to expect.
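The "validates that all required fields are there" step can be a small guard against this schema. Here's a sketch (the function name `isValidCoachResponse` is mine, not from the actual server code):

```javascript
// Sketch of the backend's field validation, assuming the response
// schema shown above. Returns true only if every required field is
// present with the expected type.
function isValidCoachResponse(data) {
  return (
    !!data &&
    typeof data.rewritten === 'string' &&
    !!data.analysis &&
    typeof data.analysis.perceivedTone === 'string' &&
    typeof data.analysis.clarityScore === 'number' &&
    data.analysis.clarityScore >= 0 &&
    data.analysis.clarityScore <= 100 &&
    Array.isArray(data.analysis.riskFlags) &&
    Array.isArray(data.explanation)
  );
}
```

If the check fails, the backend can retry the model call or return an error instead of passing a malformed object to the frontend.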
- Node.js (v18 or higher)
- npm
- Google Gemini API key (get one from Google AI Studio)
1. Go to the server folder:

   ```
   cd server
   ```

2. Create a `.env` file (or copy the example file if one exists).

3. Add your API key to `server/.env`:

   ```
   GEMINI_API_KEY=your_api_key_here
   PORT=3000
   ```
```
cd server
npm install
node index.js
```

You should see "Server running on http://localhost:3000".
Open a new terminal:
```
cd client
npm install
npm run dev
```

The frontend will start on http://localhost:5173.
Then just open http://localhost:5173 in your browser and you're good to go!
- Works best for short-to-medium messages (like a few sentences to a paragraph)
- Only 4 tone options right now (could add more later)
- No saving messages or history (stateless demo)
- Needs an internet connection for the API calls
- Costs money based on usage (Google charges per token, but it's pretty cheap)
I used Cursor (an AI coding assistant) to help with:
- Setting up the initial project structure
- Figuring out the prompt design (took a lot of iteration to get consistent JSON)
- Some architecture suggestions
- Debugging when things broke
The hardest part was getting Gemini to return consistent JSON. Sometimes it would wrap the output in markdown fences or add extra text around it. I had to iterate on the prompt a bunch and add defensive parsing on the backend.
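As an illustration of that defensive parsing (my own sketch, not the exact server code): strip any markdown fences and surrounding prose, then parse only the JSON object itself.

```javascript
// Sketch of defensive JSON parsing: the model sometimes wraps its
// JSON in markdown code fences or adds prose around it, so we strip
// fences and extract the first {...} span before parsing.
function parseModelJson(text) {
  // Remove markdown code fence markers if present
  const unfenced = text.replace(/```(?:json)?/g, '');
  // Grab everything from the first '{' to the last '}'
  const start = unfenced.indexOf('{');
  const end = unfenced.lastIndexOf('}');
  if (start === -1 || end === -1 || end < start) {
    throw new Error('No JSON object found in model output');
  }
  return JSON.parse(unfenced.slice(start, end + 1));
}
```

This is deliberately forgiving: even if the model prefixes "Here is the JSON:" or fences the output, the object still parses, and the schema validation afterwards catches anything structurally wrong.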
This project taught me a lot about:
- Integrating with AI APIs (Gemini's response structure is different from OpenAI's)
- Error handling and validation
- Performance optimization (the scrolling was laggy at first)
- Building a full stack app from scratch
It's not perfect, but it works, and I'm pretty happy with how it turned out!