AI-powered annotations for localhost development. Create visual feedback on your apps and let AI coding agents automatically implement fixes via MCP integration.
Updated Apr 6, 2026 - JavaScript
Give your AI coding agent eyes and ears. Screen + voice capture → structured Markdown. MCP server, CLI, and macOS app.
A shared visual canvas for Claude Code — draw, diagram, and get visual feedback through CLI commands and an interactive browser UI.
Drop-in visual feedback widget for websites. Pin comments directly on page elements — Shadow DOM isolated, framework-agnostic.
A powerful, intuitive web application for reviewing static HTML mockups and live websites with visual feedback capabilities.
MCP server that gives AI coding assistants eyes to see visual output from local development environments
A visual feedback and annotation tool for Statamic. Pin feedback to page elements, capture screenshots, leave comments with @mentions, and track everything from the Control Panel. Works on both the frontend and in the CP.
Draw on your live website. Freehand feedback for AI agents.
Visual terminal feedback for AI coding sessions — background colors, tab titles, animated faces, and session identity icons for Claude Code, Gemini CLI, OpenCode, and Codex CLI
A VS Code extension that detects `push / pull / commit` actions (or runs them via extension commands) and displays the results with a slide effect in the right-hand panel.
Text to Emotion, Emotion to Color: We transform your words into stories painted with beautiful colors.
Vue.js plugin wrapper for Ybug's Feedback Widget
😊 Smile classification with a smile tracker: 🤖 detect and classify smiles using machine learning and computer vision 🧠 to analyze facial expressions and emotion patterns in real time 🎯.
Self-improving agent refining HTML design via visual feedback.
An EMG-controlled interface for BCI calibration with a gamification mode powered via Brainflow
HRI: Confirmation Methods for Deictic Gestures explores how different robot confirmation strategies—visual, verbal, body movement, or none—impact user experience during object retrieval tasks with the Baxter robot. Developed using ROS for an experimental HRI study on likability, animacy, and trust.