Codex & Claude Code skill for analyzing local AI usage and generating reports or optional Remotion videos.
usage-insights is an installable skill for Codex and Claude Code. Once a user installs it, the agent can collect that user's own local AI activity from the current machine and turn it into:
- a written usage report
- a typed data file for reuse
- an optional poster or MP4
The repository is intended for people who want a repeatable workflow for reviewing how they use Codex, Claude, Gemini, and Antigravity across projects and time periods without hand-assembling datasets.
Install the skill from this GitHub subpath:
```
aldegad/usage-insights/usage-insights
```
Example prompt after installation:
```
Use $usage-insights to analyze my local AI usage and write a report.
Use $usage-insights to generate my usage report, poster, and video.
```
Copy the usage-insights/ directory into your Claude Code skills folder:
```shell
# Personal (all projects)
cp -r usage-insights ~/.claude/skills/usage-insights

# Project-specific
cp -r usage-insights .claude/skills/usage-insights
```

Example prompt after installation:

```
/usage-insights
Use /usage-insights to generate my usage report, poster, and video.
```
On the machine where the skill is used, the analyzer reads the current user's local data when available:
- `~/.codex`
- `~/.claude`
- `~/.gemini/antigravity`
- local Antigravity app logs
This means another person can install the same skill and generate a report or video from their own local history without editing the analyzer code first.
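The data-source discovery above can be sketched roughly as follows. This is an illustrative snippet, not the skill's actual analyzer code; the real implementation may check more locations (such as Antigravity app logs) and parse each provider's file formats.

```python
from pathlib import Path

# Illustrative provider-directory map; the shipped analyzer may differ.
PROVIDER_DIRS = {
    "codex": Path.home() / ".codex",
    "claude": Path.home() / ".claude",
    "gemini": Path.home() / ".gemini",
}

def available_providers() -> list[str]:
    """Return providers whose local data directory exists on this machine."""
    return [name for name, path in PROVIDER_DIRS.items() if path.is_dir()]
```

A machine with only Claude Code history would simply report `["claude"]`; providers with no local data are skipped rather than causing errors.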
For the common case, the skill now ships with a one-command runner:
```shell
python3 usage-insights/scripts/run_usage_insights.py
```

That command will:

- create or reuse `.usage-insights-workspace` in the current directory
- install dependencies when needed
- generate `INSIGHTS.md` and `src/data/usage-insights.generated.ts`
- render both the poster and MP4 by default
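The create-or-reuse workspace step can be sketched like this. The helper name is hypothetical; the shipped `run_usage_insights.py` may implement it differently.

```python
from pathlib import Path

def ensure_workspace(base: Path) -> Path:
    """Create `.usage-insights-workspace` under `base`, or reuse it if present."""
    ws = base / ".usage-insights-workspace"
    ws.mkdir(exist_ok=True)  # no error when a previous run already created it
    return ws
```

Reusing the workspace means repeated runs in the same directory stay fast and keep dependencies installed.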
If you want a dedicated reusable workspace instead, use the bootstrap flow:
```shell
python3 usage-insights/scripts/create_project.py --dest ~/usage-insights-project --install
cd ~/usage-insights-project
npm run analyze
npm run dev
npm run render:poster
npm run render:video
```

Typical flow:
- Install the skill.
- Ask the agent to use the skill (`$usage-insights` in Codex, `/usage-insights` in Claude Code).
- Let the skill run `run_usage_insights.py` in the current directory.
- Review the generated `INSIGHTS.md`, poster, and MP4 outputs.
- Use the dedicated workspace flow only when you want a long-lived project to tweak manually.
The generated workspace produces:
- `INSIGHTS.md`
- `src/data/usage-insights.generated.ts`
- `src/data/insights-data.json`
- optional poster and MP4 exports under `out/`
- Codex: token totals, session counts, project grouping
- Claude: token totals when raw local logs are available, plus activity metadata
- Gemini: activity traces and project labels
- Antigravity: activity traces from local app logs
Gemini and Antigravity are intentionally kept out of token charts unless reliable token ledgers are available.
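That filtering rule can be illustrated with a small sketch. The data shape and provider set here are assumptions for demonstration, not the generated file's actual schema.

```python
# Sample session records; `tokens` is None where no reliable token ledger exists.
sessions = [
    {"provider": "codex", "tokens": 125_000},
    {"provider": "claude", "tokens": 98_000},
    {"provider": "gemini", "tokens": None},       # activity traces only
    {"provider": "antigravity", "tokens": None},  # activity traces only
]

# Only providers with trustworthy token ledgers feed the token charts.
TOKEN_CHART_PROVIDERS = {"codex", "claude"}

chart_rows = [
    s for s in sessions
    if s["provider"] in TOKEN_CHART_PROVIDERS and s["tokens"] is not None
]
total_tokens = sum(s["tokens"] for s in chart_rows)
print(total_tokens)  # 223000 for this sample data
```

Gemini and Antigravity sessions still appear in activity views; they are only excluded from token aggregation.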
- `usage-insights`: installable Codex / Claude Code skill
- `usage-insights/scripts/run_usage_insights.py`: one-command runner for report + poster + video
- `usage-insights/scripts/create_project.py`: workspace bootstrap script
- `usage-insights/assets/remotion-template`: analyzer and video template
- `usage-insights/references`: data-source and security notes
- `scripts/generate_example_gif.py`: helper for regenerating the example GIF from an MP4
When distributing this repo as a skill:
- share the `usage-insights` subpath, not just the Remotion template
- keep private outputs like `INSIGHTS.md` and generated data outside the published skill payload
- tell users they still need local provider data on their own machine for meaningful analysis
This repository is safe to publish because it contains generic code, templates, documentation, and sample media.
Generated outputs should still be reviewed before sharing. They may contain:
- project names
- working dates and rhythms
- provider mix and token intensity
- interpretive summaries about habits or workflow
If the final artifact is public-facing, sensitive project names and date ranges should be redacted or generalized first.
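A redaction pass like this could be run over a report before sharing. It is a minimal sketch of the idea, not a feature of the skill; the function name and replacement scheme are hypothetical.

```python
import re

def redact_projects(text: str, project_names: list[str]) -> str:
    """Replace each named project with a generic placeholder."""
    for i, name in enumerate(project_names, start=1):
        text = re.sub(re.escape(name), f"project-{i}", text)
    return text

print(redact_projects("Heavy Codex use in acme-api this week.", ["acme-api"]))
# → Heavy Codex use in project-1 this week.
```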
MIT

