diff --git a/.github/workflows/validate.yml b/.github/workflows/validate.yml new file mode 100644 index 0000000..ff8c040 --- /dev/null +++ b/.github/workflows/validate.yml @@ -0,0 +1,23 @@ +name: Validate Plots + +on: + push: + pull_request: + +jobs: + validate: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-python@v5 + with: + python-version: "3.11" + - run: python -m pip install --upgrade pip + - run: python -m pip install -r requirements.txt + - run: python build_all.py + - run: python scripts/generate_homepage.py + - run: python scripts/generate_readme_links.py + - run: python scripts/generate_sitemap.py + - run: python scripts/validate_all.py + - run: python scripts/check_links.py + - run: python scripts/check_accessibility_static.py diff --git a/README.md b/README.md index bfd68f8..f062b86 100644 --- a/README.md +++ b/README.md @@ -1,169 +1,198 @@ -# Plots +# Exponential Progress Atlas -Interactive timelines of exponential tech progress – showing growth, compression, and scaling laws enabling modern AI. +Interactive timelines showing how compute, energy, coordination, memory, and adoption compound into civilizational acceleration. -## Quick Links +The root inventory is [`plots_manifest.json`](plots_manifest.json). Homepage cards, README links, build ordering, dashboard lanes, and validation all read from that manifest. -- [AI Compute Timeline (Interactive)](ai-compute-timeline/output/ai_compute_timeline_interactive.html) -- [Adoption Timeline (Interactive)](adoption-timeline/output/adoption_timeline_interactive.html) -- [Energetic Scaling (Interactive)](energetic-scaling/output/energetic_scaling_interactive.html) -- [Civilization Scaling (Interactive)](civilization-scaling/output/civilization_scaling_interactive.html) -- [Energy Leverage (Interactive)](energy-leverage-per-person/output/energy_leverage_interactive.html) +## Published Inventory (9) + +- [AI Compute Timeline](ai-compute-timeline/output/ai_compute_timeline_interactive.html) +- [Adoption Timeline](adoption-timeline/output/adoption_timeline_interactive.html) +- [Energetic Scaling](energetic-scaling/output/energetic_scaling_interactive.html) +- [Civilization Scaling](civilization-scaling/output/civilization_scaling_interactive.html) +- [Energy Leverage](energy-leverage-per-person/output/energy_leverage_interactive.html) +- [Model Sizes](model-sizes/output/model_sizes_interactive.html) +- [AI Benchmark Progress](ai-benchmark-progress/output/benchmark_progress_interactive.html) +- [Cost to Train](cost-to-train/output/cost_to_train_interactive.html) +- [Unified Dashboard](dashboard/index.html) --- -## 1. AI Compute Timeline (1900–2026) +## 1. AI Compute Timeline + +Training compute from early electronic computing to frontier AI, with proxies and speculative projections labeled separately. -Training FLOPs milestones for AI history – from vacuum tubes to frontier models. +Hero stat: **10^27+ FLOPs**. Data confidence: **mixed**. 
-- **Interactive**: [View Full Interactive Version (Plotly – Hover/Zoom)](ai-compute-timeline/output/ai_compute_timeline_interactive.html) +- **Interactive**: [AI Compute Timeline](ai-compute-timeline/output/ai_compute_timeline_interactive.html) - **Static**: [PNG](ai-compute-timeline/output/ai_compute_timeline_highres.png) | [SVG](ai-compute-timeline/output/ai_compute_timeline.svg) +- **Data**: [ai-compute-timeline/data/ai_milestones.csv](ai-compute-timeline/data/ai_milestones.csv) +- **Metadata**: [ai-compute-timeline/data/meta.json](ai-compute-timeline/data/meta.json) - **Details**: [ai-compute-timeline/](ai-compute-timeline/) -![AI Compute Timeline](ai-compute-timeline/output/ai_compute_timeline_highres.png) - -*Zoom recommended for 2010+ frontier cluster (10²⁴–10²⁷+ range).* - --- -## 2. Accelerating Paradigms in Computing & Connectivity (1957–2030+) +## 2. Adoption Timeline -Time to ~50M users adoption – shows exponential compression from years to days. +Time-to-scale proxies across computing, connectivity, mobile, cloud, and AI paradigms. -- **Interactive**: [View Full Interactive Version (Plotly – Hover/Zoom)](adoption-timeline/output/adoption_timeline_interactive.html) +Hero stat: **60x faster**. Data confidence: **mixed**. + +- **Interactive**: [Adoption Timeline](adoption-timeline/output/adoption_timeline_interactive.html) - **Static**: [PNG](adoption-timeline/output/adoption_timeline_highres.png) | [SVG](adoption-timeline/output/adoption_timeline.svg) +- **Data**: [adoption-timeline/data/tech_adoption.csv](adoption-timeline/data/tech_adoption.csv) +- **Metadata**: [adoption-timeline/data/meta.json](adoption-timeline/data/meta.json) - **Details**: [adoption-timeline/](adoption-timeline/) -![Adoption Timeline](adoption-timeline/output/adoption_timeline_highres.png) - -*From ~10 years (1957) to ~60 days (2022) – a 60x+ acceleration, with AI pushing toward near-instant scaling.* - --- -## 3. Energetic Scaling: Biology vs. Technology +## 3. Energetic Scaling -Neural efficiency vs. body size (Biology) and compute efficiency vs. time (Tech) – both reveal power laws. +Biology, hardware efficiency, AI training compute, and foraging energetics compared with clean source datasets. -- **Interactive**: [View Full Interactive Version (Plotly – Hover/Zoom)](energetic-scaling/output/energetic_scaling_interactive.html) +Hero stat: **10^6x+ efficiency**. Data confidence: **mixed**. + +- **Interactive**: [Energetic Scaling](energetic-scaling/output/energetic_scaling_interactive.html) - **Static**: [PNG](energetic-scaling/output/energetic_scaling_highres.png) | [SVG](energetic-scaling/output/energetic_scaling.svg) +- **Data**: [energetic-scaling/data/scaling_data.csv](energetic-scaling/data/scaling_data.csv) +- **Metadata**: [energetic-scaling/data/meta.json](energetic-scaling/data/meta.json) - **Details**: [energetic-scaling/](energetic-scaling/) -![Energetic Scaling](energetic-scaling/output/energetic_scaling_highres.png) - -*Humans are the biological outlier (EQ~7); AI is the technological outlier (75 quadrillion-fold compute/$ since 1939).* - --- -## 4. Scaling Civilization: Energy, Coordination, Memory, Replication +## 4. Civilization Scaling -Multi-lane log-time timeline showing how four fundamental metrics have scaled from 1M years ago to 2030+. +Five civilizational lanes: energy, coordination, memory, replication, and latency over log-time. 
-- **Interactive**: [View Full Interactive Version (Plotly – Hover/Zoom)](civilization-scaling/output/civilization_scaling_interactive.html) +Hero stat: **5 lanes**. Data confidence: **mixed**. + +- **Interactive**: [Civilization Scaling](civilization-scaling/output/civilization_scaling_interactive.html) - **Static**: [PNG](civilization-scaling/output/civilization_scaling_highres.png) | [SVG](civilization-scaling/output/civilization_scaling.svg) +- **Data**: [civilization-scaling/data/civilization_metrics.csv](civilization-scaling/data/civilization_metrics.csv) +- **Metadata**: [civilization-scaling/data/meta.json](civilization-scaling/data/meta.json) - **Details**: [civilization-scaling/](civilization-scaling/) -![Civilization Scaling](civilization-scaling/output/civilization_scaling_highres.png) - -*Log-time compresses ~99% of human existence (pre-writing) into the left side, expanding modern acceleration on the right. Phase flips (Fire → Agriculture → Writing → Printing → Internet → AI) stack to enable exponential progress.* - --- -## 5. Energy Leverage per Person (NEW) +## 5. Energy Leverage -How much total energy does an average human command compared to the metabolic baseline (~114 W)? +Per-person energy command relative to the metabolic baseline, with period anchors labeled explicitly. -- **Interactive**: [View Full Interactive Version (Plotly – Hover/Zoom)](energy-leverage-per-person/output/energy_leverage_interactive.html) +Hero stat: **17x body energy**. Data confidence: **high**. + +- **Interactive**: [Energy Leverage](energy-leverage-per-person/output/energy_leverage_interactive.html) - **Static**: [PNG](energy-leverage-per-person/output/energy_leverage_highres.png) | [SVG](energy-leverage-per-person/output/energy_leverage.svg) +- **Data**: [energy-leverage-per-person/data/energy_leverage_datapoints.csv](energy-leverage-per-person/data/energy_leverage_datapoints.csv) +- **Metadata**: [energy-leverage-per-person/data/meta.json](energy-leverage-per-person/data/meta.json) - **Details**: [energy-leverage-per-person/](energy-leverage-per-person/) -![Energy Leverage](energy-leverage-per-person/output/energy_leverage_highres.png) +--- + +## 6. Model Sizes -*Humans went from ~1–2× body energy (foragers) to ~17× body energy (modern). The post-1750 coal/steam and post-1950 oil/electricity jumps dominate the visual story.* +Language model parameter counts over time, separating disclosed counts from estimates and unreleased projections. + +Hero stat: **1.5B -> 5T params**. Data confidence: **speculative**. + +- **Interactive**: [Model Sizes](model-sizes/output/model_sizes_interactive.html) +- **Static**: [PNG](model-sizes/output/model_sizes_highres.png) | [SVG](model-sizes/output/model_sizes.svg) +- **Data**: [model-sizes/data/llm_model_sizes.csv](model-sizes/data/llm_model_sizes.csv) +- **Metadata**: [model-sizes/data/meta.json](model-sizes/data/meta.json) +- **Details**: [model-sizes/](model-sizes/) --- -## Why These Plots? +## 7. 
AI Benchmark Progress -| Timeline | Shows | Trend | -|----------|-------|-------| -| **Compute** | Training FLOPs (10⁰ → 10²⁷) | Exponential **growth** ↑ | -| **Adoption** | Time to 50M users | Exponential **compression** ↓ | -| **Energetic** | Neurons/kg & cps/$ | **Power laws** (log-log linear) | -| **Civilization** | Energy/Coord/Memory/Repl | **Stacking infrastructure** layers | -| **Energy Leverage** | Watts/person vs metabolic | **~17× body energy** (modern) | +Benchmark progress against human baselines across knowledge, coding, software engineering, and reasoning tasks. -Together they illustrate: -- **Compute**: Raw exponential growth enabling AI scale -- **Adoption**: Ecosystem acceleration compressing timelines -- **Energetic**: Fundamental scaling rules – humans and AI as outliers -- **Civilization**: How infrastructure layers (fire → writing → internet → AI) enable each successive leap -- **Energy Leverage**: Per-person energy command from foragers (~2×) to modern (~17×) +Hero stat: **4 benchmark lanes**. Data confidence: **mixed**. -Inspired by Kurzweil, [Epoch AI](https://epochai.org/), [Statista](https://www.statista.com/), [Asymco](http://www.asymco.com/), Herculano-Houzel (neuronal scaling), Kleiber (metabolic 0.75), Kaplan/Charnov (LHT/OFT). +- **Interactive**: [AI Benchmark Progress](ai-benchmark-progress/output/benchmark_progress_interactive.html) +- **Static**: [PNG](ai-benchmark-progress/output/benchmark_progress_highres.png) | [SVG](ai-benchmark-progress/output/benchmark_progress.svg) +- **Data**: [ai-benchmark-progress/data/benchmark_data.csv](ai-benchmark-progress/data/benchmark_data.csv) +- **Metadata**: [ai-benchmark-progress/data/meta.json](ai-benchmark-progress/data/meta.json) +- **Details**: [ai-benchmark-progress/](ai-benchmark-progress/) --- -## Repository Structure +## 8. Cost to Train -Each plot follows a standardized structure: +Training cost, FLOPs, and capability over time, showing the efficiency paradox at the frontier. -``` -/ -├── index.html # Interactive page (uses shared/site.css) -├── data/ -│ ├── .csv # Source data -│ └── meta.json # Metadata: title, description, fields, sources -├── output/ -│ ├── *_interactive.html # Plotly interactive chart -│ ├── *_highres.png # High-res PNG export -│ └── *.svg # SVG vector export -├── src/ -│ ├── *.py # Matplotlib static generator -│ └── *_plotly.py # Plotly interactive generator -└── README.md -``` - -### Shared Assets +Hero stat: **$/FLOP collapse**. Data confidence: **mixed**. -- `shared/site.css` – Common styles for all pages -- `shared/site.js` – Navigation bar injection -- `scripts/validate_all.py` – Validate all plots (run: `python scripts/validate_all.py`) +- **Interactive**: [Cost to Train](cost-to-train/output/cost_to_train_interactive.html) +- **Static**: [PNG](cost-to-train/output/cost_to_train_highres.png) | [SVG](cost-to-train/output/cost_to_train.svg) +- **Data**: [cost-to-train/data/training_costs.csv](cost-to-train/data/training_costs.csv) +- **Metadata**: [cost-to-train/data/meta.json](cost-to-train/data/meta.json) +- **Details**: [cost-to-train/](cost-to-train/) --- -## Development +## 9. Unified Dashboard -### Install dependencies +A synchronized overview of the atlas inventory using the same manifest as the homepage, README, build, and validator. -```bash -pip install -r requirements.txt -``` +Hero stat: **9 atlas entries**. Data confidence: **mixed**. 
-### Regenerate all plots +- **Interactive**: [Unified Dashboard](dashboard/index.html) +- **Data**: [plots_manifest.json](plots_manifest.json) +- **Metadata**: [plots_manifest.json](plots_manifest.json) +- **Details**: [dashboard/](dashboard/) -```bash -python build_all.py -``` -### Regenerate a specific plot +--- + +## Data Contracts -```bash -cd /src -python .py # Static PNG/SVG -python _plotly.py # Interactive HTML +- `ai-compute-timeline/data/ai_milestones.csv` uses normalized fields: `year,event,category,value_numeric,value_low,value_high,value_unit,estimate_status,source_id,confidence,display_label,notes`. +- `adoption-timeline/data/tech_adoption.csv` includes `adoption_metric_type`, `comparability_level`, `source_id`, `confidence`, and notes so unlike adoption proxies are not treated as perfectly comparable. +- Energetic Scaling keeps comparison-level data in `scaling_data.csv` and splits clean source contracts into `biology_neural_scaling.csv`, `hardware_efficiency.csv`, `ai_training_flops.csv`, and `foraging_lht.csv`. + +## Repository Structure + +Each plot should follow this structure: + +```text +/ +├── index.html +├── data/ +│ ├── .csv +│ └── meta.json +├── output/ +│ ├── *_interactive.html +│ ├── *_highres.png +│ └── *.svg +├── src/ +│ ├── *.py +│ └── *_plotly.py +└── README.md ``` -### Validate all plots +## Development ```bash +python -m pip install -r requirements.txt +python build_all.py +python scripts/generate_homepage.py +python scripts/generate_readme_links.py python scripts/validate_all.py +python scripts/check_links.py +python scripts/check_accessibility_static.py ``` ---- +## Adding a New Plot + +1. Create the standard plot directory structure. +2. Add data, metadata, generator scripts, output paths, and README. +3. Add the entry to `plots_manifest.json` with `status: "draft"` until outputs and sources pass validation. +4. Run the build, generators, validators, link checker, and accessibility checker. +5. Change `status` to `"published"` only when the plot should appear on the homepage and dashboard. -## Contributing +## Deployment -Ideas, new milestones, or bug reports welcome! See [CONTRIBUTING.md](CONTRIBUTING.md). +GitHub Pages deploys should run the same validation commands in CI before publishing. A failed build, broken relative link, missing alt text, stale output, or manifest mismatch should block deployment. ## License diff --git a/adoption-timeline/data/meta.json b/adoption-timeline/data/meta.json index bce71f3..7f75dd2 100644 --- a/adoption-timeline/data/meta.json +++ b/adoption-timeline/data/meta.json @@ -1,34 +1,51 @@ { "title": "Adoption Timeline", - "description": "Time to ~50M users adoption, showing exponential compression from years to days (1957-2026).", + "description": "Time-to-scale proxies across technology paradigms. 
The chart labels metric type and comparability so FORTRAN, ARPANET, iPhone, ChatGPT, and agentic tools are not treated as perfectly equivalent.", "fields": { - "Year": "Launch year of technology/product", + "Year": "Launch year of technology or product", "Event": "Name of technology or product", - "Category": "Tech category (Hardware, Software, Internet/Web, Mobile, Social/Apps, Cloud, AI/Agentic)", - "Days_to_Adoption": "Days to reach ~50M users or equivalent adoption milestone", - "Impact": "Historical impact level (Transformative, High, Medium, Speculative)" + "Category": "Technology category", + "Days_to_Adoption": "Days to reach the stated proxy scale", + "Impact": "Historical impact level", + "adoption_metric_type": "Metric type: users, developers, devices, accounts, installs, or organizations", + "comparability_level": "strict_proxy, comparable_proxy, rough_analogy, or projection", + "source_id": "Source identifier for auditable rows", + "confidence": "Qualitative confidence", + "comparability_notes": "Why the proxy is or is not comparable with other rows", + "notes": "Additional audit notes" }, "sources": [ { + "id": "statista", "name": "Statista", "url": "https://www.statista.com/", "accessed": "2026-01", "notes": "User growth and adoption data" }, { + "id": "asymco", "name": "Asymco", "url": "http://www.asymco.com/", "accessed": "2026-01", "notes": "Historical technology adoption curves" }, { + "id": "epoch", "name": "Epoch AI", "url": "https://epochai.org/", "accessed": "2026-01", "notes": "AI adoption metrics" + }, + { + "id": "source_review_needed", + "name": "Source review needed", + "url": "https://github.com/mschwar/plots", + "accessed": "2026-04", + "notes": "Projection retained for chart continuity but flagged for source review" } ], - "transformations": "Log10(days) for y-axis. Midpoints used for date ranges. Pre-1990 estimates approximate.", + "transformations": "Log10(days) for y-axis. Metric types and comparability levels control marker shape. 
Trend line is a visual guide, not a causal model.", "created": "2026-01", + "last_updated": "2026-04-24", "author": "mschwar" } diff --git a/adoption-timeline/data/tech_adoption.csv b/adoption-timeline/data/tech_adoption.csv index 00d72db..a9ac4c5 100644 --- a/adoption-timeline/data/tech_adoption.csv +++ b/adoption-timeline/data/tech_adoption.csv @@ -1,25 +1,25 @@ -Year,Event,Category,Days_to_Adoption,Impact -1957,FORTRAN Compiler (IBM),Software/Compiler,3650,High -1969,ARPANET launch,Internet/Web,3650,High -1971,Intel 4004 Microprocessor,Hardware,1825,Medium -1975,Microsoft BASIC / Altair 8800,Software/Compiler,1825,Medium -1981,IBM PC release,Hardware,1460,High -1984,Apple Macintosh (GUI),Hardware,1825,High -1989,World Wide Web proposed (Berners-Lee),Internet/Web,1460,Transformative -1995,Windows 95 + Netscape boom,Internet/Web,730,High -1998,Google Search public,Internet/Web,1095,High -2004,Facebook launch,Social/Apps,365,Transformative -2006,AWS public cloud launch,Cloud/Infrastructure,1095,Transformative -2007,iPhone (1st gen) + App Store (2008),Mobile,730,Transformative -2008,Android Market / App ecosystem,Mobile,365,High -2010,Instagram launch,Social/Apps,270,High -2012,Hadoop / Big Data maturity in cloud,Cloud/Infrastructure,730,Medium -2016,TikTok global launch,Social/Apps,180,High -2022,ChatGPT public launch,AI/Agentic,60,Transformative -2023,Grok (xAI) public on X,AI/Agentic,180,High -2024,o1 / Claude 3.5 / Gemini 1.5 reasoning models,AI/Agentic,90,High -2025,Agentic AI tools scale (Devin-like),AI/Agentic,21,Transformative -2026,GPT-5.4 (OpenAI) public release,AI/Agentic,21,Transformative -2026,Gemini 3.1 Pro public preview (Google),AI/Agentic,30,High -2026,Claude Opus 4.7 (Anthropic) — released Apr 16 2026,AI/Agentic,21,High -2026,Projected omni-modal / recursive agents,AI/Agentic,14,Speculative Transformative +Year,Event,Category,Days_to_Adoption,Impact,adoption_metric_type,comparability_level,source_id,confidence,comparability_notes,notes +1957,FORTRAN Compiler (IBM),Software/Compiler,3650,High,developers,rough_analogy,asymco,low,Developer adoption is not equivalent to consumer users,Proxy for early software ecosystem scale. +1969,ARPANET launch,Internet/Web,3650,High,organizations,rough_analogy,asymco,low,Research-network node growth is not equivalent to consumer users,Proxy for institutional network adoption. +1971,Intel 4004 Microprocessor,Hardware,1825,Medium,devices,rough_analogy,asymco,low,Chip ecosystem shipment scale is not equivalent to users,Proxy for hardware diffusion. +1975,Microsoft BASIC / Altair 8800,Software/Compiler,1825,Medium,developers,rough_analogy,asymco,low,Hobbyist/developer adoption differs from users,Proxy for personal computing developer ecosystem. +1981,IBM PC release,Hardware,1460,High,devices,rough_analogy,asymco,medium,Device sales are not user accounts,Proxy for PC-era adoption. +1984,Apple Macintosh (GUI),Hardware,1825,High,devices,rough_analogy,asymco,medium,Device sales are not user accounts,Proxy for GUI adoption. +1989,World Wide Web proposed (Berners-Lee),Internet/Web,1460,Transformative,users,rough_analogy,statista,medium,Early web adoption estimates are approximate,Proxy for web user scale. +1995,Windows 95 + Netscape boom,Internet/Web,730,High,users,comparable_proxy,statista,medium,Consumer software/web use closer to user-count metric,Approximate mass-market scale. +1998,Google Search public,Internet/Web,1095,High,users,comparable_proxy,statista,medium,Search users approximate consumer reach,Approximate user scale. 
+2004,Facebook launch,Social/Apps,365,Transformative,accounts,comparable_proxy,statista,medium,Accounts are closer to users but can include duplicates,Social account growth proxy. +2006,AWS public cloud launch,Cloud/Infrastructure,1095,Transformative,organizations,rough_analogy,asymco,medium,Cloud organization adoption differs from consumer users,Infrastructure adoption proxy. +2007,iPhone (1st gen) + App Store (2008),Mobile,730,Transformative,devices,comparable_proxy,statista,medium,Device sales approximate users but not identical,Mobile ecosystem scale proxy. +2008,Android Market / App ecosystem,Mobile,365,High,devices,comparable_proxy,statista,medium,Install base and accounts differ,Mobile ecosystem scale proxy. +2010,Instagram launch,Social/Apps,270,High,accounts,comparable_proxy,statista,medium,Accounts approximate users but can duplicate,Social app account scale. +2012,Hadoop / Big Data maturity in cloud,Cloud/Infrastructure,730,Medium,organizations,rough_analogy,asymco,low,Enterprise adoption not consumer user adoption,Proxy for data infrastructure adoption. +2016,TikTok global launch,Social/Apps,180,High,accounts,comparable_proxy,statista,medium,Accounts approximate users but can duplicate,Consumer app scale. +2022,ChatGPT public launch,AI/Agentic,60,Transformative,users,strict_proxy,epoch,high,Public user-count reporting is closest to the metric,Consumer AI adoption proxy. +2023,Grok (xAI) public on X,AI/Agentic,180,High,accounts,comparable_proxy,epoch,low,Platform exposure differs from direct active users,AI product distribution proxy. +2024,o1 / Claude 3.5 / Gemini 1.5 reasoning models,AI/Agentic,90,High,accounts,rough_analogy,epoch,low,Multiple products and access models combined,Frontier AI adoption proxy. +2025,Agentic AI tools scale (Devin-like),AI/Agentic,21,Transformative,organizations,projection,source_review_needed,low,Projected organization/tool adoption is not consumer user adoption,Projection pending source review. +2026,GPT-5.4 (OpenAI) public release,AI/Agentic,21,Transformative,users,projection,source_review_needed,speculative,Projected future user adoption,Speculative projection. +2026,Gemini 3.1 Pro public preview (Google),AI/Agentic,30,High,users,projection,source_review_needed,speculative,Projected future preview adoption,Speculative projection. +2026,Claude Opus 4.7 (Anthropic) released Apr 16 2026,AI/Agentic,21,High,users,projection,source_review_needed,speculative,Projected future user adoption,Speculative projection. +2026,Projected omni-modal / recursive agents,AI/Agentic,14,Speculative Transformative,users,projection,source_review_needed,speculative,Speculative future category,Speculative projection. diff --git a/adoption-timeline/index.html b/adoption-timeline/index.html index 8fe2db1..29e9904 100644 --- a/adoption-timeline/index.html +++ b/adoption-timeline/index.html @@ -3,7 +3,7 @@ - Adoption Timeline (1957–2030+) – Time to Mass Adoption + Adoption Timeline (1957–2030+) – Time-to-Scale Proxy @@ -12,9 +12,9 @@

Adoption Timeline

-

Time to ~50M Users (1957–2030+)

+

Time-to-scale proxies across users, developers, devices, accounts, and organizations (1957–2030+)

- + Adoption Timeline @@ -45,12 +45,15 @@

Key Milestones

About

-

Semi-log timeline showing exponential compression in tech adoption. Log scale turns accelerating (shortening) adoption into a visible downward trend.

+

Semi-log timeline showing compression in adoption proxies. The plot is intentionally framed as a proxy comparison because the metric type changes across eras.

    -
  • 21 milestones from 1957 to 2026
  • -
  • Color by category (Hardware, Web, Mobile, AI)
  • +
  • 24 milestones from 1957 to 2026
  • +
  • Color by category and marker shape by comparability
  • Dashed trend line is a visual guide, not a causal model
+ +

How to Read This Chart

+

Circle markers mark the closer user/account proxies; triangle and square markers mark rough analogies or projections. Use the hover text to inspect each point's metric type and comparability notes.
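As a concrete illustration of that mapping, here is a minimal sketch (assuming pandas is installed) that derives the same marker symbols from adoption-timeline/data/tech_adoption.csv; the authoritative version is the logic added to adoption-timeline/src/adoption_timeline_plotly.py in this patch.

```python
# Sketch only: mirrors the comparability -> marker mapping added in this patch.
import pandas as pd

def marker_for(row: pd.Series) -> str:
    """Choose a marker symbol from comparability_level, with a diamond
    override for speculative impact or 2026+ projections."""
    comparability = str(row.get("comparability_level", "comparable_proxy"))
    if comparability == "rough_analogy":
        symbol = "triangle-up"
    elif comparability == "projection":
        symbol = "square"
    else:
        symbol = "circle"
    if "Speculative" in str(row["Impact"]) or row["Year"] >= 2026:
        symbol = "diamond"
    return symbol

df = pd.read_csv("adoption-timeline/data/tech_adoption.csv")
df["symbol"] = df.apply(marker_for, axis=1)
print(df[["Event", "comparability_level", "symbol"]].to_string(index=False))
```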

diff --git a/adoption-timeline/output/adoption_timeline.png b/adoption-timeline/output/adoption_timeline.png index e422df7..07ea0ad 100644 Binary files a/adoption-timeline/output/adoption_timeline.png and b/adoption-timeline/output/adoption_timeline.png differ diff --git a/adoption-timeline/output/adoption_timeline.svg b/adoption-timeline/output/adoption_timeline.svg index 08cdcb7..c8edf5d 100644 --- a/adoption-timeline/output/adoption_timeline.svg +++ b/adoption-timeline/output/adoption_timeline.svg @@ -1,16 +1,16 @@ - + - 2026-04-24T11:58:34.269700 + 2026-04-24T12:52:04.820548 image/svg+xml - Matplotlib v3.10.8, https://matplotlib.org/ + Matplotlib v3.10.9, https://matplotlib.org/ @@ -21,204 +21,204 @@ - - - +" clip-path="url(#p6c9f736f80)" style="fill: #f0f0f0; opacity: 0.4; stroke: #f0f0f0; stroke-linejoin: miter"/> - +" clip-path="url(#p6c9f736f80)" style="fill: #e8f4fd; opacity: 0.4; stroke: #e8f4fd; stroke-linejoin: miter"/> - +" clip-path="url(#p6c9f736f80)" style="fill: #f3e8fd; opacity: 0.4; stroke: #f3e8fd; stroke-linejoin: miter"/> - +" clip-path="url(#p6c9f736f80)" style="fill: #fde8e8; opacity: 0.4; stroke: #fde8e8; stroke-linejoin: miter"/> - + - + - + - - + - + - + - + - + - + - + - + - + - + - + @@ -441,18 +441,18 @@ L 367.443166 32.837812 - + - + - + - + - + - + @@ -509,18 +509,18 @@ L 536.450958 32.837812 - + - + - + @@ -530,7 +530,7 @@ L 620.954854 32.837812 - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + @@ -896,18 +896,18 @@ L 705.45875 119.033121 - + - + - + @@ -919,137 +919,137 @@ L 705.45875 50.699723 - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + + - + - - - + - + - - @@ -1286,169 +1274,147 @@ z - - - - - - - - - - - - - - + + + + + + + + + + + + + + + - - - - - - - + + - - - + + - - - + + - - - + + - - + + + + + - - + + - - - + + - - - + + - - + + - - - + + - - - + + - - + + - - - + + - - - + + - - - + + - - - + + - - - + + - - - + + - - + + + + + - - + + + + + - - - + + - - - + + - - + + - - - + + - - + - - - + - - + - - + - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - + + diff --git a/adoption-timeline/output/adoption_timeline_highres.png b/adoption-timeline/output/adoption_timeline_highres.png index 794db64..1b7c89a 100644 Binary files a/adoption-timeline/output/adoption_timeline_highres.png and b/adoption-timeline/output/adoption_timeline_highres.png differ diff --git a/adoption-timeline/output/adoption_timeline_interactive.html b/adoption-timeline/output/adoption_timeline_interactive.html index 51e4093..caa284d 100644 --- a/adoption-timeline/output/adoption_timeline_interactive.html +++ b/adoption-timeline/output/adoption_timeline_interactive.html @@ -1,7 +1,7 @@ -
-
+
+
\ No newline at end of file diff --git a/adoption-timeline/src/adoption_timeline.py b/adoption-timeline/src/adoption_timeline.py index cf4dbe1..44cf085 100644 --- a/adoption-timeline/src/adoption_timeline.py +++ b/adoption-timeline/src/adoption_timeline.py @@ -80,7 +80,10 @@ def create_chart(df): for _, row in cat_df.iterrows(): size = IMPACT_SIZES.get(row['Impact'], 50) - marker = 'd' if 'Speculative' in str(row['Impact']) or row['Year'] >= 2026 else 'o' + comparability = str(row.get('comparability_level', 'comparable_proxy')) + marker = '^' if comparability == 'rough_analogy' else ('s' if comparability == 'projection' else 'o') + if 'Speculative' in str(row['Impact']) or row['Year'] >= 2026: + marker = 'd' ax.scatter(row['Year'], row['Days_to_Adoption'], c=color, s=size, marker=marker, edgecolors='white', linewidths=1.5, zorder=3) @@ -118,10 +121,15 @@ def create_chart(df): # Labels ax.set_xlabel('Year', fontsize=11, fontweight='bold') - ax.set_ylabel('Time to ~50M Users', fontsize=11, fontweight='bold') - ax.set_title('Time to Mass Adoption (1957–2026)', + ax.set_ylabel('Time-to-Scale Proxy', fontsize=11, fontweight='bold') + ax.set_title('Time-to-Scale Proxy Across Tech Paradigms (1957–2026)', fontsize=14, fontweight='bold', pad=15) + ax.text(0.01, 0.02, + 'Footnote: metric types differ (users, devices, accounts, organizations, developers). Marker shape flags rough analogies and projections.', + transform=ax.transAxes, fontsize=7, color='#555', + bbox=dict(boxstyle='round,pad=0.3', facecolor='white', edgecolor='#ddd', alpha=0.9)) + plt.tight_layout() return fig diff --git a/adoption-timeline/src/adoption_timeline_plotly.py b/adoption-timeline/src/adoption_timeline_plotly.py index efc2d29..7001563 100644 --- a/adoption-timeline/src/adoption_timeline_plotly.py +++ b/adoption-timeline/src/adoption_timeline_plotly.py @@ -106,8 +106,13 @@ def create_chart(df, export_mode=False): color = CATEGORY_COLORS.get(cat, '#7F8C8D') sizes = [IMPACT_SIZES.get(imp, 10) for imp in cat_df['Impact']] - symbols = ['diamond' if 'Speculative' in str(imp) or yr >= 2026 else 'circle' - for imp, yr in zip(cat_df['Impact'], cat_df['Year'])] + symbols = [] + for _, row in cat_df.iterrows(): + comparability = str(row.get('comparability_level', 'comparable_proxy')) + symbol = 'triangle-up' if comparability == 'rough_analogy' else ('square' if comparability == 'projection' else 'circle') + if 'Speculative' in str(row['Impact']) or row['Year'] >= 2026: + symbol = 'diamond' + symbols.append(symbol) # Rich hover template hover_texts = [ @@ -115,7 +120,10 @@ def create_chart(df, export_mode=False): f"Year: {row['Year']}
" f"Time to 50M: {days_to_readable(row['Days_to_Adoption'])}
" f"Category: {row['Category']}
" - f"Impact: {row['Impact']}" + f"Metric type: {row.get('adoption_metric_type', 'unknown')}
" + f"Comparability: {row.get('comparability_level', 'unknown')}
" + f"Impact: {row['Impact']}
" + f"Notes: {row.get('comparability_notes', '')}" for _, row in cat_df.iterrows() ] @@ -171,8 +179,8 @@ def create_chart(df, export_mode=False): fig.update_layout( title=dict( - text='Time to Mass Adoption
' - 'Days to ~50M Users (1957–2026)', + text='Time-to-Scale Proxy Across Tech Paradigms
' + 'Metric types differ: users, developers, devices, accounts, and organizations', x=0.5, font=dict(size=18) ), @@ -183,7 +191,7 @@ def create_chart(df, export_mode=False): gridcolor='rgba(128,128,128,0.15)' ), yaxis=dict( - title='Days to ~50M Users', + title='Days to scale proxy', type='log', range=[1, 4], gridcolor='rgba(128,128,128,0.2)', @@ -210,8 +218,8 @@ def create_chart(df, export_mode=False): # Caption below chart (not inside canvas) if not export_mode: fig.add_annotation( - text="Adoption times compressed from ~10 years (1957) to ~60 days (ChatGPT). " - "Trend line is visual guide, not causal model. " + text="Footnote: adoption proxies are not directly equivalent across paradigms. " + "Marker shapes distinguish rough analogies and projections; trend line is a visual guide, not causal model. " "Sources: Statista, Asymco, Epoch AI.", xref='paper', yref='paper', x=0.5, y=-0.12, diff --git a/ai-benchmark-progress/index.html b/ai-benchmark-progress/index.html index dcc6258..4091cbe 100644 --- a/ai-benchmark-progress/index.html +++ b/ai-benchmark-progress/index.html @@ -14,14 +14,14 @@

AI Benchmark Progress

AI Scores vs Human Baselines Across Key Benchmarks (2019–2026)

- - AI Benchmark Progress + + AI Benchmark Progress @@ -39,6 +39,9 @@

About

  • Color by capability category
  • Size by impact level (Transformative > High > Medium)
  • + +

    How to Read This Chart

    +

    Benchmark percentages are easier to compare within a benchmark family than across families. Human baselines are reference thresholds, not guarantees of equivalent real-world capability.
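For readers who want to make the within-family comparison concrete, a rough sketch follows; the column names benchmark, year, ai_score, and human_baseline are assumptions for illustration only, since the schema of ai-benchmark-progress/data/benchmark_data.csv is not shown in this patch.

```python
# Illustrative only: the column names below (benchmark, year, ai_score,
# human_baseline) are assumed, not taken from the real benchmark_data.csv schema.
import pandas as pd

df = pd.read_csv("ai-benchmark-progress/data/benchmark_data.csv")

# Compare within a benchmark family, as the note above recommends,
# and report the latest score's gap to the human reference threshold.
latest = df.sort_values("year").groupby("benchmark").tail(1).copy()
latest["gap_to_human"] = latest["ai_score"] - latest["human_baseline"]
print(latest[["benchmark", "year", "ai_score", "human_baseline", "gap_to_human"]]
      .to_string(index=False))
```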

    diff --git a/ai-benchmark-progress/output/benchmark_progress.svg b/ai-benchmark-progress/output/benchmark_progress.svg index d2783d6..0cd220c 100644 --- a/ai-benchmark-progress/output/benchmark_progress.svg +++ b/ai-benchmark-progress/output/benchmark_progress.svg @@ -6,11 +6,11 @@ - 2026-04-24T11:58:53.906326 + 2026-04-24T12:53:40.757648 image/svg+xml - Matplotlib v3.10.8, https://matplotlib.org/ + Matplotlib v3.10.9, https://matplotlib.org/ @@ -42,16 +42,16 @@ z +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - - + @@ -159,11 +159,11 @@ z +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -180,11 +180,11 @@ L 102.129051 72.28 +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -201,11 +201,11 @@ L 144.541783 72.28 +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -222,11 +222,11 @@ L 186.954515 72.28 +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -277,11 +277,11 @@ z +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -319,11 +319,11 @@ z +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -467,16 +467,16 @@ z +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - - + @@ -490,11 +490,11 @@ L -3.5 0 +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -509,11 +509,11 @@ L 326.916529 343.502857 +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -528,11 +528,11 @@ L 326.916529 279.685714 +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -579,11 +579,11 @@ z +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -639,11 +639,11 @@ z +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -822,7 +822,7 @@ z +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke-dasharray: 5.55,2.4; stroke-dashoffset: 0; stroke: #ef4444; stroke-opacity: 0.7; stroke-width: 1.5"/> +" clip-path="url(#p247cdeb1e9)" style="fill: none; stroke: #60a5fa; stroke-width: 2.5; stroke-linecap: square"/> - - - - - - - - - + + + + + + + + @@ -1259,11 +1259,11 @@ z +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1291,11 +1291,11 @@ z +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1314,11 +1314,11 @@ L 409.151992 72.28 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1337,11 +1337,11 
@@ L 440.961541 72.28 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1360,11 +1360,11 @@ L 472.77109 72.28 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1383,11 +1383,11 @@ L 504.580639 72.28 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1406,11 +1406,11 @@ L 536.390187 72.28 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1429,11 +1429,11 @@ L 568.199736 72.28 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1452,11 +1452,11 @@ L 600.009285 72.28 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1486,11 +1486,11 @@ L 631.818834 72.28 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1498,11 +1498,11 @@ L 644.542653 407.32 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1510,11 +1510,11 @@ L 644.542653 343.502857 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1522,11 +1522,11 @@ L 644.542653 279.685714 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1534,11 +1534,11 @@ L 644.542653 215.868571 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1546,11 +1546,11 @@ L 644.542653 152.051429 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1558,7 +1558,7 @@ L 644.542653 88.234286 +" clip-path="url(#p6c1e423146)" style="fill: none; stroke-dasharray: 5.55,2.4; stroke-dashoffset: 0; stroke: #ef4444; stroke-opacity: 0.7; stroke-width: 1.5"/> +" clip-path="url(#p6c1e423146)" style="fill: none; stroke: #34d399; stroke-width: 2.5; stroke-linecap: square"/> - - - - - - - + + + + + + @@ -1768,11 +1768,11 @@ z +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1791,11 +1791,11 @@ L 694.968568 72.28 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1814,11 +1814,11 @@ L 745.863846 72.28 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1837,11 +1837,11 @@ L 796.759124 72.28 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1860,11 +1860,11 @@ L 847.654402 72.28 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1883,11 +1883,11 @@ L 898.54968 72.28 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; 
stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1917,11 +1917,11 @@ L 949.444958 72.28 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1929,11 +1929,11 @@ L 962.168777 407.32 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1941,11 +1941,11 @@ L 962.168777 343.502857 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1953,11 +1953,11 @@ L 962.168777 279.685714 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1965,11 +1965,11 @@ L 962.168777 215.868571 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1977,11 +1977,11 @@ L 962.168777 152.051429 +" clip-path="url(#p564d268577)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -1989,7 +1989,7 @@ L 962.168777 88.234286 +" clip-path="url(#p564d268577)" style="fill: none; stroke-dasharray: 5.55,2.4; stroke-dashoffset: 0; stroke: #ef4444; stroke-opacity: 0.7; stroke-width: 1.5"/> +" clip-path="url(#p564d268577)" style="fill: none; stroke: #9ca3af; stroke-width: 2.5; stroke-linecap: square"/> - - - - - - + + + + + @@ -2191,11 +2191,11 @@ z +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2212,11 +2212,11 @@ L 1012.594692 72.28 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2233,11 +2233,11 @@ L 1055.007423 72.28 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2254,11 +2254,11 @@ L 1097.420155 72.28 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2275,11 +2275,11 @@ L 1139.832887 72.28 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2296,11 +2296,11 @@ L 1182.245618 72.28 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2317,11 +2317,11 @@ L 1224.65835 72.28 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2349,11 +2349,11 @@ L 1267.071082 72.28 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2361,11 +2361,11 @@ L 1279.794901 407.32 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2373,11 +2373,11 @@ L 1279.794901 343.502857 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2385,11 +2385,11 @@ L 1279.794901 279.685714 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ 
-2397,11 +2397,11 @@ L 1279.794901 215.868571 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2409,11 +2409,11 @@ L 1279.794901 152.051429 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #374151; stroke-opacity: 0.15; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -2421,7 +2421,7 @@ L 1279.794901 88.234286 +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke-dasharray: 5.55,2.4; stroke-dashoffset: 0; stroke: #ef4444; stroke-opacity: 0.7; stroke-width: 1.5"/> +" clip-path="url(#pd95a1f13f7)" style="fill: none; stroke: #a78bfa; stroke-width: 2.5; stroke-linecap: square"/> - - - - - - - - - + + + + + + + + @@ -3275,16 +3275,16 @@ z - + - + - + - + diff --git a/ai-benchmark-progress/output/benchmark_progress_highres.png b/ai-benchmark-progress/output/benchmark_progress_highres.png index 13bb7dd..0d8d6e0 100644 Binary files a/ai-benchmark-progress/output/benchmark_progress_highres.png and b/ai-benchmark-progress/output/benchmark_progress_highres.png differ diff --git a/ai-benchmark-progress/output/benchmark_progress_interactive.html b/ai-benchmark-progress/output/benchmark_progress_interactive.html index 1fc8565..a369f15 100644 --- a/ai-benchmark-progress/output/benchmark_progress_interactive.html +++ b/ai-benchmark-progress/output/benchmark_progress_interactive.html @@ -1,7 +1,7 @@ -
    -
    +
    +
    \ No newline at end of file diff --git a/ai-compute-timeline/data/ai_milestones.csv b/ai-compute-timeline/data/ai_milestones.csv index 124b1b6..8e2075f 100644 --- a/ai-compute-timeline/data/ai_milestones.csv +++ b/ai-compute-timeline/data/ai_milestones.csv @@ -1,57 +1,57 @@ -Year,Event,Category,Compute_FLOPs,Parameters,Impact -1904,Vacuum Tube (Fleming) - Enabled electronic switching for computing,Hardware,N/A,N/A,High -1936,Turing Machine (Alan Turing) - Defined computability limits,Theoretical Foundation,N/A,N/A,Transformative -1937,Shannon's Thesis - Boolean logic to electrical circuits (birth of the bit),Theoretical Foundation,N/A,N/A,Transformative -1945,ENIAC - First programmable electronic general-purpose computer,Hardware,Proxy: ~5e2 ops/sec,N/A,High -1947,Transistor (Bell Labs) - Solid-state replacement for tubes,Hardware,N/A,N/A,Transformative -1950,Turing's "Computing Machinery and Intelligence" - Turing Test proposed,Theoretical Foundation,N/A,N/A,High -1956,Dartmouth Conference - "Artificial Intelligence" term coined; birth of AI field,AI Milestone,N/A,N/A,Transformative -1957,Perceptron (Rosenblatt) - Early neural network hardware,Model Release,Proxy: low,N/A,High -1958,Integrated Circuit (Kilby) - Multiple transistors on one chip,Hardware,N/A,N/A,Transformative -1959,Arthur Samuel coins "Machine Learning"; self-improving checkers program,AI Milestone,Proxy: low,N/A,Medium -1965,Moore's Law stated (Gordon Moore) - Transistor doubling prediction,Hardware,N/A,N/A,Transformative -1969,ARPANET - Precursor to Internet; Shakey the Robot (first mobile planner),Infrastructure;Robotics,Proxy: low,N/A,High -1971,Intel 4004 - First commercial microprocessor,Hardware,N/A,N/A,High -1973,First AI Winter begins (~1973-1980) - Funding cuts after overhype,AI Winter,N/A,N/A,Medium -1986,Backpropagation revival (Rumelhart/Hinton) - Enabled multi-layer neural nets,Model/Architecture,N/A,N/A,High -1987,Second AI Winter (~1987-1993) - Expert systems collapse,AI Winter,N/A,N/A,Medium -1997,Deep Blue (IBM) - Brute-force chess victory over Kasparov,AI Milestone,Proxy: ~1e10 ops/sec,N/A,High -2000,Honda ASIMO - Advanced humanoid walking (rule-based),Robotics,N/A,N/A,Medium -2002,Roomba (iRobot) - First mass-market autonomous home robot,Robotics,N/A,N/A,Medium -2006,Hinton et al. 
rebrand "Deep Learning"; deep belief networks,Model/Architecture,N/A,N/A,High -2006,AWS launch - Cloud computing infrastructure for scale,Infrastructure,N/A,N/A,Transformative -2007,NVIDIA CUDA - GPUs enabled for general-purpose parallel computing,Hardware,N/A,N/A,Transformative -2009,ImageNet dataset (Fei-Fei Li) - 14M+ labeled images fuel vision models,Dataset,N/A,N/A,Transformative -2010,DeepMind founded - Goal to solve intelligence,AI Milestone,N/A,N/A,High -2011,IBM Watson wins Jeopardy!; Siri launches (Apple),AI Milestone,N/A,N/A,High -2012,AlexNet - CNN crushes ImageNet; deep learning breakthrough,Model Release,6.00E+17,60M,Transformative -2012,Google Cat Paper - Unsupervised learning from YouTube videos,Model Release,~1e16,N/A,High -2013,Word2Vec (Google) - Semantic word embeddings,Model/Architecture,N/A,N/A,High -2013,DeepMind DQN - Atari games from pixels (reinforcement learning),Model Release,~1e15,N/A,High -2014,GANs invented (Ian Goodfellow) - Generative adversarial networks,Model/Architecture,N/A,N/A,Transformative -2015,OpenAI founded; TensorFlow open-sourced; ResNet (very deep nets),AI Milestone;Model/Architecture,N/A,N/A,Transformative -2016,AlphaGo (DeepMind) - Defeats Lee Sedol in Go,Model Release,~1e18,N/A,Transformative -2017,Transformers paper ("Attention Is All You Need"),Model/Architecture,N/A,N/A,Transformative -2018,GPT-1 (OpenAI); BERT (Google) - Transformer NLP advances,Model Release,~1e17-1e18,110M-340M,High -2019,GPT-2 (OpenAI) - Emergent scaling behaviors,Model Release,~1e19,1.5B,High -2020,GPT-3 (OpenAI) - 175B params; few-shot learning,Model Release,3.14E+23,175B,Transformative -2020,AlphaFold (DeepMind) - Protein structure prediction breakthrough,Model Release,~1e20+,N/A,Transformative -2021,DALL-E (OpenAI) - Text-to-image generation; Codex (early coding),Generative,High compute,N/A,High -2022,ChatGPT (based on GPT-3.5) - Mass adoption of conversational AI,Model Release,~few e23,N/A,Transformative -2022,Stable Diffusion - Open-source diffusion model revolution,Generative,~1e20+,1B+,Transformative -2023,GPT-4 (OpenAI) - Multimodal; major capability jump,Model Release,~2e25,N/A (est. trillions),Transformative -2023,Gemini 1.0 (Google); Grok-1 (xAI); Llama 2 (Meta open weights),Model Release,1e24-1e25 range,N/A-70B,High -2024,Sora (OpenAI video gen); Claude 3 family (Anthropic); o1 reasoning model,Generative;Reasoning/Agentic,2e25-5e25,N/A- hundreds B,Transformative -2024,Gemini 1.5/2.0; Llama 3.1 405B (Meta); EU AI Act,Model Release;Regulation,4e25 (Llama 3.1 est.),405B,High -2025,Grok-3 (xAI) - Frontier reasoning model,Model Release,>1e26 (est.),N/A,Transformative -2025,o3 / Claude 4 family advances; Gemini 2.5/3; agentic tools scale (Devin etc.),Reasoning/Agentic,1e26-5e26 range,N/A,High -2025,Quantum error-correction milestones (e.g. Willow-like chips),Quantum/Future Speculative,N/A,N/A,Medium -2025,LLaMA 4 Scout (Meta) - 109B MoE open-weights model,Model Release,4e25 (est.),109B total / 17B active,High -2026,Gemini 3.1 Pro (Google) - 77.1% ARC-AGI-2; 2× prior gen; 1M context,Model Release,Speculative 1e26-5e26,N/A (est. ~2T),Transformative -2026,GPT-5.4 (OpenAI) - Released Mar 5 2026; native multimodal; 1M context,Model Release,Speculative 1e26,N/A (est. ~600B),High -2026,Claude Opus 4.7 (Anthropic) - 87.6% SWE-bench Verified; released Apr 16 2026,Reasoning/Agentic,Speculative 1e26,N/A (est. ~800B),Transformative -2026,Claude Mythos Preview (Anthropic) - Invite-only; 73% expert cyber tasks; inflection point,Reasoning/Agentic,Speculative 2e26+,N/A (est. 
~3T),Speculative Transformative -2026,GPT Spud / GPT-5.5 (OpenAI) - Pretraining complete Mar 24 2026; in safety eval; ~5-6T MoE,Model Release,Speculative 5e27+,N/A (est. ~5T),Speculative Transformative -2026,Agentic AI & recursive self-improvement loops emerge at scale,Reasoning/Agentic,Speculative 1e27+,N/A,Speculative High -2026,Tesla Optimus production pivot; world models (Genie-like),Robotics;Generative,Speculative,N/A,Speculative High -2026,Omni-modal single models; recursive learning + infinite context,Speculative,Speculative 1e28+,N/A,Speculative Transformative +year,event,category,value_numeric,value_low,value_high,value_unit,estimate_status,source_id,confidence,display_label,notes +1904,Vacuum Tube (Fleming) - enabled electronic switching for computing,Hardware,,,,none,observed,owid,medium,Vacuum Tube,Historical enabling technology; not a training compute point. +1936,Turing Machine (Alan Turing) - defined computability limits,Theoretical Foundation,,,,none,observed,owid,high,Turing Machine,Theoretical milestone; not a training compute point. +1937,Shannon's Thesis - Boolean logic to electrical circuits,Theoretical Foundation,,,,none,observed,owid,high,Shannon,Theoretical milestone; not a training compute point. +1945,ENIAC - first programmable electronic general-purpose computer,Hardware,500,500,500,ops/sec proxy,proxy,kurzweil,medium,ENIAC,Ops/sec proxy shown separately from training FLOPs. +1947,Transistor (Bell Labs),Hardware,,,,none,observed,owid,high,Transistor,Hardware milestone; not a training compute point. +1950,Turing's Computing Machinery and Intelligence,Theoretical Foundation,,,,none,observed,owid,high,Turing Test,Conceptual milestone; not a training compute point. +1956,Dartmouth Conference - artificial intelligence term coined,AI Milestone,,,,none,observed,owid,high,AI Born,Field milestone; not a training compute point. +1957,Perceptron (Rosenblatt),Model Release,1000000,100000,10000000,ops/sec proxy,proxy,kurzweil,low,Perceptron,Proxy for early hardware capability; not training FLOPs. +1958,Integrated Circuit (Kilby),Hardware,,,,none,observed,owid,high,IC,Hardware milestone; not a training compute point. +1959,Arthur Samuel machine learning checkers program,AI Milestone,1000000,100000,10000000,ops/sec proxy,proxy,kurzweil,low,Samuel ML,Proxy for early compute capability. +1965,Moore's Law stated,Hardware,,,,none,observed,owid,high,Moore's Law,Hardware scaling principle; not a training compute point. +1969,ARPANET and Shakey the Robot,Infrastructure;Robotics,1000000,100000,10000000,ops/sec proxy,proxy,kurzweil,low,ARPANET,Proxy for early robotics and network-era compute capability. +1971,Intel 4004 - first commercial microprocessor,Hardware,,,,none,observed,owid,high,Intel 4004,Hardware milestone; not a training compute point. +1973,First AI Winter begins,AI Winter,,,,none,observed,owid,medium,AI Winter I,Funding and expectation milestone. +1986,Backpropagation revival,Model/Architecture,,,,none,observed,owid,high,Backprop,Architecture milestone; not a training compute point. +1987,Second AI Winter begins,AI Winter,,,,none,observed,owid,medium,AI Winter II,Funding and expectation milestone. +1997,Deep Blue defeats Kasparov,AI Milestone,10000000000,10000000000,10000000000,ops/sec proxy,proxy,kurzweil,medium,Deep Blue,Ops/sec proxy; not training FLOPs. +2000,Honda ASIMO,Robotics,,,,none,observed,owid,medium,ASIMO,Robotics milestone. +2002,Roomba mass-market autonomous robot,Robotics,,,,none,observed,owid,medium,Roomba,Robotics adoption milestone. 
+2006,Deep learning rebrand and deep belief networks,Model/Architecture,,,,none,observed,owid,high,Deep Learning,Architecture milestone. +2006,AWS launch,Infrastructure,,,,none,observed,owid,high,AWS,Infrastructure milestone. +2007,NVIDIA CUDA,Hardware,,,,none,observed,owid,high,CUDA,Hardware/software milestone. +2009,ImageNet dataset,Dataset,,,,none,observed,owid,high,ImageNet,Dataset milestone. +2010,DeepMind founded,AI Milestone,,,,none,observed,owid,high,DeepMind,Institutional milestone. +2011,IBM Watson wins Jeopardy and Siri launches,AI Milestone,,,,none,observed,owid,high,Watson/Siri,AI product milestone. +2012,AlexNet,Model Release,6.00E+17,6.00E+17,6.00E+17,training FLOPs,observed,epoch,high,AlexNet,ImageNet breakthrough. +2012,Google Cat Paper,Model Release,1.00E+16,8.00E+15,1.20E+16,training FLOPs,estimated,epoch,medium,Google Cat,Unsupervised YouTube-scale model estimate. +2013,Word2Vec,Model/Architecture,,,,none,observed,owid,high,Word2Vec,Architecture milestone. +2013,DeepMind DQN,Model Release,1.00E+15,8.00E+14,1.20E+15,training FLOPs,estimated,epoch,medium,DQN,Atari reinforcement learning estimate. +2014,GANs invented,Model/Architecture,,,,none,observed,owid,high,GANs,Architecture milestone. +2015,OpenAI founded TensorFlow open-sourced and ResNet introduced,AI Milestone;Model/Architecture,,,,none,observed,owid,high,OpenAI/ResNet,Institutional and architecture milestone. +2016,AlphaGo defeats Lee Sedol,Model Release,1.00E+18,8.00E+17,1.20E+18,training FLOPs,estimated,epoch,medium,AlphaGo,Training compute estimate. +2017,Transformers paper,Model/Architecture,,,,none,observed,owid,high,Transformers,Architecture milestone. +2018,GPT-1 and BERT,Model Release,3.16E+17,1.00E+17,1.00E+18,training FLOPs,estimated,epoch,medium,GPT-1/BERT,Range midpoint for early transformer models. +2019,GPT-2,Model Release,1.00E+19,8.00E+18,1.20E+19,training FLOPs,estimated,epoch,medium,GPT-2,Estimate. +2020,GPT-3,Model Release,3.14E+23,3.14E+23,3.14E+23,training FLOPs,observed,epoch,high,GPT-3,Published/commonly cited training compute. +2020,AlphaFold,Model Release,1.00E+20,8.00E+19,1.20E+20,training FLOPs,estimated,epoch,medium,AlphaFold,Estimate. +2021,DALL-E and Codex,Generative,1.00E+21,5.00E+20,2.00E+21,training FLOPs,estimated,epoch,low,DALL-E/Codex,Estimate-heavy frontier generation milestone. +2022,ChatGPT based on GPT-3.5,Model Release,3.00E+23,1.00E+23,5.00E+23,training FLOPs,estimated,epoch,medium,ChatGPT,Uses GPT-3.5-era estimate. +2022,Stable Diffusion,Generative,1.00E+20,8.00E+19,1.20E+20,training FLOPs,estimated,epoch,medium,Stable Diffusion,Estimate. +2023,GPT-4,Model Release,2.00E+25,1.00E+25,3.00E+25,training FLOPs,estimated,epoch,medium,GPT-4,Frontier estimate. +2023,Gemini 1.0 Grok-1 and Llama 2,Model Release,3.16E+24,1.00E+24,1.00E+25,training FLOPs,estimated,epoch,low,Gemini/Llama2,Range midpoint across heterogeneous models. +2024,Sora Claude 3 family and o1 reasoning model,Generative;Reasoning/Agentic,3.16E+25,2.00E+25,5.00E+25,training FLOPs,estimated,epoch,low,Sora/Claude3/o1,Cluster estimate. +2024,Gemini 1.5/2.0 and Llama 3.1 405B,Model Release;Regulation,4.00E+25,3.00E+25,5.00E+25,training FLOPs,estimated,epoch,medium,Llama 3.1,Estimate for Llama 3.1 405B and related frontier systems. +2025,Grok-3,Model Release,1.00E+26,1.00E+26,2.00E+26,training FLOPs,projection,source_review_needed,low,Grok-3,Projection pending source review. 
+2025,o3 Claude 4 family Gemini 2.5/3 and agentic tools,Reasoning/Agentic,2.24E+26,1.00E+26,5.00E+26,training FLOPs,projection,source_review_needed,low,o3/Claude4,Projection over heterogeneous frontier systems. +2025,Quantum error-correction milestones,Quantum/Future Speculative,,,,none,speculative,,low,Quantum,Included as future-facing context; not compute-axis data. +2025,LLaMA 4 Scout,Model Release,4.00E+25,3.00E+25,5.00E+25,training FLOPs,estimated,source_review_needed,low,LLaMA 4 Scout,Estimate pending source review. +2026,Gemini 3.1 Pro,Model Release,2.24E+26,1.00E+26,5.00E+26,training FLOPs,speculative,,speculative,Gemini 3.1,Speculative 2026 projection; visually separated from history. +2026,GPT-5.4,Model Release,1.00E+26,8.00E+25,1.20E+26,training FLOPs,speculative,,speculative,GPT-5.4,Speculative 2026 projection; visually separated from history. +2026,Claude Opus 4.7,Reasoning/Agentic,1.00E+26,8.00E+25,1.20E+26,training FLOPs,speculative,,speculative,Claude 4.7,Speculative 2026 projection; visually separated from history. +2026,Claude Mythos Preview,Reasoning/Agentic,2.00E+26,1.50E+26,3.00E+26,training FLOPs,speculative,,speculative,Claude Mythos,Speculative invite-only preview. +2026,GPT Spud / GPT-5.5,Model Release,5.00E+27,3.00E+27,6.00E+27,training FLOPs,speculative,,speculative,GPT Spud,Speculative pre-release projection. +2026,Agentic AI and recursive self-improvement loops emerge at scale,Reasoning/Agentic,1.00E+27,5.00E+26,2.00E+27,training FLOPs,speculative,,speculative,Agentic AI,Speculative systems-level projection. +2026,Tesla Optimus production pivot and world models,Robotics;Generative,,,,none,speculative,,speculative,Optimus,Speculative robotics context; not compute-axis data. +2026,Omni-modal single models with recursive learning,Speculative,1.00E+28,5.00E+27,2.00E+28,training FLOPs,speculative,,speculative,Omni-modal,Speculative upper frontier projection. diff --git a/ai-compute-timeline/data/meta.json b/ai-compute-timeline/data/meta.json index 428574a..29aeb00 100644 --- a/ai-compute-timeline/data/meta.json +++ b/ai-compute-timeline/data/meta.json @@ -1,35 +1,59 @@ { "title": "AI Compute Timeline", - "description": "Training FLOPs milestones for AI history, from vacuum tubes to frontier models (1900-2026).", + "description": "Training FLOPs milestones for AI history, from early electronic computing to frontier models. 
Observed, estimated, proxy, speculative, and projection rows are structurally separated.", "fields": { - "Year": "Year of milestone", - "Event": "Name of model/system", - "Category": "Era category (Early Compute, Neural Nets, Deep Learning, etc.)", - "Compute_FLOPs": "Estimated floating-point operations for training or compute", - "Parameters": "Model parameter count where applicable", - "Impact": "Historical impact level (Transformative, High, Medium, Speculative)" + "year": "Year of milestone", + "event": "Milestone or model/system name", + "category": "Era or capability category", + "value_numeric": "Numeric value plotted when applicable", + "value_low": "Lower uncertainty bound when applicable", + "value_high": "Upper uncertainty bound when applicable", + "value_unit": "Unit for value_numeric, such as training FLOPs or ops/sec proxy", + "estimate_status": "One of observed, estimated, proxy, speculative, projection", + "source_id": "Source identifier for auditable rows", + "confidence": "Qualitative confidence for the row", + "display_label": "Short label used in chart annotations", + "notes": "Audit notes and caveats" }, + "estimate_status_values": [ + "observed", + "estimated", + "proxy", + "speculative", + "projection" + ], "sources": [ { + "id": "epoch", "name": "Epoch AI", "url": "https://epochai.org/", "accessed": "2026-01", "notes": "Primary source for training compute estimates" }, { + "id": "owid", "name": "Our World in Data", "url": "https://ourworldindata.org/artificial-intelligence", "accessed": "2026-01", "notes": "Historical AI milestones" }, { + "id": "kurzweil", "name": "Kurzweil (2005)", "url": "https://www.singularity.com/", "accessed": "2026-01", - "notes": "Early compute estimates, price-performance trends" + "notes": "Early compute and price-performance trend references" + }, + { + "id": "source_review_needed", + "name": "Source review needed", + "url": "https://github.com/mschwar/plots", + "accessed": "2026-04", + "notes": "Estimate or projection retained for chart continuity but flagged for source review" } ], - "transformations": "Log10(FLOPs) for y-axis. Pre-2010 estimates are approximate (dashed line region).", + "transformations": "Log10 values for y-axis. Ops/sec proxies and no-unit milestones are visually separated from training FLOPs. Speculative and projection rows are marker-coded and can be hidden in the Plotly chart.", "created": "2026-01", + "last_updated": "2026-04-24", "author": "mschwar" } diff --git a/ai-compute-timeline/index.html b/ai-compute-timeline/index.html index b9e2b1f..21ed012 100644 --- a/ai-compute-timeline/index.html +++ b/ai-compute-timeline/index.html @@ -12,9 +12,9 @@

    AI Compute Timeline

    -

    Training FLOPs for Key AI Milestones (1900–2026)

    +

    Training FLOPs, proxies, estimates, and speculative projections (1900–2026)

    - + AI Compute Timeline @@ -32,13 +32,15 @@

    AI Compute Timeline

    About

    -

    Semi-log timeline showing exponential growth in AI training compute from early vacuum tubes to frontier models.

    +

    Semi-log timeline showing growth in AI training compute from early vacuum tubes to frontier models, while keeping ops/sec proxies and speculative points visually distinct.

-      • 50 milestones from 1904 to 2026
-      • Color by category (Hardware, Model, Infrastructure)
-      • Size by impact (Transformative > High > Medium)
-      • Dashed pre-2010 = proxy values; solid post-2010 = actual training FLOPs
+      • Observed, estimated, proxy, projection, and speculative rows are separate data states
+      • Static chart is split into 1900–2011 foundations and 2012–2026 frontier scaling
+      • Plotly hover text carries dense labels and source caveats
+

    How to Read This Chart

    +

    Circle markers are observed or estimated history. Squares are proxies. Triangles and diamonds are projections or speculative rows. Use the legend to hide or show speculative points in the interactive chart.
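
    As a quick check of the estimate-status separation described above, here is a minimal pandas sketch (not part of this change; it assumes the repository root as the working directory and uses the column names introduced in the new ai_milestones.csv schema):

    import pandas as pd

    # Load the reworked milestone table added earlier in this change.
    df = pd.read_csv("ai-compute-timeline/data/ai_milestones.csv")
    for col in ("value_numeric", "value_low", "value_high"):
        df[col] = pd.to_numeric(df[col], errors="coerce")

    # Keep only rows that truly sit on the training-FLOPs axis; ops/sec proxies
    # and no-unit milestones stay out of this view.
    flops = df[(df["value_unit"] == "training FLOPs") & df["value_numeric"].notna()]

    # observed, estimated, projection, and speculative remain separate data states.
    print(flops.groupby("estimate_status").size())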

diff --git a/ai-compute-timeline/output/ai_compute_timeline.png b/ai-compute-timeline/output/ai_compute_timeline.png
index 821fd8c..72b8077 100644
Binary files a/ai-compute-timeline/output/ai_compute_timeline.png and b/ai-compute-timeline/output/ai_compute_timeline.png differ
diff --git a/ai-compute-timeline/output/ai_compute_timeline.svg b/ai-compute-timeline/output/ai_compute_timeline.svg
index bdedb6f..8c48e4e 100644
--- a/ai-compute-timeline/output/ai_compute_timeline.svg
+++ b/ai-compute-timeline/output/ai_compute_timeline.svg
@@ -1,16 +1,16 @@
- 2026-04-24T11:58:32.329547
+ 2026-04-24T12:52:03.351292
- Matplotlib v3.10.8, https://matplotlib.org/
+ Matplotlib v3.10.9, https://matplotlib.org/
[Regenerated SVG for the rebuilt two-panel chart; machine-generated path and glyph markup omitted.]
diff --git a/ai-compute-timeline/output/ai_compute_timeline_highres.png b/ai-compute-timeline/output/ai_compute_timeline_highres.png
index 7d94656..41077ea 100644
Binary files a/ai-compute-timeline/output/ai_compute_timeline_highres.png and b/ai-compute-timeline/output/ai_compute_timeline_highres.png differ
diff --git a/ai-compute-timeline/output/ai_compute_timeline_interactive.html b/ai-compute-timeline/output/ai_compute_timeline_interactive.html
index e210f56..3888580 100644
--- a/ai-compute-timeline/output/ai_compute_timeline_interactive.html
+++ b/ai-compute-timeline/output/ai_compute_timeline_interactive.html
@@ -1,7 +1,7 @@
-
    -
    +
    +
    \ No newline at end of file diff --git a/ai-compute-timeline/src/ai_compute_timeline.py b/ai-compute-timeline/src/ai_compute_timeline.py index 709460e..052f1a3 100644 --- a/ai-compute-timeline/src/ai_compute_timeline.py +++ b/ai-compute-timeline/src/ai_compute_timeline.py @@ -1,497 +1,175 @@ #!/usr/bin/env python3 -""" -History of Compute & Intelligence: Training FLOPs for Key AI Milestones (1900-2026) -Publication-quality semi-log timeline visualization -""" +"""Static AI compute timeline with separated estimate statuses.""" +from __future__ import annotations + +import os + +import matplotlib.pyplot as plt import numpy as np import pandas as pd -import matplotlib.pyplot as plt -import matplotlib.patches as mpatches from matplotlib.lines import Line2D -import os -import re - - -def load_data(): - """Load data from CSV file.""" - script_dir = os.path.dirname(os.path.abspath(__file__)) - csv_path = os.path.join(os.path.dirname(script_dir), 'data', 'ai_milestones.csv') - return pd.read_csv(csv_path) - - -def parse_flops(value): - """Convert FLOPs string to numeric value.""" - if not value or value == 'N/A': - return None - - value = str(value).strip() - - # Handle speculative values - if 'Speculative' in value: - match = re.search(r'(\d+\.?\d*)e(\d+)', value, re.IGNORECASE) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - return 1e27 # Default speculative - - # Handle "High compute" or similar vague terms - if 'High compute' in value or value == 'Speculative': - return 1e21 - - # Handle proxy values - if 'Proxy' in value: - match = re.search(r'(\d+\.?\d*)e(\d+)', value, re.IGNORECASE) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - if 'low' in value.lower(): - return 1e6 - return 1e3 - - # Handle ranges like "1e24-1e25" or "~1e17-1e18" - range_match = re.search(r'(\d+\.?\d*)e(\d+)[-–](\d+\.?\d*)e(\d+)', value, re.IGNORECASE) - if range_match: - low = float(f"{range_match.group(1)}e{range_match.group(2)}") - high = float(f"{range_match.group(3)}e{range_match.group(4)}") - return np.sqrt(low * high) # Geometric mean - - # Handle ">1e26" format - if value.startswith('>'): - match = re.search(r'(\d+\.?\d*)e(\d+)', value, re.IGNORECASE) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - - # Handle "~few e23" format - if 'few' in value.lower(): - match = re.search(r'e(\d+)', value, re.IGNORECASE) - if match: - return 3 * 10**int(match.group(1)) - - # Handle "4e25 (Llama...)" format - match = re.search(r'^~?(\d+\.?\d*)[eE](\d+)', value) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - - # Handle scientific notation like "6.00E+17" or "3.14E+23" - match = re.search(r'(\d+\.?\d*)[eE]\+?(\d+)', value) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - # Handle "~1e20+" format - match = re.search(r'~?(\d+\.?\d*)e(\d+)\+?', value, re.IGNORECASE) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - return None - - -def get_primary_category(cat_str): - """Get the primary category from potentially multiple categories.""" - if not cat_str: - return 'Other' - categories = cat_str.split(';') - return categories[0].strip() - - -def parse_data(df): - """Parse DataFrame into structured format.""" - records = [] - for _, row in df.iterrows(): - year = int(row['Year']) - record = { - 'year': year, - 'event': str(row['Event']), - 'category': str(row['Category']), - 'flops_raw': str(row['Compute_FLOPs']), - 'parameters': str(row['Parameters']) if pd.notna(row['Parameters']) else 'N/A', - 'impact': 
str(row['Impact']) if pd.notna(row['Impact']) else 'Medium' - } - record['flops'] = parse_flops(record['flops_raw']) - record['primary_category'] = get_primary_category(record['category']) - records.append(record) - - return records - - -# Category color mapping -CATEGORY_COLORS = { - 'Hardware': '#E67E22', # Orange - 'Theoretical Foundation': '#7F8C8D', # Gray - 'AI Milestone': '#16A085', # Teal - 'Model Release': '#8E44AD', # Purple - 'Model/Architecture': '#9B59B6', # Lighter purple - 'Dataset': '#27AE60', # Green - 'Robotics': '#E74C3C', # Red - 'AI Winter': '#BDC3C7', # Light gray - 'Infrastructure': '#8B4513', # Brown - 'Generative': '#FF69B4', # Pink - 'Reasoning/Agentic': '#1D8348', # Dark green - 'Quantum/Future Speculative': '#9B59B6', # Purple - 'Speculative': '#9B59B6', # Purple - 'Other': '#3498DB' # Blue +STATUS_STYLES = { + "observed": {"marker": "o", "alpha": 0.9, "label": "Observed"}, + "estimated": {"marker": "o", "alpha": 0.75, "label": "Estimated"}, + "proxy": {"marker": "s", "alpha": 0.55, "label": "Proxy"}, + "projection": {"marker": "^", "alpha": 0.5, "label": "Projection"}, + "speculative": {"marker": "D", "alpha": 0.45, "label": "Speculative"}, } -# Impact to marker size mapping -IMPACT_SIZES = { - 'Transformative': 180, - 'Speculative Transformative': 160, - 'High': 100, - 'Speculative High': 80, - 'Medium': 50, - 'Low': 30 +CATEGORY_COLORS = { + "Hardware": "#E67E22", + "Theoretical Foundation": "#7F8C8D", + "AI Milestone": "#16A085", + "Model Release": "#8E44AD", + "Model/Architecture": "#9B59B6", + "Dataset": "#27AE60", + "Robotics": "#E74C3C", + "AI Winter": "#BDC3C7", + "Infrastructure": "#8B4513", + "Generative": "#FF69B4", + "Reasoning/Agentic": "#1D8348", + "Quantum/Future Speculative": "#9B59B6", + "Speculative": "#9B59B6", } -def get_short_label(event): - """Extract short label for annotation.""" - # Key events to label - labels = { - 'Vacuum Tube': 'Vacuum Tube', - 'Turing Machine': 'Turing Machine', - 'Shannon': 'Shannon', - 'ENIAC': 'ENIAC', - 'Transistor': 'Transistor', - 'Dartmouth': 'AI Born', - 'Perceptron': 'Perceptron', - 'Integrated Circuit': 'IC', - 'Moore': "Moore's Law", - 'Intel 4004': '4004', - 'First AI Winter': 'AI Winter I', - 'Backpropagation': 'Backprop', - 'Second AI Winter': 'AI Winter II', - 'Deep Blue': 'Deep Blue', - 'CUDA': 'CUDA', - 'ImageNet dataset': 'ImageNet', - 'AWS launch': 'AWS', - 'AlexNet': 'AlexNet', - 'DQN': 'DQN', - 'GANs': 'GANs', - 'AlphaGo': 'AlphaGo', - 'Transformers': 'Transformers', - 'GPT-1': 'GPT-1/BERT', - 'GPT-2': 'GPT-2', - 'GPT-3': 'GPT-3', - 'AlphaFold': 'AlphaFold', - 'DALL-E': 'DALL-E', - 'ChatGPT': 'ChatGPT', - 'Stable Diffusion': 'Stable Diff', - 'GPT-4': 'GPT-4', - 'Gemini 1.0': 'Gemini/Llama2', - 'Sora': 'Sora/Claude3/o1', - 'Llama 3.1': 'Llama 3.1', - 'Grok-3': 'Grok-3', - 'o3': 'o3/Claude4', - 'Quantum': 'Quantum', - 'Agentic AI': 'Agentic AI', - 'Optimus': 'Optimus', - 'Omni-modal': 'Omni-modal' - } - - for key, label in labels.items(): - if key in event: - return label - return None - - -def create_timeline_plot(records): - """Create the main timeline visualization.""" - - # Set up the figure - fig, ax = plt.subplots(figsize=(16, 10), dpi=150) - - # Set background - ax.set_facecolor('#FAFAFA') - fig.patch.set_facecolor('white') - - # Add era shading - eras = [ - (1900, 1940, '#F5F5F5', 'Mechanical & Theoretical'), - (1940, 1960, '#E8F4FD', 'Electronic Dawn'), - (1960, 2000, '#E8F8E8', "Moore's Law Scaling"), - (2000, 2012, '#FFF8E8', 'Parallel & Early Deep'), - (2012, 2022, '#F0E8FF', 'Deep 
Learning Big Bang'), - (2022, 2027, '#FFE8E8', 'Reasoning & Agentic Era') - ] - - for start, end, color, label in eras: - ax.axvspan(start, end, alpha=0.5, color=color, zorder=0) - - # Separate records with and without FLOPs - records_with_flops = [r for r in records if r['flops'] is not None] - records_without_flops = [r for r in records if r['flops'] is None] - - # Assign proxy values to records without FLOPs based on year - for r in records_without_flops: - if r['year'] < 1945: - r['flops'] = 1e2 - elif r['year'] < 1960: - r['flops'] = 1e4 - elif r['year'] < 1980: - r['flops'] = 1e6 - elif r['year'] < 2000: - r['flops'] = 1e8 - elif r['year'] < 2010: - r['flops'] = 1e10 - else: - r['flops'] = 1e12 - - all_records = records_with_flops + records_without_flops - all_records.sort(key=lambda x: x['year']) - - # Extract data for plotting - years = [r['year'] for r in all_records] - flops = [r['flops'] for r in all_records] - colors = [CATEGORY_COLORS.get(r['primary_category'], '#3498DB') for r in all_records] - sizes = [IMPACT_SIZES.get(r['impact'], 50) for r in all_records] - - # Plot connecting line (thin) - split pre/post 2010 to show discontinuity - # Pre-2010: dashed line (proxy values, not directly comparable) - pre_2010 = [(y, f) for y, f, r in zip(years, flops, all_records) if r['year'] < 2010] - post_2010 = [(y, f) for y, f, r in zip(years, flops, all_records) if r['year'] >= 2010] - - if pre_2010: - pre_years, pre_flops = zip(*pre_2010) - ax.plot(pre_years, pre_flops, '--', color='#2C3E50', alpha=0.25, linewidth=1, zorder=1) - - if post_2010: - post_years, post_flops = zip(*post_2010) - ax.plot(post_years, post_flops, '-', color='#2C3E50', alpha=0.4, linewidth=1.2, zorder=1) - - # Plot scatter points - for i, r in enumerate(all_records): - marker = 'o' - edgecolor = 'white' - alpha = 1.0 - - # Special markers for speculative/future - if 'Speculative' in r['impact'] or r['year'] >= 2026: - marker = 'd' # Diamond for speculative - edgecolor = '#333333' - alpha = 0.7 - - # Special for AI Winter - if 'Winter' in r['category']: - marker = 'v' # Triangle down - - ax.scatter(r['year'], r['flops'], - c=colors[i], s=sizes[i], - marker=marker, alpha=alpha, - edgecolors=edgecolor, linewidths=1.5, - zorder=3) - - # Add annotations for key events - annotations = [] - for r in all_records: - label = get_short_label(r['event']) - if label: - annotations.append({ - 'year': r['year'], - 'flops': r['flops'], - 'label': label, - 'impact': r['impact'] - }) - - # Add labels with smart positioning - custom offsets per label - # Format: (x_offset, y_multiplier, rotation) - # Negative x_offset = label to left of point - label_positions = { - 'Vacuum Tube': (2, 3, 25), - 'Turing Machine': (2, 4, 30), - 'Shannon': (-8, 0.3, -30), - 'ENIAC': (2, 3, 25), - 'Transistor': (2, 4, 30), - 'AI Born': (2, 3, 25), - 'Perceptron': (-6, 0.4, -25), - 'IC': (2, 3, 25), - "Moore's Law": (2, 3, 30), - '4004': (-5, 0.4, -20), - 'AI Winter I': (2, 3, 25), - 'Backprop': (2, 4, 30), - 'AI Winter II': (-6, 0.3, -20), - 'Deep Blue': (2, 4, 30), - 'CUDA': (1, 4, 35), - 'ImageNet': (1, 3, 30), - 'AWS': (-5, 0.3, -25), - 'AlexNet': (1.5, 3, 35), - 'DQN': (-4, 0.3, -20), - 'GANs': (-5, 0.4, -25), - 'AlphaGo': (1, 3, 35), - 'Transformers': (-5, 0.3, -30), - 'GPT-1/BERT': (1, 3, 30), - 'GPT-2': (1, 2.5, 30), - 'GPT-3': (0.8, 2.5, 35), - 'AlphaFold': (-3.5, 0.35, -25), - 'DALL-E': (-3.5, 0.45, -20), - 'ChatGPT': (0.6, 3, 45), - 'Stable Diff': (-3.5, 0.25, -35), - 'GPT-4': (0.5, 2.2, 50), - 'Gemini/Llama2': None, # Skip - covered by 
cluster label - 'Sora/Claude3/o1': (0.4, 2, 50), - 'Llama 3.1': None, # Skip - covered by cluster label - 'Grok-3': (0.4, 1.8, 50), - 'o3/Claude4': (-1.8, 0.55, -40), - 'Quantum': (-3.5, 0.4, -30), - 'Agentic AI': (0.3, 1.6, 55), - 'Optimus': (-1.5, 0.6, -40), - 'Omni-modal': (0.3, 1.5, 55), - } - - # Labels to skip (covered by cluster annotation) - skip_labels = {'Gemini/Llama2', 'Llama 3.1'} - - for ann in annotations: - # Skip labels covered by cluster annotation - if ann['label'] in skip_labels: - continue - - pos = label_positions.get(ann['label'], (1.5, 2.5, 30)) - if pos is None: +def load_data() -> pd.DataFrame: + script_dir = os.path.dirname(os.path.abspath(__file__)) + csv_path = os.path.join(os.path.dirname(script_dir), "data", "ai_milestones.csv") + df = pd.read_csv(csv_path) + for column in ("value_numeric", "value_low", "value_high"): + df[column] = pd.to_numeric(df[column], errors="coerce") + return df + + +def primary_category(value: str) -> str: + return str(value).split(";")[0].strip() + + +def plotted_value(row: pd.Series) -> float: + if np.isfinite(row["value_numeric"]): + return row["value_numeric"] + year = row["year"] + if year < 1945: + return 1e2 + if year < 1960: + return 1e4 + if year < 1980: + return 1e6 + if year < 2000: + return 1e8 + if year < 2012: + return 1e10 + return 1e12 + + +def plot_panel(ax, df: pd.DataFrame, title: str, xlim: tuple[int, int], label_limit: int) -> None: + panel = df[(df["year"] >= xlim[0]) & (df["year"] <= xlim[1])].copy() + panel["plot_value"] = panel.apply(plotted_value, axis=1) + + ax.set_facecolor("#FAFAFA") + ax.axvspan(xlim[0], min(2022, xlim[1]), color="#F0F0F0", alpha=0.35, zorder=0) + if xlim[1] >= 2022: + ax.axvspan(2022, min(2026, xlim[1]), color="#F0E8FF", alpha=0.35, zorder=0) + ax.axvspan(2026, xlim[1], color="#FFE8E8", alpha=0.45, zorder=0) + ax.axvline(2022, color="#666", linestyle="--", linewidth=1) + ax.axvline(2026, color="#b91c1c", linestyle="--", linewidth=1) + ax.text(2022.15, 3e28, "current frontier", fontsize=8, color="#555") + ax.text(2026.05, 3e28, "speculative", fontsize=8, color="#8a1f1f") + + for status, style in STATUS_STYLES.items(): + status_df = panel[panel["estimate_status"] == status] + if status_df.empty: continue - - x_offset, y_mult, rotation = pos - - fontsize = 8 - if 'Transformative' in ann['impact']: - fontsize = 9 - - ha = 'left' if x_offset >= 0 else 'right' - - ax.annotate(ann['label'], - xy=(ann['year'], ann['flops']), - xytext=(ann['year'] + x_offset, ann['flops'] * y_mult), - fontsize=fontsize, - ha=ha, - rotation=rotation, - alpha=0.85, - arrowprops=dict(arrowstyle='-', color='gray', alpha=0.4, lw=0.5), - zorder=4) - - # Configure axes - ax.set_yscale('log') - ax.set_xlim(1898, 2028) + colors = [CATEGORY_COLORS.get(primary_category(cat), "#3498DB") for cat in status_df["category"]] + ax.scatter( + status_df["year"], + status_df["plot_value"], + c=colors, + s=95, + marker=style["marker"], + alpha=style["alpha"], + edgecolors="white", + linewidths=1.2, + label=style["label"], + zorder=3, + ) + bounded = status_df.dropna(subset=["value_low", "value_high"]) + if not bounded.empty: + ax.vlines( + bounded["year"], + bounded["value_low"], + bounded["value_high"], + colors="#444", + alpha=0.25, + linewidth=1.5, + zorder=2, + ) + + label_rows = panel.dropna(subset=["display_label"]) + label_rows = label_rows[label_rows["display_label"].astype(str).str.strip() != ""] + label_rows = label_rows.sort_values("plot_value", ascending=False).head(label_limit) + for _, row in label_rows.iterrows(): + 
ax.annotate( + row["display_label"], + xy=(row["year"], row["plot_value"]), + xytext=(4, 8), + textcoords="offset points", + fontsize=8, + arrowprops=dict(arrowstyle="-", color="#777", lw=0.5, alpha=0.5), + ) + + ax.set_title(title, fontsize=13, fontweight="bold") + ax.set_yscale("log") + ax.set_xlim(*xlim) ax.set_ylim(1e1, 1e29) + ax.grid(True, which="major", axis="y", alpha=0.25) + ax.grid(True, which="major", axis="x", alpha=0.15) - # X-axis configuration - ax.set_xlabel('Year', fontsize=14, fontweight='bold') - major_ticks = range(1900, 2030, 10) - minor_ticks = range(1900, 2030, 5) - ax.set_xticks(major_ticks) - ax.set_xticks(minor_ticks, minor=True) - ax.tick_params(axis='x', labelsize=11) - # Y-axis configuration - ax.set_ylabel('Total Training Compute (FLOPs, log₁₀)', fontsize=14, fontweight='bold') - y_ticks = [10.0**i for i in range(0, 30, 3)] - ax.set_yticks(y_ticks) - ax.set_yticklabels([f'$10^{{{int(np.log10(float(y)))}}}$' for y in y_ticks], fontsize=10) - ax.tick_params(axis='y', labelsize=10) +def create_chart(df: pd.DataFrame): + fig, axes = plt.subplots(1, 2, figsize=(16, 7), dpi=150, sharey=True) + fig.patch.set_facecolor("white") - # Grid - ax.grid(True, which='major', axis='y', linestyle='-', alpha=0.3, color='gray') - ax.grid(True, which='minor', axis='y', linestyle=':', alpha=0.2, color='gray') - ax.grid(True, which='major', axis='x', linestyle='-', alpha=0.2, color='gray') + plot_panel(axes[0], df, "1900-2011: historical foundations and proxies", (1900, 2011), 12) + plot_panel(axes[1], df, "2012-2026: frontier training compute", (2012, 2027), 10) - # Title - ax.set_title('History of Compute & Intelligence:\nTraining FLOPs for Key AI Milestones (1900–2026)', - fontsize=18, fontweight='bold', pad=20) + axes[0].set_ylabel("Value (log scale; unit depends on marker)", fontweight="bold") + for ax in axes: + ax.set_xlabel("Year", fontweight="bold") - # Add Moore's Law reference line (doubling every 2 years ≈ 0.15 log10/year) - moore_years = np.array([1965, 2005]) - moore_start = 1e6 # Rough starting point - moore_flops = moore_start * (2 ** ((moore_years - 1965) / 2)) - ax.plot(moore_years, moore_flops, '--', color='#E67E22', alpha=0.6, linewidth=2, - label="Moore's Law trajectory") - - # Create legend - # Category legend - category_handles = [] - for cat, color in CATEGORY_COLORS.items(): - if any(r['primary_category'] == cat for r in all_records): - category_handles.append(mpatches.Patch(color=color, label=cat)) - - # Impact size legend - size_handles = [ - Line2D([0], [0], marker='o', color='w', markerfacecolor='gray', - markersize=np.sqrt(IMPACT_SIZES['Transformative']/3), label='Transformative'), - Line2D([0], [0], marker='o', color='w', markerfacecolor='gray', - markersize=np.sqrt(IMPACT_SIZES['High']/3), label='High'), - Line2D([0], [0], marker='o', color='w', markerfacecolor='gray', - markersize=np.sqrt(IMPACT_SIZES['Medium']/3), label='Medium'), - Line2D([0], [0], marker='d', color='w', markerfacecolor='gray', - markersize=8, label='Speculative', markeredgecolor='black') + handles = [ + Line2D([0], [0], marker=style["marker"], color="w", markerfacecolor="#6b7280", + markeredgecolor="white", markersize=8, label=style["label"]) + for style in STATUS_STYLES.values() ] - - # Moore's Law line - moore_handle = Line2D([0], [0], linestyle='--', color='#E67E22', - linewidth=2, label="Moore's Law") - - # Position legends - leg1 = ax.legend(handles=category_handles, loc='upper left', - fontsize=8, title='Category', title_fontsize=9, - framealpha=0.9, bbox_to_anchor=(1.01, 
1)) - ax.add_artist(leg1) - - leg2 = ax.legend(handles=size_handles + [moore_handle], loc='lower left', - fontsize=8, title='Impact / Type', title_fontsize=9, - framealpha=0.9, bbox_to_anchor=(1.01, 0)) - - # Add era labels at top - for start, end, color, label in eras: - mid = (start + end) / 2 - ax.text(mid, 3e28, label, ha='center', va='bottom', fontsize=7, - rotation=0, alpha=0.7, style='italic') - - # Add "2023-2025 Frontier Cluster" bracket annotation - ax.annotate('', xy=(2022.8, 1e24), xytext=(2022.8, 5e26), - arrowprops=dict(arrowstyle='-[, widthB=1.5', color='#666', lw=1.5)) - ax.text(2022.3, 7e24, '2023–25\nFrontier\nCluster\n(10²⁴–10²⁶)', - fontsize=7, ha='right', va='center', color='#444', style='italic') - - # Add note box with data source credit - note_text = ("Log scale: exponential growth appears as straight lines.\n" - "Pre-2010 values are rough proxies (ops/sec equivalents, not directly comparable).\n" - "Dashed line indicates proxy era; solid line = actual training compute.\n" - "Speculative 2026+ points marked with diamonds.\n" - "Sources: Epoch AI, Our World in Data, scaling reports. Estimates as of Jan 2026.") - - props = dict(boxstyle='round,pad=0.5', facecolor='white', alpha=0.8, edgecolor='gray') - ax.text(0.02, 0.02, note_text, transform=ax.transAxes, fontsize=7, - verticalalignment='bottom', bbox=props, family='sans-serif') - - plt.tight_layout() - plt.subplots_adjust(right=0.82) - - return fig, ax - - -def main(): - # Determine output directory + fig.legend(handles=handles, loc="upper center", ncol=5, frameon=False, bbox_to_anchor=(0.5, 0.98)) + fig.suptitle("AI Compute Timeline", fontsize=18, fontweight="bold", y=1.04) + fig.text( + 0.5, + 0.01, + "Footnote: training FLOPs, ops/sec proxies, no-unit milestones, estimates, projections, and speculative rows are separated by marker and data fields. 
Dense labels are moved into hover text in the interactive chart.", + ha="center", + fontsize=8, + color="#555", + ) + plt.tight_layout(rect=[0, 0.04, 1, 0.93]) + return fig + + +def main() -> None: script_dir = os.path.dirname(os.path.abspath(__file__)) - output_dir = os.path.join(os.path.dirname(script_dir), 'output') + output_dir = os.path.join(os.path.dirname(script_dir), "output") os.makedirs(output_dir, exist_ok=True) - - # Load and parse data - df = load_data() - records = parse_data(df) - print(f"Parsed {len(records)} records") - - # Create plot - fig, ax = create_timeline_plot(records) - - # Save outputs - fig.savefig(os.path.join(output_dir, 'ai_compute_timeline.png'), dpi=300, - bbox_inches='tight', facecolor='white', edgecolor='none') - print("Saved: ai_compute_timeline.png (300 DPI)") - - fig.savefig(os.path.join(output_dir, 'ai_compute_timeline.svg'), format='svg', - bbox_inches='tight', facecolor='white', edgecolor='none') - print("Saved: ai_compute_timeline.svg") - - fig.savefig(os.path.join(output_dir, 'ai_compute_timeline_highres.png'), dpi=400, - bbox_inches='tight', facecolor='white', edgecolor='none') - print("Saved: ai_compute_timeline_highres.png (400 DPI)") - - print("\nDone!") + fig = create_chart(load_data()) + fig.savefig(os.path.join(output_dir, "ai_compute_timeline.png"), dpi=300, bbox_inches="tight", facecolor="white") + fig.savefig(os.path.join(output_dir, "ai_compute_timeline_highres.png"), dpi=400, bbox_inches="tight", facecolor="white") + fig.savefig(os.path.join(output_dir, "ai_compute_timeline.svg"), format="svg", bbox_inches="tight", facecolor="white") + print("Saved AI compute static outputs") -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/ai-compute-timeline/src/ai_compute_timeline_plotly.py b/ai-compute-timeline/src/ai_compute_timeline_plotly.py index e7148b4..8602363 100644 --- a/ai-compute-timeline/src/ai_compute_timeline_plotly.py +++ b/ai-compute-timeline/src/ai_compute_timeline_plotly.py @@ -1,348 +1,164 @@ #!/usr/bin/env python3 -""" -History of Compute & Intelligence: Training FLOPs for Key AI Milestones (1900-2026) -Interactive Plotly version with hover tooltips -""" +"""Interactive AI compute timeline with speculative filtering.""" -import numpy as np -import pandas as pd -import os -import re - -try: - import plotly.graph_objects as go - from plotly.subplots import make_subplots - PLOTLY_AVAILABLE = True -except ImportError: - PLOTLY_AVAILABLE = False - print("Plotly not installed. 
Run: pip install plotly") - - -def load_data(): - """Load data from CSV file.""" - script_dir = os.path.dirname(os.path.abspath(__file__)) - csv_path = os.path.join(os.path.dirname(script_dir), 'data', 'ai_milestones.csv') - return pd.read_csv(csv_path) - - -def parse_flops(value): - """Convert FLOPs string to numeric value.""" - if not value or value == 'N/A': - return None - value = str(value).strip() - if 'Speculative' in value: - match = re.search(r'(\d+\.?\d*)e(\d+)', value, re.IGNORECASE) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - return 1e27 - if 'High compute' in value or value == 'Speculative': - return 1e21 - if 'Proxy' in value: - match = re.search(r'(\d+\.?\d*)e(\d+)', value, re.IGNORECASE) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - if 'low' in value.lower(): - return 1e6 - return 1e3 - range_match = re.search(r'(\d+\.?\d*)e(\d+)[-–](\d+\.?\d*)e(\d+)', value, re.IGNORECASE) - if range_match: - low = float(f"{range_match.group(1)}e{range_match.group(2)}") - high = float(f"{range_match.group(3)}e{range_match.group(4)}") - return np.sqrt(low * high) - if value.startswith('>'): - match = re.search(r'(\d+\.?\d*)e(\d+)', value, re.IGNORECASE) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - if 'few' in value.lower(): - match = re.search(r'e(\d+)', value, re.IGNORECASE) - if match: - return 3e23 # Use float notation instead of 3 * 10**23 - match = re.search(r'^~?(\d+\.?\d*)[eE](\d+)', value) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - match = re.search(r'(\d+\.?\d*)[eE]\+?(\d+)', value) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - match = re.search(r'~?(\d+\.?\d*)e(\d+)\+?', value, re.IGNORECASE) - if match: - return float(f"{match.group(1)}e{match.group(2)}") - return None - - -def get_primary_category(cat_str): - if not cat_str: - return 'Other' - return cat_str.split(';')[0].strip() +from __future__ import annotations +import os -def parse_data(df): - """Parse DataFrame into structured format.""" - records = [] - for _, row in df.iterrows(): - year = int(row['Year']) - record = { - 'year': year, - 'event': str(row['Event']), - 'category': str(row['Category']), - 'flops_raw': str(row['Compute_FLOPs']), - 'parameters': str(row['Parameters']) if pd.notna(row['Parameters']) else 'N/A', - 'impact': str(row['Impact']) if pd.notna(row['Impact']) else 'Medium' - } - record['flops'] = parse_flops(record['flops_raw']) - record['primary_category'] = get_primary_category(record['category']) - records.append(record) - return records +import numpy as np +import pandas as pd +import plotly.graph_objects as go -CATEGORY_COLORS = { - 'Hardware': '#E67E22', - 'Theoretical Foundation': '#7F8C8D', - 'AI Milestone': '#16A085', - 'Model Release': '#8E44AD', - 'Model/Architecture': '#9B59B6', - 'Dataset': '#27AE60', - 'Robotics': '#E74C3C', - 'AI Winter': '#BDC3C7', - 'Infrastructure': '#8B4513', - 'Generative': '#FF69B4', - 'Reasoning/Agentic': '#1D8348', - 'Quantum/Future Speculative': '#9B59B6', - 'Speculative': '#9B59B6', - 'Other': '#3498DB' +STATUS_SYMBOLS = { + "observed": "circle", + "estimated": "circle-open", + "proxy": "square", + "projection": "triangle-up", + "speculative": "diamond", } -IMPACT_SIZES = { - 'Transformative': 22, - 'Speculative Transformative': 18, - 'High': 14, - 'Speculative High': 12, - 'Medium': 10, - 'Low': 8 +STATUS_COLORS = { + "observed": "#2563eb", + "estimated": "#7c3aed", + "proxy": "#64748b", + "projection": "#f59e0b", + "speculative": "#dc2626", } -def 
create_plotly_chart(records): - # Assign proxy values for records without FLOPs - for r in records: - if r['flops'] is None: - if r['year'] < 1945: - r['flops'] = 1e2 - elif r['year'] < 1960: - r['flops'] = 1e4 - elif r['year'] < 1980: - r['flops'] = 1e6 - elif r['year'] < 2000: - r['flops'] = 1e8 - elif r['year'] < 2010: - r['flops'] = 1e10 - else: - r['flops'] = 1e12 - - records.sort(key=lambda x: x['year']) +def load_data() -> pd.DataFrame: + script_dir = os.path.dirname(os.path.abspath(__file__)) + csv_path = os.path.join(os.path.dirname(script_dir), "data", "ai_milestones.csv") + df = pd.read_csv(csv_path) + for column in ("value_numeric", "value_low", "value_high"): + df[column] = pd.to_numeric(df[column], errors="coerce") + return df.sort_values("year") + + +def plotted_value(row: pd.Series) -> float: + if np.isfinite(row["value_numeric"]): + return row["value_numeric"] + year = row["year"] + if year < 1945: + return 1e2 + if year < 1960: + return 1e4 + if year < 1980: + return 1e6 + if year < 2000: + return 1e8 + if year < 2012: + return 1e10 + return 1e12 + + +def create_chart(df: pd.DataFrame) -> go.Figure: + df = df.copy() + df["plot_value"] = df.apply(plotted_value, axis=1) fig = go.Figure() - # Add era shading as shapes - eras = [ - (1900, 1940, 'rgba(200,200,200,0.2)', 'Mechanical & Theoretical'), - (1940, 1960, 'rgba(100,150,220,0.2)', 'Electronic Dawn'), - (1960, 2000, 'rgba(100,200,100,0.2)', "Moore's Law Scaling"), - (2000, 2012, 'rgba(255,200,100,0.2)', 'Parallel & Early Deep'), - (2012, 2022, 'rgba(180,150,220,0.2)', 'Deep Learning Big Bang'), - (2022, 2027, 'rgba(255,150,150,0.2)', 'Reasoning & Agentic Era') - ] - - for start, end, color, label in eras: - fig.add_vrect(x0=start, x1=end, fillcolor=color, line_width=0, - annotation_text=label, annotation_position="top", - annotation_font_size=10, annotation_font_color="gray") - - # Add connecting line - split pre/post 2010 to show discontinuity - pre_2010 = [(r['year'], r['flops']) for r in records if r['year'] < 2010] - post_2010 = [(r['year'], r['flops']) for r in records if r['year'] >= 2010] - - if pre_2010: - pre_years, pre_flops = zip(*pre_2010) - fig.add_trace(go.Scatter( - x=pre_years, y=pre_flops, mode='lines', - line=dict(color='rgba(50,50,50,0.25)', width=1, dash='dash'), - showlegend=False, hoverinfo='skip', name='Pre-2010 (proxy)' - )) - - if post_2010: - post_years, post_flops = zip(*post_2010) - fig.add_trace(go.Scatter( - x=post_years, y=post_flops, mode='lines', - line=dict(color='rgba(50,50,50,0.4)', width=1.5), - showlegend=False, hoverinfo='skip', name='Post-2010 (actual)' - )) - - # Group records by category for legend - categories = {} - for r in records: - cat = r['primary_category'] - if cat not in categories: - categories[cat] = [] - categories[cat].append(r) - - # Add scatter points by category - for cat, cat_records in categories.items(): - color = CATEGORY_COLORS.get(cat, '#3498DB') - - x_vals = [r['year'] for r in cat_records] - y_vals = [r['flops'] for r in cat_records] - sizes = [IMPACT_SIZES.get(r['impact'], 10) for r in cat_records] - symbols = ['diamond' if 'Speculative' in r['impact'] or r['year'] >= 2026 - else ('triangle-down' if 'Winter' in r['category'] else 'circle') - for r in cat_records] - - hover_texts = [ - f"{r['event'][:60]}...
    " + - f"Year: {int(r['year'])}
    " + - f"Category: {r['category']}
    " + - f"Compute: {r['flops_raw']}
    " + - f"Parameters: {r['parameters']}
    " + - f"Impact: {r['impact']}" - for r in cat_records + for start, end, label, color in [ + (1900, 2012, "historical foundations", "rgba(148,163,184,0.12)"), + (2012, 2022, "deep learning scaling", "rgba(124,58,237,0.10)"), + (2022, 2026, "current frontier", "rgba(59,130,246,0.10)"), + (2026, 2027, "speculative", "rgba(220,38,38,0.12)"), + ]: + fig.add_vrect(x0=start, x1=end, fillcolor=color, line_width=0, annotation_text=label, annotation_position="top") + + for status in ["observed", "estimated", "proxy", "projection", "speculative"]: + status_df = df[df["estimate_status"] == status] + if status_df.empty: + continue + visible = True if status != "speculative" else "legendonly" + hover = [ + f"{row.event}
    Year: {row.year}
    " + f"Value: {row.value_numeric if pd.notna(row.value_numeric) else 'context marker'}
    " + f"Unit: {row.value_unit}
    Status: {row.estimate_status}
    " + f"Confidence: {row.confidence}
    Source: {row.source_id or 'none'}
    {row.notes}" + for row in status_df.itertuples() ] - fig.add_trace(go.Scatter( - x=x_vals, y=y_vals, mode='markers', + x=status_df["year"], + y=status_df["plot_value"], + mode="markers", marker=dict( - size=sizes, color=color, - symbol=symbols[0] if len(set(symbols)) == 1 else symbols, - line=dict(width=1, color='white') + size=12, + color=STATUS_COLORS[status], + symbol=STATUS_SYMBOLS[status], + line=dict(width=1.2, color="white"), ), - name=cat, - text=hover_texts, - hoverinfo='text' + name=status.title(), + text=hover, + hoverinfo="text", + visible=visible, )) - # Add Moore's Law reference line - moore_years = np.linspace(1965, 2005, 100) - moore_start = 1e6 - moore_flops = moore_start * np.power(2, (moore_years - 1965) / 2) - fig.add_trace(go.Scatter( - x=moore_years, y=moore_flops, mode='lines', - line=dict(color='#E67E22', width=2, dash='dash'), - name="Moore's Law trajectory", - hoverinfo='skip' - )) - - # Add key event annotations - key_events = [ - (1936, 'Turing Machine'), (1945, 'ENIAC'), (1947, 'Transistor'), - (1956, 'AI Born'), (1965, "Moore's Law"), (1997, 'Deep Blue'), - (2007, 'CUDA'), (2009, 'ImageNet'), (2012, 'AlexNet'), - (2016, 'AlphaGo'), (2017, 'Transformers'), (2020, 'GPT-3'), - (2022, 'ChatGPT'), (2023, 'GPT-4'), (2025, 'Grok-3') - ] - - for r in records: - for yr, lbl in key_events: - if abs(r['year'] - yr) < 0.5 and lbl in r['event']: - fig.add_annotation( - x=r['year'], y=r['flops'], - text=lbl, showarrow=True, - arrowhead=0, arrowsize=0.5, arrowwidth=1, - arrowcolor='gray', - ax=30, ay=-40, - font=dict(size=9, color='#333'), - bgcolor='rgba(255,255,255,0.8)', - borderpad=2 + bounded = status_df.dropna(subset=["value_low", "value_high"]) + if not bounded.empty: + for _, row in bounded.iterrows(): + fig.add_shape( + type="line", + x0=row["year"], + x1=row["year"], + y0=row["value_low"], + y1=row["value_high"], + line=dict(color="rgba(30,41,59,0.25)", width=1), ) - break - # Layout configuration + label_df = df[df["display_label"].notna()].copy() + label_df = label_df[label_df["display_label"].astype(str).str.strip() != ""] + label_df = label_df.sort_values("plot_value", ascending=False).head(14) + for _, row in label_df.iterrows(): + fig.add_annotation( + x=row["year"], + y=row["plot_value"], + text=row["display_label"], + showarrow=True, + arrowhead=0, + arrowwidth=1, + arrowcolor="rgba(100,100,100,0.45)", + ax=20, + ay=-28, + font=dict(size=9), + bgcolor="rgba(255,255,255,0.85)", + ) + + fig.add_vline(x=2022, line_dash="dash", line_color="#64748b") + fig.add_vline(x=2026, line_dash="dash", line_color="#dc2626") + fig.update_layout( - title=dict( - text='History of Compute & Intelligence
    ' + - 'Training FLOPs for Key AI Milestones (1900–2026)', - x=0.5, font=dict(size=20) - ), - xaxis=dict( - title='Year', range=[1898, 2028], - tickmode='linear', tick0=1900, dtick=10, - gridcolor='rgba(128,128,128,0.2)', - minor=dict(tickmode='linear', tick0=1900, dtick=5) - ), - yaxis=dict( - title='Total Training Compute (FLOPs, log₁₀)', - type='log', range=[1, 29], - gridcolor='rgba(128,128,128,0.3)', - tickformat='.0e' - ), - legend=dict( - title='Category', - yanchor='top', y=0.99, xanchor='left', x=1.02, - bgcolor='rgba(255,255,255,0.9)' - ), - plot_bgcolor='#FAFAFA', - paper_bgcolor='white', - width=1400, height=800, - margin=dict(r=200, t=100, b=80), - hovermode='closest', - autosize=True + title="AI Compute Timeline
    Training FLOPs, proxies, estimates, and speculative projections separated structurally", + xaxis=dict(title="Year", range=[1898, 2028], dtick=10), + yaxis=dict(title="Value (log scale; see hover for unit)", type="log", range=[1, 29]), + legend=dict(orientation="h", y=1.03, x=0.5, xanchor="center", yanchor="bottom"), + plot_bgcolor="#FAFAFA", + paper_bgcolor="white", + hovermode="closest", + autosize=True, + margin=dict(t=120, b=90, l=80, r=40), ) - - # Add frontier cluster bracket annotation fig.add_annotation( - x=2022.5, y=3e25, - text="2023–25 Frontier Cluster
    (10²⁴–10²⁶ FLOPs)", + text="Footnote: speculative rows are hidden by default in the legend. Toggle 'Speculative' to show future projections. Ops/sec proxies are not training FLOPs.", + xref="paper", + yref="paper", + x=0.5, + y=-0.14, showarrow=False, - font=dict(size=10, color='#444'), - bgcolor='rgba(255,255,255,0.85)', - borderpad=4, - xanchor='right' + font=dict(size=10, color="#666"), ) - - # Add note - fig.add_annotation( - text="Log scale: exponential growth appears as straight lines.
    " + - "Pre-2010 values are rough proxies (ops/sec, not directly comparable).
    " + - "Speculative 2026+ points marked with diamonds.
    " + - "Sources: Epoch AI, Our World in Data. Estimates as of Jan 2026.", - xref='paper', yref='paper', x=0.01, y=0.01, - showarrow=False, font=dict(size=9, color='#666'), - bgcolor='rgba(255,255,255,0.9)', borderpad=5, - align='left' - ) - return fig -def main(): - if not PLOTLY_AVAILABLE: - print("Please install plotly: pip install plotly") - return - - # Determine output directory +def main() -> None: script_dir = os.path.dirname(os.path.abspath(__file__)) - output_dir = os.path.join(os.path.dirname(script_dir), 'output') + output_dir = os.path.join(os.path.dirname(script_dir), "output") os.makedirs(output_dir, exist_ok=True) - - # Load and parse data - df = load_data() - records = parse_data(df) - print(f"Parsed {len(records)} records") - - fig = create_plotly_chart(records) - - # Save as interactive HTML - html_path = os.path.join(output_dir, 'ai_compute_timeline_interactive.html') - fig.write_html(html_path, include_plotlyjs="cdn") - print(f"Saved: {html_path}") - - # Save as static image (requires kaleido) - try: - png_path = os.path.join(output_dir, 'ai_compute_timeline_plotly.png') - fig.write_image(png_path, scale=2) - print(f"Saved: {png_path}") - except Exception: - print("Note: Static image export requires kaleido: pip install kaleido") - - print("\nDone!") + fig = create_chart(load_data()) + fig.write_html(os.path.join(output_dir, "ai_compute_timeline_interactive.html"), include_plotlyjs="cdn") + print("Saved ai_compute_timeline_interactive.html") -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/build_all.py b/build_all.py index 11aab55..d169e71 100644 --- a/build_all.py +++ b/build_all.py @@ -4,22 +4,11 @@ Regenerates all static and interactive visualizations from source. """ -import os import sys import subprocess from pathlib import Path -# Plot directories (order matters for dependencies) -PLOT_DIRS = [ - "ai-compute-timeline", - "adoption-timeline", - "energetic-scaling", - "civilization-scaling", - "energy-leverage-per-person", - "model-sizes", - "ai-benchmark-progress", - "cost-to-train", -] +from scripts.manifest_utils import plot_entries def run_script(script_path: Path, plot_name: str) -> bool: @@ -83,7 +72,8 @@ def main(): total_success = 0 total_failed = 0 - for plot_name in PLOT_DIRS: + for entry in plot_entries(root, published_only=True): + plot_name = entry["id"] plot_dir = root / plot_name if not plot_dir.exists(): diff --git a/civilization-scaling/data/meta.json b/civilization-scaling/data/meta.json index f331843..e11044b 100644 --- a/civilization-scaling/data/meta.json +++ b/civilization-scaling/data/meta.json @@ -1,6 +1,6 @@ { "title": "Scaling Civilization", - "description": "Multi-lane log-time timeline showing Energy, Coordination, Memory, and Replication metrics from 1M years ago to 2030+.", + "description": "Multi-lane log-time timeline showing Energy, Coordination, Memory, Replication, and Latency metrics from 1M years ago to 2030+.", "fields": { "Lane": "Metric lane (Energy, Coordination, Memory, Replication, Latency)", "Era": "Historical era (Prehistory, Neolithic, Ancient, Pre-Modern, Industrial, Modern, Digital, Future)", @@ -38,7 +38,21 @@ "notes": "Technology acceleration patterns" } ], - "transformations": "Log x-axis: -log10(years_ago + 1) to compress prehistory and expand modern era. 
Log y-axis for each lane.", + "lane_glossary": { + "Energy": "Energy available per person; higher is more leverage.", + "Coordination": "Approximate size of stable coordination group; higher is broader coordination.", + "Memory": "Externalized information storage per person; higher is more cumulative memory.", + "Replication": "Approximate time to copy an artifact or message; lower is faster replication.", + "Latency": "Approximate communication delay; lower is faster coordination." + }, + "metric_direction": { + "Energy": "up", + "Coordination": "up", + "Memory": "up", + "Replication": "down", + "Latency": "down" + }, + "transformations": "Log x-axis: -log10(years_ago + 1) to compress prehistory and expand modern era. Log y-axis for each lane. Replication and Latency improve as their values decrease.", "created": "2026-01", "author": "mschwar" } diff --git a/civilization-scaling/index.html b/civilization-scaling/index.html index 12700f0..8a5e068 100644 --- a/civilization-scaling/index.html +++ b/civilization-scaling/index.html @@ -3,13 +3,14 @@ - Scaling Civilization – Energy, Coordination, Memory, Replication + Scaling Civilization – Energy, Coordination, Memory, Replication, Latency @@ -18,9 +19,9 @@

    Scaling Civilization

    -

    Energy, Coordination, Memory, Replication over Log-Time (1M Years → 2030+)

    +

    Energy, Coordination, Memory, Replication, and Latency over Log-Time (1M Years → 2030+)

    - + Civilization Scaling @@ -37,7 +38,7 @@

    Scaling Civilization

    -

    The Four Lanes

    +

    The Five Lanes

    @@ -47,11 +48,15 @@

    The Four Lanes

    +
    Lane | Metric | Trend | Key Flips
    Coordination | Max group | ↑ | Language → Writing → Print → Internet
    Memory | bits/person | ↑ | Symbols → Writing → Print → AI
    Replication | hrs/copy | ↓ | Scribes → Print → Digital → AI Gen
    Latency | ms delay | ↓ | Walking → Telegraph → Fiber → Edge

    About

    Multi-lane log-time timeline. Log x-axis compresses prehistory and expands modern acceleration. Phase flips mark where multiple lanes transition together.

    + +

    How to Read This Chart

    +

    Energy, Coordination, and Memory improve upward. Replication time and Latency improve downward, so lower values indicate faster copying or communication. The lanes are not naturally comparable units; compare phase changes and direction, not raw magnitudes across lanes.
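
    To make the axis convention concrete, a minimal sketch (not part of this change) of the log-time transform and lane directions documented in civilization-scaling/data/meta.json; the helper name log_time_x is illustrative:

    import math

    def log_time_x(years_ago: float) -> float:
        # meta.json transformation: -log10(years_ago + 1) compresses prehistory
        # and expands the modern era on the shared x-axis.
        return -math.log10(years_ago + 1)

    # metric_direction from meta.json: Replication and Latency improve as their
    # values fall; Energy, Coordination, and Memory improve as values rise.
    METRIC_DIRECTION = {
        "Energy": "up",
        "Coordination": "up",
        "Memory": "up",
        "Replication": "down",
        "Latency": "down",
    }

    print(log_time_x(1_000_000), log_time_x(100), log_time_x(0))  # ≈ -6.0, ≈ -2.0, -0.0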

diff --git a/civilization-scaling/output/civilization_scaling.png b/civilization-scaling/output/civilization_scaling.png
index a1fbcb6..c42270f 100644
Binary files a/civilization-scaling/output/civilization_scaling.png and b/civilization-scaling/output/civilization_scaling.png differ
diff --git a/civilization-scaling/output/civilization_scaling.svg b/civilization-scaling/output/civilization_scaling.svg
index 8f6e529..2348883 100644
--- a/civilization-scaling/output/civilization_scaling.svg
+++ b/civilization-scaling/output/civilization_scaling.svg
@@ -6,11 +6,11 @@
- 2026-04-24T11:58:39.416348
+ 2026-04-24T12:52:10.837694
- Matplotlib v3.10.8, https://matplotlib.org/
+ Matplotlib v3.10.9, https://matplotlib.org/
[Regenerated SVG for the five-lane chart; machine-generated path and glyph markup omitted.]
- + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + @@ -3519,42 +3519,42 @@ z +" clip-path="url(#p5d2936fed1)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5d2936fed1)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5d2936fed1)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5d2936fed1)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5d2936fed1)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5d2936fed1)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5d2936fed1)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5d2936fed1)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> - - - + + - - - + + - - - + + - - + + - - + + - - + + - - + + - - + + - - + + - - + + - - + + - - + + - - + + - - - + + @@ -3918,7 +3918,7 @@ L 461.719457 714.480119 L 461.719457 531.940238 L 73.939658 531.940238 z -" clip-path="url(#p45c7cbbff4)" style="fill: #d5d8dc; opacity: 0.3; stroke: #d5d8dc; stroke-linejoin: miter"/> +" clip-path="url(#p5fdc8d703d)" style="fill: #d5d8dc; opacity: 0.3; stroke: #d5d8dc; stroke-linejoin: miter"/> +" clip-path="url(#p5fdc8d703d)" style="fill: #aed6f1; opacity: 0.3; stroke: #aed6f1; stroke-linejoin: miter"/> +" clip-path="url(#p5fdc8d703d)" style="fill: #f9e79f; opacity: 0.3; stroke: #f9e79f; stroke-linejoin: miter"/> +" clip-path="url(#p5fdc8d703d)" style="fill: #fadbd8; opacity: 0.3; stroke: #fadbd8; stroke-linejoin: miter"/> +" clip-path="url(#p5fdc8d703d)" style="fill: #fad7a0; opacity: 0.3; stroke: #fad7a0; stroke-linejoin: miter"/> +" clip-path="url(#p5fdc8d703d)" style="fill: #d5f5e3; opacity: 0.3; stroke: #d5f5e3; stroke-linejoin: miter"/> +" clip-path="url(#p5fdc8d703d)" style="fill: #e8daef; opacity: 0.3; stroke: #e8daef; stroke-linejoin: miter"/> +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke: #3498db; stroke-opacity: 0.4; stroke-width: 1.5; stroke-linecap: square"/> - + - + - + - + - + - + - + @@ -4041,11 +4041,11 @@ L 1285.257627 540.237506 +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -4061,11 +4061,11 @@ L 1280.979798 695.811268 +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -4081,11 +4081,11 @@ L 1280.979798 675.068099 +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -4101,11 +4101,11 @@ L 1280.979798 654.324931 +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -4121,11 +4121,11 @@ L 1280.979798 633.581763 +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: 
square"/> - + @@ -4142,11 +4142,11 @@ L 1280.979798 612.838595 +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -4163,11 +4163,11 @@ L 1280.979798 592.095426 +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -4184,11 +4184,11 @@ L 1280.979798 571.352258 +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -4279,42 +4279,42 @@ z +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p5fdc8d703d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> - - - + + - - - + + - - - + + - - + + - - - + + - - + + - - + + - - + + - - + + - - + + - - + + - - + + - - + + - - - + + @@ -4722,7 +4722,7 @@ L 461.719457 911.32 L 461.719457 728.780119 L 73.939658 728.780119 z -" clip-path="url(#pd781c805fb)" style="fill: #d5d8dc; opacity: 0.3; stroke: #d5d8dc; stroke-linejoin: miter"/> +" clip-path="url(#p51d12e362d)" style="fill: #d5d8dc; opacity: 0.3; stroke: #d5d8dc; stroke-linejoin: miter"/> +" clip-path="url(#p51d12e362d)" style="fill: #aed6f1; opacity: 0.3; stroke: #aed6f1; stroke-linejoin: miter"/> +" clip-path="url(#p51d12e362d)" style="fill: #f9e79f; opacity: 0.3; stroke: #f9e79f; stroke-linejoin: miter"/> +" clip-path="url(#p51d12e362d)" style="fill: #fadbd8; opacity: 0.3; stroke: #fadbd8; stroke-linejoin: miter"/> +" clip-path="url(#p51d12e362d)" style="fill: #fad7a0; opacity: 0.3; stroke: #fad7a0; stroke-linejoin: miter"/> +" clip-path="url(#p51d12e362d)" style="fill: #d5f5e3; opacity: 0.3; stroke: #d5f5e3; stroke-linejoin: miter"/> +" clip-path="url(#p51d12e362d)" style="fill: #e8daef; opacity: 0.3; stroke: #e8daef; stroke-linejoin: miter"/> +" clip-path="url(#p51d12e362d)" style="fill: none; stroke: #9b59b6; stroke-opacity: 0.4; stroke-width: 1.5; stroke-linecap: square"/> - + @@ -4805,7 +4805,7 @@ L 1285.257627 903.022733 - + @@ -4837,7 +4837,7 @@ z - + @@ -4852,7 +4852,7 @@ z - + @@ -4866,7 +4866,7 @@ z - + @@ -4881,7 +4881,7 @@ z - + @@ -4895,7 +4895,7 @@ z - + @@ -5109,11 +5109,11 @@ z +" clip-path="url(#p51d12e362d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -5139,11 +5139,11 @@ z +" clip-path="url(#p51d12e362d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -5160,11 +5160,11 @@ L 1280.979798 864.727653 +" 
clip-path="url(#p51d12e362d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -5181,11 +5181,11 @@ L 1280.979798 839.1976 +" clip-path="url(#p51d12e362d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -5202,11 +5202,11 @@ L 1280.979798 813.667546 +" clip-path="url(#p51d12e362d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -5223,11 +5223,11 @@ L 1280.979798 788.137493 +" clip-path="url(#p51d12e362d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -5243,11 +5243,11 @@ L 1280.979798 762.60744 +" clip-path="url(#p51d12e362d)" style="fill: none; stroke: #b0b0b0; stroke-opacity: 0.2; stroke-width: 0.8; stroke-linecap: square"/> - + @@ -5331,42 +5331,42 @@ z +" clip-path="url(#p51d12e362d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p51d12e362d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p51d12e362d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p51d12e362d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p51d12e362d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p51d12e362d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p51d12e362d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> +" clip-path="url(#p51d12e362d)" style="fill: none; stroke-dasharray: 3.7,1.6; stroke-dashoffset: 0; stroke: #888888; stroke-opacity: 0.5"/> - - - + + - - - + + - - - + + - - + + - - - + + - - + + - - + + - - + + - - + + - - + + - - + + - - - + + @@ -7709,16 +7709,16 @@ L 1151.067298 94.88875 - + - + - + - + diff --git a/civilization-scaling/output/civilization_scaling_highres.png b/civilization-scaling/output/civilization_scaling_highres.png index 0856448..27fcae5 100644 Binary files a/civilization-scaling/output/civilization_scaling_highres.png and b/civilization-scaling/output/civilization_scaling_highres.png differ diff --git a/civilization-scaling/output/civilization_scaling_interactive.html b/civilization-scaling/output/civilization_scaling_interactive.html index c4d6370..929dbe8 100644 --- a/civilization-scaling/output/civilization_scaling_interactive.html +++ b/civilization-scaling/output/civilization_scaling_interactive.html @@ -1,7 +1,7 @@ -
    -
    +
    +
    \ No newline at end of file diff --git a/cost-to-train/index.html b/cost-to-train/index.html index 0cd88c3..8c3e8bf 100644 --- a/cost-to-train/index.html +++ b/cost-to-train/index.html @@ -9,12 +9,42 @@

    Cost-to-Train Frontier

    Training cost vs. capability — the efficiency paradox.

    + + + Cost to Train chart showing frontier model training cost and capability over time + + + +
    + Key insight: Frontier training FLOPs rise while dollars per FLOP collapse, creating the efficiency paradox. +
    + +
    +
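One way to see that insight concretely: total training cost is training FLOPs multiplied by dollars per FLOP, so cost keeps climbing whenever FLOPs grows faster than dollars per FLOP falls. A minimal Python sketch with illustrative orders of magnitude (not values from this repository's CSVs):

```python
# Illustrative orders of magnitude only; not values from cost-to-train/data.
# cost_usd = training_flops * dollars_per_flop

runs = [
    # (label, training_flops, dollars_per_flop) are hypothetical anchors
    ("early deep learning", 1e18, 1e-15),   # ~$1k
    ("scaling era",         1e23, 1e-17),   # ~$1M
    ("frontier run",        1e26, 1e-18),   # ~$100M
]

for label, flops, usd_per_flop in runs:
    cost = flops * usd_per_flop
    print(f"{label:>20}: {flops:.0e} FLOPs x {usd_per_flop:.0e} $/FLOP = ${cost:,.0f}")
```

Across these three hypothetical rows, dollars per FLOP falls by three orders of magnitude while total cost still rises by five, which is exactly the paradox the card names.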

    How to Read This Chart

    +

    Compare cost, FLOPs, and capability together. Estimated cost rows are useful for trend direction, but should be audited against source notes before making precise claims.
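One way to act on that caveat is to split estimated rows from measured ones before quoting any figure. The sketch below assumes a root-relative CSV path and an `estimate_status` column (the same flag name the dashboard code checks); the real schema of the cost-to-train dataset is not shown in this diff, so treat both as assumptions.

```python
# Sketch only: the CSV path and column names are assumptions about the cost-to-train dataset.
import csv

def split_by_estimate(path="cost-to-train/data/cost_to_train.csv"):
    measured, estimated = [], []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            status = (row.get("estimate_status") or "").strip().lower()
            (estimated if status in {"estimated", "speculative"} else measured).append(row)
    return measured, estimated

if __name__ == "__main__":
    measured, estimated = split_by_estimate()
    print(f"{len(measured)} measured rows, {len(estimated)} rows to audit against source notes")
```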

    +
    + +
    + Related: + AI Compute Timeline | + Model Sizes +
+
+
+
diff --git a/cost-to-train/output/cost_to_train.svg b/cost-to-train/output/cost_to_train.svg
index 96e1bd0..dd7a3bb 100644
--- a/cost-to-train/output/cost_to_train.svg
+++ b/cost-to-train/output/cost_to_train.svg
[Regenerated Matplotlib SVG diff omitted for readability: creation timestamp 2026-04-24T11:58:54 -> 2026-04-24T12:53:41, Matplotlib v3.10.8 -> v3.10.9, re-randomized clip-path IDs, and the associated path/element churn from re-export.]
diff --git a/cost-to-train/output/cost_to_train_highres.png b/cost-to-train/output/cost_to_train_highres.png
index fe8450f..514dc0b 100644
Binary files a/cost-to-train/output/cost_to_train_highres.png and b/cost-to-train/output/cost_to_train_highres.png differ
diff --git a/cost-to-train/output/cost_to_train_interactive.html b/cost-to-train/output/cost_to_train_interactive.html
index 78b19ff..7b84b48 100644
--- a/cost-to-train/output/cost_to_train_interactive.html
+++ b/cost-to-train/output/cost_to_train_interactive.html
@@ -1,7 +1,7 @@
-
    -
    +
    +
    \ No newline at end of file diff --git a/dashboard/dashboard.js b/dashboard/dashboard.js index 3c843f0..4af101c 100644 --- a/dashboard/dashboard.js +++ b/dashboard/dashboard.js @@ -1,43 +1,57 @@ /** - * Unified Dashboard — Singularity View - * Multi-lane timeline showing all exponential progress plots. + * Unified Dashboard — manifest-driven synchronized atlas view. */ -const LANES = [ - { id: 'energy', name: 'Energy Leverage', color: '#f59e0b', csv: '../energy-leverage-per-person/data/energy_leverage_datapoints.csv' }, - { id: 'compute', name: 'AI Compute (FLOPs)', color: '#60a5fa', csv: '../ai-compute-timeline/data/ai_milestones.csv' }, - { id: 'models', name: 'LLM Model Sizes', color: '#a78bfa', csv: '../model-sizes/data/llm_model_sizes.csv' }, - { id: 'benchmarks', name: 'AI Benchmarks', color: '#34d399', csv: '../ai-benchmark-progress/data/benchmark_data.csv' }, - { id: 'adoption', name: 'Tech Adoption Speed', color: '#f472b6', csv: '../adoption-timeline/data/tech_adoption.csv' }, - { id: 'civilization', name: 'Civilization Phases', color: '#e8eaf6', csv: '../civilization-scaling/data/civilization_metrics.csv' }, +const COLORS = [ + '#60a5fa', '#f472b6', '#34d399', '#e8eaf6', '#f59e0b', + '#a78bfa', '#22c55e', '#fb7185' ]; -const MARGIN = { top: 20, right: 40, bottom: 40, left: 120 }; +const MARGIN = { top: 20, right: 40, bottom: 40, left: 150 }; const LANE_HEIGHT = 80; const WIDTH = 1400 - MARGIN.left - MARGIN.right; const TIME_DOMAIN = [-1000000, 2030]; -async function loadCSV(url) { +async function loadText(url) { const response = await fetch(url); - const text = await response.text(); - return d3.csvParse(text); + if (!response.ok) { + throw new Error(`${url}: ${response.status}`); + } + return response.text(); +} + +async function loadManifest() { + const text = await loadText('../plots_manifest.json'); + return JSON.parse(text) + .filter(entry => entry.status === 'published' && entry.kind !== 'dashboard') + .sort((a, b) => a.order - b.order); +} + +async function loadCSV(url) { + return d3.csvParse(await loadText(url)); +} + +function parseYear(row) { + const raw = row.year || row.Year || row.Years_Ago || row.date; + if (!raw) return null; + if (row.Years_Ago) return 2026 - parseFloat(raw); + const value = parseFloat(String(raw).slice(0, 4)); + return Number.isFinite(value) ? 
value : null; } -function parseYear(value) { - if (!value) return null; - const num = parseFloat(value); - if (isNaN(num)) return null; - return num; +function eventName(row) { + return row.event || row.Event || row.Model || row.name || row.label || row.Benchmark || 'Event'; } function initDashboard() { const container = d3.select('#dashboard-container'); const tooltip = d3.select('body').append('div').attr('class', 'tooltip'); - const totalHeight = LANES.length * (LANE_HEIGHT + 20) + MARGIN.top + MARGIN.bottom; + const totalHeight = 8 * (LANE_HEIGHT + 20) + MARGIN.top + MARGIN.bottom; const svg = container.append('svg') - .attr('width', WIDTH + MARGIN.left + MARGIN.right) - .attr('height', totalHeight); + .attr('viewBox', `0 0 ${WIDTH + MARGIN.left + MARGIN.right} ${totalHeight}`) + .attr('role', 'img') + .attr('aria-label', 'Unified dashboard of published Exponential Progress Atlas timelines'); const g = svg.append('g') .attr('transform', `translate(${MARGIN.left},${MARGIN.top})`); @@ -47,14 +61,23 @@ function initDashboard() { .range([0, WIDTH]) .constant(1000); - Promise.all(LANES.map(lane => loadCSV(lane.csv).then(data => ({ ...lane, data })))) + loadManifest() + .then(entries => Promise.all(entries.map((entry, index) => ( + loadCSV(`../${entry.data}`).then(data => ({ + id: entry.id, + name: entry.short_title, + title: entry.title, + color: COLORS[index % COLORS.length], + data, + })) + )))) .then(lanesWithData => { renderLanes(g, lanesWithData, xScale, tooltip); }) .catch(err => { console.error('Failed to load dashboard data:', err); - container.append('p').style('color', '#ef4444') - .text('Error loading dashboard data. Ensure CSV files are accessible.'); + container.append('p').attr('class', 'error') + .text('Error loading dashboard data. Ensure manifest and CSV files are accessible.'); }); } @@ -80,10 +103,10 @@ function renderLanes(g, lanes, xScale, tooltip) { const events = lane.data .map(d => { - const year = parseYear(d.Year || d.year || d.date); - return year ? { ...d, year } : null; + const year = parseYear(d); + return year === null ? null : { ...d, year }; }) - .filter(d => d !== null) + .filter(Boolean) .sort((a, b) => a.year - b.year); laneG.selectAll('.event-dot') @@ -93,15 +116,15 @@ function renderLanes(g, lanes, xScale, tooltip) { .attr('class', 'event-dot') .attr('cx', d => xScale(d.year)) .attr('cy', LANE_HEIGHT / 2) - .attr('r', 4) + .attr('r', d => (String(d.estimate_status || d.Impact || '').toLowerCase().includes('speculative') ? 5 : 4)) .attr('fill', lane.color) - .attr('opacity', 0.8) + .attr('opacity', d => (String(d.estimate_status || d.Impact || '').toLowerCase().includes('speculative') ? 0.55 : 0.85)) .on('mouseover', function(event, d) { d3.select(this).attr('r', 7).attr('opacity', 1); tooltip.style('opacity', 1) .style('left', (event.pageX + 10) + 'px') .style('top', (event.pageY - 10) + 'px') - .html(`${d.Event || d.Model || d.name || 'Event'}
    Year: ${d.year}`); + .html(`${eventName(d)}
    ${lane.title}
    Year: ${d.year}`); }) .on('mouseout', function() { d3.select(this).attr('r', 4).attr('opacity', 0.8); @@ -125,7 +148,7 @@ function renderLanes(g, lanes, xScale, tooltip) { .attr('text-anchor', 'middle') .attr('fill', '#9ca3af') .attr('font-size', '13px') - .text('Time (years, log scale) →'); + .text('Time (years, symlog scale) ->'); } if (document.readyState === 'loading') { diff --git a/dashboard/index.html b/dashboard/index.html index fd721e0..cd94c92 100644 --- a/dashboard/index.html +++ b/dashboard/index.html @@ -4,12 +4,13 @@ Unified Dashboard — Exponential Progress +

    Unified Dashboard

    -

    All exponential progress timelines on one synchronized view.

    +

    Published Exponential Progress Atlas entries on one synchronized manifest-driven view.
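Because the dashboard now derives its lanes from `plots_manifest.json`, a small pre-flight check of the fields it reads (`id`, `title`, `short_title`, `status`, `kind`, `order`, `data`) can catch broken entries before the browser fails at fetch time. This is a hedged sketch run from the repository root, not one of the existing `scripts/` validators:

```python
# Sketch of a manifest sanity check; the field names mirror what dashboard/dashboard.js reads.
import json
from pathlib import Path

REQUIRED = ("id", "title", "short_title", "status", "kind", "order", "data")

def check_manifest(path="plots_manifest.json"):
    entries = json.loads(Path(path).read_text())
    published = [e for e in entries if e.get("status") == "published" and e.get("kind") != "dashboard"]
    problems = []
    for entry in published:
        missing = [field for field in REQUIRED if field not in entry]
        if missing:
            problems.append(f"{entry.get('id', '?')}: missing {missing}")
        elif not Path(entry["data"]).is_file():
            problems.append(f"{entry['id']}: data file not found: {entry['data']}")
    orders = [e.get("order") for e in published]
    if len(set(orders)) != len(orders):
        problems.append("duplicate 'order' values; dashboard lane sort will be ambiguous")
    return problems

if __name__ == "__main__":
    for problem in check_manifest():
        print("MANIFEST:", problem)
```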

    diff --git a/energetic-scaling/data/ai_training_flops.csv b/energetic-scaling/data/ai_training_flops.csv new file mode 100644 index 0000000..0221909 --- /dev/null +++ b/energetic-scaling/data/ai_training_flops.csv @@ -0,0 +1,4 @@ +year,system,training_flops,impact,source_id,notes +2012,AlexNet,6e17,High,epoch,Deep learning breakthrough. +2020,GPT-3,3.14e23,High,epoch,Scaling era. +2026,Grok-4 estimated,5e26,Transformative,source_review_needed,Frontier estimate retained for comparison; needs source review. diff --git a/energetic-scaling/data/biology_neural_scaling.csv b/energetic-scaling/data/biology_neural_scaling.csv new file mode 100644 index 0000000..779aea2 --- /dev/null +++ b/energetic-scaling/data/biology_neural_scaling.csv @@ -0,0 +1,8 @@ +group,entity,body_mass_kg,neurons_total,neurons_per_kg,impact,source_id,notes +Reptiles,Crocodile (90kg example),90,8.3e7,9.22e5,Low,herculano_houzel,Low neuron density; about 20x fewer than birds/mammals same size. +Birds,Goldcrest (smallest bird),0.0045,1.64e8,3.64e10,High,herculano_houzel,High density outlier in small birds. +Birds,Corvid/Rook (example),0.5,2e9,4e9,High,herculano_houzel,Primate-like neuron counts in forebrain. +Mammals,Mouse,0.02,7.1e7,3.55e9,Medium,herculano_houzel,Rodent baseline. +Mammals,Elephant,4000,2.57e11,6.43e7,High,herculano_houzel,Large absolute count but low neurons per kg. +Primates,Human,70,8.6e10,1.23e9,Transformative,herculano_houzel,Outlier EQ about 7; 86B neurons. +Primates,Marmoset (small primate),0.3,1.4e9,4.67e9,High,herculano_houzel,Linear scaling in primates. diff --git a/energetic-scaling/data/foraging_lht.csv b/energetic-scaling/data/foraging_lht.csv new file mode 100644 index 0000000..87957e2 --- /dev/null +++ b/energetic-scaling/data/foraging_lht.csv @@ -0,0 +1,2 @@ +group,population,age_years,kcal_per_day,impact,source_id,notes +Foragers,Ache/Tsimane peak midlife male,45,9240,High,kaplan_charnov,Skill-based foraging funds brain and offspring; LHT/OFT anchor. diff --git a/energetic-scaling/data/hardware_efficiency.csv b/energetic-scaling/data/hardware_efficiency.csv new file mode 100644 index 0000000..fc959f3 --- /dev/null +++ b/energetic-scaling/data/hardware_efficiency.csv @@ -0,0 +1,5 @@ +year,system,cps_per_dollar,impact,source_id,notes +1939,Zeus II,6.5e-6,Low,kurzweil,Early baseline. +1945,ENIAC,1e-4,Medium,kurzweil,Vacuum tube era. +1971,Intel 4004,10,High,kurzweil,Microprocessor start. +2024,NVIDIA B200,5e11,Transformative,kurzweil,Half trillion cps per dollar order-of-magnitude anchor. diff --git a/energetic-scaling/data/meta.json b/energetic-scaling/data/meta.json index 2b312a4..c781cca 100644 --- a/energetic-scaling/data/meta.json +++ b/energetic-scaling/data/meta.json @@ -1,6 +1,6 @@ { "title": "Energetic Scaling: Biology vs. Technology", - "description": "Dual-panel comparison: biological neural efficiency (neurons/kg vs body mass) and technological compute efficiency (cps/$ vs time).", + "description": "Dual-panel comparison: biological neural efficiency (neurons/kg vs body mass) and technological compute efficiency (cps/$ vs time). 
Clean source contracts are split into biology_neural_scaling.csv, hardware_efficiency.csv, ai_training_flops.csv, and foraging_lht.csv.", "fields": { "Category": "Panel category (Biology, Tech, LHT)", "Group": "Sub-group within category", @@ -14,23 +14,33 @@ "sources": [ { "name": "Herculano-Houzel (2009)", + "id": "herculano_houzel", "url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2776484/", "accessed": "2026-01", "notes": "Neuronal scaling rules, isotropic fractionator method" }, { "name": "Kurzweil (2005, 2012)", + "id": "kurzweil", "url": "https://www.singularity.com/", "accessed": "2026-01", "notes": "Compute price-performance trends (75 quadrillion-fold increase)" }, { "name": "Kleiber's Law", + "id": "kleiber", "url": "https://en.wikipedia.org/wiki/Kleiber%27s_law", "accessed": "2026-01", "notes": "Metabolic scaling (0.75 exponent)" } ], + "split_data_files": [ + "biology_neural_scaling.csv", + "hardware_efficiency.csv", + "ai_training_flops.csv", + "foraging_lht.csv" + ], + "thesis": "Scaling laws create outliers when a system crosses a representational or energetic threshold.", "transformations": "Both panels log-log. Biology: neurons/kg vs body mass. Tech: cps/$ vs year.", "created": "2026-01", "author": "mschwar" diff --git a/energetic-scaling/index.html b/energetic-scaling/index.html index 1f26f74..5992243 100644 --- a/energetic-scaling/index.html +++ b/energetic-scaling/index.html @@ -14,7 +14,7 @@

    Energetic Scaling

    Biology vs. Technology – Neural efficiency vs. compute efficiency

    - + Energetic Scaling @@ -44,6 +44,10 @@

    Dual Panels

    About

Dual-panel log-log comparison showing fundamental scaling rules. Endotherms have ~20× more neurons than reptiles of the same body mass; compute efficiency (cps/$) doubles roughly every year.
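The doubling-rate claim can be sanity-checked against the anchors in the new `hardware_efficiency.csv`: for two `(year, cps_per_dollar)` rows, the implied doubling time is the elapsed years divided by the number of doublings. A minimal sketch using anchor rows from that file:

```python
# Implied doubling time of compute price-performance between anchor rows taken from
# energetic-scaling/data/hardware_efficiency.csv (added in this change).
import math

def doubling_time(year0, cps_per_dollar0, year1, cps_per_dollar1):
    doublings = math.log2(cps_per_dollar1 / cps_per_dollar0)
    return (year1 - year0) / doublings

print(f"{doubling_time(1971, 10, 2024, 5e11):.2f} years/doubling (Intel 4004 -> B200)")
print(f"{doubling_time(1939, 6.5e-6, 2024, 5e11):.2f} years/doubling (1939 baseline -> B200)")
```

With only these sparse anchors the implied doubling time comes out closer to 1.5 years than to one year, which is worth keeping in mind when citing the sentence above.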

    + +

    How to Read This Chart

    +

    The comparison is conceptual, not a shared-unit chart. Biology, hardware efficiency, AI training compute, and foraging energetics now have separate source CSVs in data/, while the chart keeps a comparison-level CSV for display.
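Since every row in the split CSVs carries a `source_id` that is meant to resolve against the `id` values now attached to the sources in `meta.json`, a short cross-check can surface rows whose citation still needs work (for example the `source_review_needed` marker on the Grok-4 estimate). A sketch assuming the `split_data_files` list and meta.json layout shown in this diff:

```python
# Cross-check source_id values in the split CSVs against the source ids in meta.json.
import csv
import json
from pathlib import Path

DATA_DIR = Path("energetic-scaling/data")

def unresolved_source_ids():
    meta = json.loads((DATA_DIR / "meta.json").read_text())
    known = {src["id"] for src in meta["sources"] if "id" in src}
    unresolved = {}
    for name in meta.get("split_data_files", []):
        with open(DATA_DIR / name, newline="") as fh:
            for row in csv.DictReader(fh):
                sid = (row.get("source_id") or "").strip()
                if sid and sid not in known:
                    unresolved.setdefault(name, set()).add(sid)
    return unresolved

if __name__ == "__main__":
    for name, ids in unresolved_source_ids().items():
        print(f"{name}: source ids not in meta.json -> {sorted(ids)}")
```

As of this change it would flag `epoch` and `kaplan_charnov`, which appear in the CSVs but not yet in the meta.json source list.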

    +

    Thesis: Scaling laws create outliers when a system crosses a representational or energetic threshold.

diff --git a/energetic-scaling/output/energetic_scaling.png b/energetic-scaling/output/energetic_scaling.png
index e44801a..8fcfd5e 100644
Binary files a/energetic-scaling/output/energetic_scaling.png and b/energetic-scaling/output/energetic_scaling.png differ
diff --git a/energetic-scaling/output/energetic_scaling.svg b/energetic-scaling/output/energetic_scaling.svg
index 3fe0dc0..e6967bc 100644
--- a/energetic-scaling/output/energetic_scaling.svg
+++ b/energetic-scaling/output/energetic_scaling.svg
[Regenerated Matplotlib SVG diff omitted for readability: creation timestamp 2026-04-24T11:58:36 -> 2026-04-24T12:52:07, Matplotlib v3.10.8 -> v3.10.9, re-randomized clip-path IDs, and the associated path/element churn from re-export.]
diff --git a/energetic-scaling/output/energetic_scaling_highres.png b/energetic-scaling/output/energetic_scaling_highres.png
index d690173..7edf0b0 100644
Binary files a/energetic-scaling/output/energetic_scaling_highres.png and b/energetic-scaling/output/energetic_scaling_highres.png differ
diff --git a/energetic-scaling/output/energetic_scaling_interactive.html b/energetic-scaling/output/energetic_scaling_interactive.html
index 9781c5c..040bc78 100644
--- a/energetic-scaling/output/energetic_scaling_interactive.html
+++ b/energetic-scaling/output/energetic_scaling_interactive.html
@@ -1,7 +1,7 @@
-
    -
    +
    +
    \ No newline at end of file diff --git a/energy-leverage-per-person/index.html b/energy-leverage-per-person/index.html index a7020eb..b88d632 100644 --- a/energy-leverage-per-person/index.html +++ b/energy-leverage-per-person/index.html @@ -18,7 +18,7 @@

    Energy Leverage per Person

    From ~2× to ~17× metabolic baseline (foragers → modern)

    - + Energy Leverage per Person @@ -50,6 +50,9 @@

    Era Progression

    About

    Two lines on log scale: total energy (orange) and external energy (blue), with metabolic baseline reference (~114 W). Shows how fossil fuels and electrification multiplied human energy command.

    + +

    How to Read This Chart

    +

    The year 2000 point is a plotted anchor for the post-1950 modern average, not a single-year global measurement. Earlier labels are period anchors from the source synthesis.

diff --git a/energy-leverage-per-person/output/energy_leverage.png b/energy-leverage-per-person/output/energy_leverage.png
index fe0fe2e..519cc4d 100644
Binary files a/energy-leverage-per-person/output/energy_leverage.png and b/energy-leverage-per-person/output/energy_leverage.png differ
diff --git a/energy-leverage-per-person/output/energy_leverage.svg b/energy-leverage-per-person/output/energy_leverage.svg
index 1932126..92b3972 100644
--- a/energy-leverage-per-person/output/energy_leverage.svg
+++ b/energy-leverage-per-person/output/energy_leverage.svg
[SVG hunks condensed: regeneration updates the metadata timestamp (2026-04-24T11:58:41 → 12:52:13), the generator stamp (Matplotlib v3.10.8 → v3.10.9), and the auto-generated clip-path IDs (p3d2e38d0f5 → p2526d9e475) in machine-generated path markup.]
diff --git a/energy-leverage-per-person/output/energy_leverage_highres.png b/energy-leverage-per-person/output/energy_leverage_highres.png
index a4dca94..5323edf 100644
Binary files a/energy-leverage-per-person/output/energy_leverage_highres.png and b/energy-leverage-per-person/output/energy_leverage_highres.png differ
diff --git a/energy-leverage-per-person/output/energy_leverage_interactive.html b/energy-leverage-per-person/output/energy_leverage_interactive.html
index e86d5eb..c3da078 100644
--- a/energy-leverage-per-person/output/energy_leverage_interactive.html
+++ b/energy-leverage-per-person/output/energy_leverage_interactive.html
@@ -1,7 +1,7 @@
[Plotly export hunk condensed: only the auto-generated chart container ID changes on regeneration.]
diff --git a/index.html b/index.html
index f3b1b7f..9512cf0 100644
--- a/index.html
+++ b/index.html
@@ -3,215 +3,254 @@
- Plots – Interactive Timelines of Exponential Tech Progress
+ Exponential Progress Atlas

    Plots

    -

    Six interactive timelines exploring exponential tech progress: compute power, adoption speed, energetic efficiency, civilization scaling, energy leverage, and LLM model sizes.

    + +
    +

    Manifest-driven data visualization atlas

    +

    Exponential Progress Atlas

    +

    Interactive timelines showing how compute, energy, coordination, memory, and adoption compound into civilizational acceleration.

    + +
    - +
    +

    Atlas Thesis

    +
    + Energy + Compute + Memory + Coordination + Adoption + Capability Acceleration +
    +

    Each chart is an audit surface: historical observations, estimates, proxies, and speculative projections are labeled so the story remains readable without hiding uncertainty.

    +
    + +
    +

    How to Read These Charts

    +

    Log scales turn multiplicative change into visible slopes. Circle markers indicate observed or estimated history; alternate markers and badges identify proxies, forecasts, and source-review needs. Use the interactive links for hover text and the data links to inspect source fields directly.
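One line of illustrative arithmetic (an editorial example, not taken from the page data) shows why: a series that doubles every T years satisfies

$$\log_{10} y(t) = \log_{10} y_0 + \frac{\log_{10} 2}{T}\, t,$$

so a constant doubling time appears on a log₁₀ axis as a straight line with slope log₁₀(2)/T ≈ 0.301/T per year.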

    +
    -
    -
    - - AI Compute Timeline +
    +
    + + AI Compute Timeline chart preview for the Exponential Progress Atlas
    -

    1. AI Compute Timeline

    -
    10²⁷FLOPs frontier 2026
    -

    Training FLOPs from vacuum tubes to frontier models. Exponential growth.

    +
    1. AI Compute
    +

    AI Compute Timeline

    +
Historical · Estimated · Speculative
    +
10^27+ FLOPs · Data confidence: Mixed
    +

    Training compute from early electronic computing to frontier AI, with proxies and speculative projections labeled separately.

    -
    + -
    - - Adoption Timeline +
    + + Adoption Timeline chart preview for the Exponential Progress Atlas
    -

    2. Adoption Timeline

    -
    60×faster adoption since 1969
    -

    Time to ~50M users: years → days. Exponential compression.

    +
    2. Adoption
    +

    Adoption Timeline

    +
Historical · Estimated · Speculative
    +
60x faster · Data confidence: Mixed
    +

    Time-to-scale proxies across computing, connectivity, mobile, cloud, and AI paradigms.

    -
    + -
    - - Energetic Scaling +
    + + Energetic Scaling chart preview for the Exponential Progress Atlas
    -

    3. Energetic Scaling

    -
    10⁶×efficiency gain per dollar
    -

    Neurons/kg vs cps/$. Power laws – humans & AI as outliers.

    +
    3. Energetic
    +

    Energetic Scaling

    +
Historical · Estimated · Speculative
    +
10^6x+ efficiency · Data confidence: Mixed
    +

    Biology, hardware efficiency, AI training compute, and foraging energetics compared with clean source datasets.

    -
    + -
    - - Civilization Scaling +
    + + Civilization Scaling chart preview for the Exponential Progress Atlas
    -

    4. Civilization Scaling

    -
    1M yrsof infrastructure stacking
    -

    Energy/Coord/Memory/Repl over 1M years. Stacking infrastructure.

    +
    4. Civilization
    +

    Civilization Scaling

    +
Historical · Estimated · Speculative
    +
5 lanes · Data confidence: Mixed
    +

    Five civilizational lanes: energy, coordination, memory, replication, and latency over log-time.

    -
    + -
    - - Energy Leverage +
    + + Energy Leverage chart preview for the Exponential Progress Atlas
    -

    5. Energy Leverage

    -
    17×metabolic baseline 2024
    -

    From ~2× to ~17× metabolic baseline. Energy multiplier growth.

    +
    5. Energy Leverage
    +

    Energy Leverage

    +
Historical · High confidence
    +
17x body energy · Data confidence: High
    +

    Per-person energy command relative to the metabolic baseline, with period anchors labeled explicitly.

    -
    + -
    - - AI Benchmark Progress +
    + + Model Sizes chart preview for the Exponential Progress Atlas
    -

    7. AI Benchmark Progress

    -
    benchmarks crossed human-level
    -

    MMLU, HumanEval, SWE-bench, ARC-AGI crossing human baselines. Capability compression.

    +
    6. Model Sizes
    +

    Model Sizes

    +
Estimated · Speculative · Needs source review
    +
1.5B -> 5T params · Data confidence: Speculative
    +

    Language model parameter counts over time, separating disclosed counts from estimates and unreleased projections.

    + +
    +
    + +
    + + AI Benchmark Progress chart preview for the Exponential Progress Atlas + +
    +
    7. AI Benchmarks
    +

    AI Benchmark Progress

    +
Historical · Estimated · Speculative
    +
4 benchmark lanes · Data confidence: Mixed
    +

    Benchmark progress against human baselines across knowledge, coding, software engineering, and reasoning tasks.

    -
    + -
    - - Cost to Train +
    + + Cost to Train chart preview for the Exponential Progress Atlas
    -

    8. Cost-to-Train Frontier

    -
    10²¹×cheaper per FLOP since 2012
    -

    FLOPs explode but $/FLOP collapses. The efficiency paradox.

    +
    8. Cost to Train
    +

    Cost to Train

    +
Historical · Estimated · Speculative
    +
$/FLOP collapse · Data confidence: Mixed
    +

    Training cost, FLOPs, and capability over time, showing the efficiency paradox at the frontier.

    -
    + -
    - -
    - Unified Dashboard → -
    +
    + +
    -

    9. Unified Dashboard

    -
    6 lanessynchronized timeline
    -

    All plots on one singularity view. Zoom from 1M years ago to today.

    +
    9. Dashboard
    +

    Unified Dashboard

    +
Historical · Estimated · Speculative
    +
9 atlas entries · Data confidence: Mixed
    +

    A synchronized overview of the atlas inventory using the same manifest as the homepage, README, build, and validator.

    -
    -
    + + -
    -

    Why These Plots?

    +
    +

    Canonical Inventory

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + +
Plot | Shows | Trend
Compute | Training FLOPs (10⁰ → 10²⁷) | Exponential growth
Adoption | Time to 50M users | Exponential compression
Energetic | Neurons/kg & cps/$ | Power laws (log-log)
Civilization | Energy/Coord/Memory/Repl | Stacking infrastructure
Energy Leverage | Watts/person vs metabolic | ~17× body energy
Model Sizes | LLM params (1.5B → 5T) | Exponential growth
# | Entry | Scope | Confidence
1 | AI Compute Timeline | Training compute from early electronic computing to frontier AI, with proxies and speculative projections labeled separately. | mixed
2 | Adoption Timeline | Time-to-scale proxies across computing, connectivity, mobile, cloud, and AI paradigms. | mixed
3 | Energetic Scaling | Biology, hardware efficiency, AI training compute, and foraging energetics compared with clean source datasets. | mixed
4 | Civilization Scaling | Five civilizational lanes: energy, coordination, memory, replication, and latency over log-time. | mixed
5 | Energy Leverage | Per-person energy command relative to the metabolic baseline, with period anchors labeled explicitly. | high
6 | Model Sizes | Language model parameter counts over time, separating disclosed counts from estimates and unreleased projections. | speculative
7 | AI Benchmark Progress | Benchmark progress against human baselines across knowledge, coding, software engineering, and reasoning tasks. | mixed
8 | Cost to Train | Training cost, FLOPs, and capability over time, showing the efficiency paradox at the frontier. | mixed
9 | Unified Dashboard | A synchronized overview of the atlas inventory using the same manifest as the homepage, README, build, and validator. | mixed
    -

    Together they show how scaling laws drive exponential progress – in biology, hardware, AI, and energy. Infrastructure layers stack to enable each leap.

    -

    Inspired by Kurzweil, Epoch AI, Statista, Herculano-Houzel, Smil.

    -
    +

    - Sources: See each plot's data/meta.json | - Built with Plotly, Matplotlib & D3.js | + Inventory: plots_manifest.json | + Built with Plotly, Matplotlib, D3.js | GitHub | MIT License

    diff --git a/model-sizes/README.md b/model-sizes/README.md new file mode 100644 index 0000000..01ad503 --- /dev/null +++ b/model-sizes/README.md @@ -0,0 +1,17 @@ +# Model Sizes + +Language model parameter counts from 2019 to 2026. + +The chart separates disclosed parameter counts from estimates and unreleased projections. Treat the post-2020 frontier values as source-review targets, not confirmed disclosures. + +## Outputs + +- [Interactive](output/model_sizes_interactive.html) +- [PNG](output/model_sizes_highres.png) +- [SVG](output/model_sizes.svg) +- [Data](data/llm_model_sizes.csv) +- [Metadata](data/meta.json) + +## Data Confidence + +Confidence is speculative overall because no frontier lab has consistently disclosed official parameter counts since GPT-3. Disclosed open-weight or paper-backed entries are stronger than rumored frontier estimates. diff --git a/model-sizes/index.html b/model-sizes/index.html index 75e7c58..1e0cf7d 100644 --- a/model-sizes/index.html +++ b/model-sizes/index.html @@ -14,12 +14,14 @@

    LLM Model Sizes Over Time

    Parameter counts from 2019 – 2026  ·  Log scale  ·  38 models across 11 organizations

    - + LLM Model Sizes Over Time @@ -53,6 +55,9 @@

    About

  • Era bands: Pre-ChatGPT / Foundation Models / Open Weights / Reasoning & Agents
  • Caveat: No frontier lab has disclosed official parameter counts since GPT-3 (2020). All 2023+ values are community estimates derived from leaks, compute analysis, and scaling projections.

    + +

    How to Read This Chart

    +

    Circles are disclosed counts. Diamonds are estimates. Open markers indicate unreleased or not publicly available models. Treat frontier values after GPT-3 as estimates unless the data row says otherwise.
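As an illustration of that convention, a minimal sketch of the marker mapping, assuming the boolean `estimated` and `unreleased` columns used by the generator scripts elsewhere in this diff:

```python
def marker_for(row: dict) -> str:
    """Map a model row's disclosure flags to a Plotly marker symbol.

    Illustrative sketch only; assumes boolean `estimated` / `unreleased`
    fields as in model-sizes/data/llm_model_sizes.csv.
    """
    if row.get("unreleased"):
        return "diamond-open"  # announced or rumored, not publicly released
    if row.get("estimated"):
        return "diamond"       # community estimate rather than a disclosed count
    return "circle"            # officially disclosed parameter count
```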

diff --git a/model-sizes/output/model_sizes.png b/model-sizes/output/model_sizes.png
new file mode 100644
index 0000000..e6bc024
Binary files /dev/null and b/model-sizes/output/model_sizes.png differ
diff --git a/model-sizes/output/model_sizes.svg b/model-sizes/output/model_sizes.svg
new file mode 100644
index 0000000..ac0d718
--- /dev/null
+++ b/model-sizes/output/model_sizes.svg
@@ -0,0 +1,3218 @@
[New Matplotlib v3.10.9 SVG output (timestamp 2026-04-24T12:52:15); 3,218 lines of generated markup condensed.]
diff --git a/model-sizes/output/model_sizes_highres.png b/model-sizes/output/model_sizes_highres.png
new file mode 100644
index 0000000..d82b09e
Binary files /dev/null and b/model-sizes/output/model_sizes_highres.png differ
diff --git a/model-sizes/output/model_sizes_interactive.html b/model-sizes/output/model_sizes_interactive.html
index f04c55c..7fcb5e5 100644
--- a/model-sizes/output/model_sizes_interactive.html
+++ b/model-sizes/output/model_sizes_interactive.html
@@ -1,666 +1,7 @@
[Hunk condensed: the old hand-built page head (title "LLM Parameter Sizes Over Time", styles, scripts) is removed in favor of the Plotly export header; the removed body markup follows.]

    Large Language Model Scale Over Time

    -

    Parameter counts from 2019 – 2026  ·  Log scale  ·  Dashed circles = estimated  ·  ✦ = not yet released

[Removed lines condensed: the remainder of the old page's markup, including legend entries "Estimated / rumored parameter count" and "✦ Not yet publicly released".]
    +
    - + \ No newline at end of file diff --git a/model-sizes/src/model_sizes.py b/model-sizes/src/model_sizes.py new file mode 100644 index 0000000..e0d2647 --- /dev/null +++ b/model-sizes/src/model_sizes.py @@ -0,0 +1,110 @@ +#!/usr/bin/env python3 +"""Static chart for LLM model size growth.""" + +from __future__ import annotations + +import os + +import matplotlib.pyplot as plt +import pandas as pd + + +ORG_COLORS = { + "OpenAI": "#2563eb", + "Google": "#16a34a", + "Meta": "#7c3aed", + "Anthropic": "#dc2626", + "Mistral": "#ea580c", + "DeepSeek": "#0891b2", +} + + +def load_data() -> pd.DataFrame: + script_dir = os.path.dirname(os.path.abspath(__file__)) + csv_path = os.path.join(os.path.dirname(script_dir), "data", "llm_model_sizes.csv") + df = pd.read_csv(csv_path, parse_dates=["date"]) + df["year"] = df["date"].dt.year + (df["date"].dt.dayofyear - 1) / 365.25 + return df.sort_values("date") + + +def create_chart(df: pd.DataFrame): + fig, ax = plt.subplots(figsize=(11, 6.5), dpi=150) + fig.patch.set_facecolor("white") + ax.set_facecolor("#FAFAFA") + + eras = [ + (2019, 2022.9, "#E8F4FD", "Pre-ChatGPT"), + (2022.9, 2024.2, "#F0E8FF", "Foundation models"), + (2024.2, 2025.1, "#E8F8E8", "Open weights"), + (2025.1, 2026.5, "#FFE8E8", "Reasoning & agents"), + ] + for start, end, color, label in eras: + ax.axvspan(start, end, color=color, alpha=0.45, zorder=0) + ax.text((start + end) / 2, 7500, label, ha="center", va="top", fontsize=8, color="#555") + + for _, row in df.iterrows(): + color = ORG_COLORS.get(row["org"], "#6b7280") + marker = "D" if bool(row["estimated"]) else "o" + alpha = 0.55 if bool(row["estimated"]) else 0.9 + edge = "#111827" if bool(row["unreleased"]) else "white" + ax.scatter(row["year"], row["params_billions"], s=95, c=color, marker=marker, + alpha=alpha, edgecolors=edge, linewidths=1.4, zorder=3) + + labels = { + "GPT-2", "GPT-3", "ChatGPT", "GPT-4", "LLaMA 3.1 405B", + "DeepSeek R1", "Claude Opus 4.7", "GPT Spud" + } + for _, row in df.iterrows(): + if row["name"] in labels: + ax.annotate( + row["name"], + xy=(row["year"], row["params_billions"]), + xytext=(4, 8), + textcoords="offset points", + fontsize=8, + arrowprops=dict(arrowstyle="-", color="#777", lw=0.5), + ) + + ax.set_yscale("log") + ax.set_xlim(2018.8, 2026.55) + ax.set_ylim(1, 10000) + ax.set_xlabel("Year", fontweight="bold") + ax.set_ylabel("Parameters (billions, log scale)", fontweight="bold") + ax.set_title("LLM Model Sizes Over Time\nDisclosed counts vs. estimates and unreleased projections", fontweight="bold", pad=14) + ax.grid(True, which="major", axis="y", alpha=0.25) + ax.grid(True, which="major", axis="x", alpha=0.15) + + handles = [ + plt.Line2D([0], [0], marker="o", color="w", markerfacecolor="#6b7280", markeredgecolor="white", markersize=8, label="Disclosed"), + plt.Line2D([0], [0], marker="D", color="w", markerfacecolor="#6b7280", markeredgecolor="white", markersize=8, label="Estimated"), + plt.Line2D([0], [0], marker="D", color="w", markerfacecolor="#6b7280", markeredgecolor="#111827", markersize=8, label="Unreleased"), + ] + ax.legend(handles=handles, loc="lower right", framealpha=0.9) + ax.text( + 0.01, + 0.02, + "Footnote: frontier labs generally stopped disclosing parameter counts after GPT-3. 
2023+ points are estimates unless marked otherwise.", + transform=ax.transAxes, + fontsize=8, + color="#555", + bbox=dict(boxstyle="round,pad=0.4", facecolor="white", edgecolor="#ddd", alpha=0.9), + ) + + plt.tight_layout() + return fig + + +def main() -> None: + script_dir = os.path.dirname(os.path.abspath(__file__)) + output_dir = os.path.join(os.path.dirname(script_dir), "output") + os.makedirs(output_dir, exist_ok=True) + + fig = create_chart(load_data()) + fig.savefig(os.path.join(output_dir, "model_sizes.png"), dpi=300, bbox_inches="tight", facecolor="white") + fig.savefig(os.path.join(output_dir, "model_sizes_highres.png"), dpi=400, bbox_inches="tight", facecolor="white") + fig.savefig(os.path.join(output_dir, "model_sizes.svg"), format="svg", bbox_inches="tight", facecolor="white") + print("Saved model-sizes static outputs") + + +if __name__ == "__main__": + main() diff --git a/model-sizes/src/model_sizes_plotly.py b/model-sizes/src/model_sizes_plotly.py new file mode 100644 index 0000000..40e6a6b --- /dev/null +++ b/model-sizes/src/model_sizes_plotly.py @@ -0,0 +1,96 @@ +#!/usr/bin/env python3 +"""Interactive Plotly chart for LLM model size growth.""" + +from __future__ import annotations + +import os + +import pandas as pd +import plotly.graph_objects as go + + +ORG_COLORS = { + "OpenAI": "#2563eb", + "Google": "#16a34a", + "Meta": "#7c3aed", + "Anthropic": "#dc2626", + "Mistral": "#ea580c", + "DeepSeek": "#0891b2", +} + + +def load_data() -> pd.DataFrame: + script_dir = os.path.dirname(os.path.abspath(__file__)) + csv_path = os.path.join(os.path.dirname(script_dir), "data", "llm_model_sizes.csv") + return pd.read_csv(csv_path, parse_dates=["date"]).sort_values("date") + + +def create_chart(df: pd.DataFrame) -> go.Figure: + fig = go.Figure() + for org in df["org"].unique(): + org_df = df[df["org"] == org] + symbols = [ + "diamond-open" if row.unreleased else ("diamond" if row.estimated else "circle") + for row in org_df.itertuples() + ] + hover = [ + f"{row.name}
    Date: {row.date.date()}
    Org: {row.org}
    " + f"Params: {row.params_billions:g}B
    Estimated: {row.estimated}
    " + f"Unreleased: {row.unreleased}
    {row.note}" + for row in org_df.itertuples() + ] + fig.add_trace(go.Scatter( + x=org_df["date"], + y=org_df["params_billions"], + mode="markers", + marker=dict(size=12, color=ORG_COLORS.get(org, "#6b7280"), symbol=symbols, line=dict(width=1, color="white")), + name=org, + text=hover, + hoverinfo="text", + )) + + for start, end, label, color in [ + ("2019-01-01", "2022-11-30", "Pre-ChatGPT", "rgba(96,165,250,0.10)"), + ("2022-11-30", "2024-04-01", "Foundation models", "rgba(167,139,250,0.10)"), + ("2024-04-01", "2025-01-01", "Open weights", "rgba(34,197,94,0.10)"), + ("2025-01-01", "2026-07-01", "Reasoning & agents", "rgba(248,113,113,0.10)"), + ]: + x0 = pd.to_datetime(start) + x1 = pd.to_datetime(end) + fig.add_vrect(x0=x0, x1=x1, fillcolor=color, line_width=0) + fig.add_annotation(x=x0 + (x1 - x0) / 2, y=8000, text=label, showarrow=False, font=dict(size=10, color="#666")) + + fig.update_layout( + title="LLM Model Sizes Over Time
    Disclosed counts vs. estimates and unreleased projections", + xaxis_title="Release or projected date", + yaxis=dict(title="Parameters (billions, log scale)", type="log"), + legend=dict(orientation="h", y=1.02, x=0.5, xanchor="center", yanchor="bottom"), + plot_bgcolor="#FAFAFA", + paper_bgcolor="white", + hovermode="closest", + autosize=True, + margin=dict(t=110, b=80, l=70, r=40), + ) + fig.add_annotation( + text="Footnote: 2023+ frontier parameter counts are often estimates. Open markers indicate unreleased or not publicly available models.", + xref="paper", + yref="paper", + x=0.5, + y=-0.16, + showarrow=False, + font=dict(size=10, color="#666"), + ) + return fig + + +def main() -> None: + script_dir = os.path.dirname(os.path.abspath(__file__)) + output_dir = os.path.join(os.path.dirname(script_dir), "output") + os.makedirs(output_dir, exist_ok=True) + fig = create_chart(load_data()) + fig.write_html(os.path.join(output_dir, "model_sizes_interactive.html"), include_plotlyjs="cdn") + print("Saved model_sizes_interactive.html") + + +if __name__ == "__main__": + main() diff --git a/plots_manifest.json b/plots_manifest.json new file mode 100644 index 0000000..f4d50f5 --- /dev/null +++ b/plots_manifest.json @@ -0,0 +1,155 @@ +[ + { + "id": "ai-compute-timeline", + "kind": "plot", + "order": 1, + "status": "published", + "title": "AI Compute Timeline", + "short_title": "AI Compute", + "description": "Training compute from early electronic computing to frontier AI, with proxies and speculative projections labeled separately.", + "hero_stat": "10^27+ FLOPs", + "interactive": "ai-compute-timeline/output/ai_compute_timeline_interactive.html", + "png": "ai-compute-timeline/output/ai_compute_timeline_highres.png", + "svg": "ai-compute-timeline/output/ai_compute_timeline.svg", + "data": "ai-compute-timeline/data/ai_milestones.csv", + "metadata": "ai-compute-timeline/data/meta.json", + "readme": "ai-compute-timeline/README.md", + "confidence": "mixed" + }, + { + "id": "adoption-timeline", + "kind": "plot", + "order": 2, + "status": "published", + "title": "Adoption Timeline", + "short_title": "Adoption", + "description": "Time-to-scale proxies across computing, connectivity, mobile, cloud, and AI paradigms.", + "hero_stat": "60x faster", + "interactive": "adoption-timeline/output/adoption_timeline_interactive.html", + "png": "adoption-timeline/output/adoption_timeline_highres.png", + "svg": "adoption-timeline/output/adoption_timeline.svg", + "data": "adoption-timeline/data/tech_adoption.csv", + "metadata": "adoption-timeline/data/meta.json", + "readme": "adoption-timeline/README.md", + "confidence": "mixed" + }, + { + "id": "energetic-scaling", + "kind": "plot", + "order": 3, + "status": "published", + "title": "Energetic Scaling", + "short_title": "Energetic", + "description": "Biology, hardware efficiency, AI training compute, and foraging energetics compared with clean source datasets.", + "hero_stat": "10^6x+ efficiency", + "interactive": "energetic-scaling/output/energetic_scaling_interactive.html", + "png": "energetic-scaling/output/energetic_scaling_highres.png", + "svg": "energetic-scaling/output/energetic_scaling.svg", + "data": "energetic-scaling/data/scaling_data.csv", + "metadata": "energetic-scaling/data/meta.json", + "readme": "energetic-scaling/README.md", + "confidence": "mixed" + }, + { + "id": "civilization-scaling", + "kind": "plot", + "order": 4, + "status": "published", + "title": "Civilization Scaling", + "short_title": "Civilization", + "description": "Five civilizational 
lanes: energy, coordination, memory, replication, and latency over log-time.", + "hero_stat": "5 lanes", + "interactive": "civilization-scaling/output/civilization_scaling_interactive.html", + "png": "civilization-scaling/output/civilization_scaling_highres.png", + "svg": "civilization-scaling/output/civilization_scaling.svg", + "data": "civilization-scaling/data/civilization_metrics.csv", + "metadata": "civilization-scaling/data/meta.json", + "readme": "civilization-scaling/README.md", + "confidence": "mixed" + }, + { + "id": "energy-leverage-per-person", + "kind": "plot", + "order": 5, + "status": "published", + "title": "Energy Leverage", + "short_title": "Energy Leverage", + "description": "Per-person energy command relative to the metabolic baseline, with period anchors labeled explicitly.", + "hero_stat": "17x body energy", + "interactive": "energy-leverage-per-person/output/energy_leverage_interactive.html", + "png": "energy-leverage-per-person/output/energy_leverage_highres.png", + "svg": "energy-leverage-per-person/output/energy_leverage.svg", + "data": "energy-leverage-per-person/data/energy_leverage_datapoints.csv", + "metadata": "energy-leverage-per-person/data/meta.json", + "readme": "energy-leverage-per-person/README.md", + "confidence": "high" + }, + { + "id": "model-sizes", + "kind": "plot", + "order": 6, + "status": "published", + "title": "Model Sizes", + "short_title": "Model Sizes", + "description": "Language model parameter counts over time, separating disclosed counts from estimates and unreleased projections.", + "hero_stat": "1.5B -> 5T params", + "interactive": "model-sizes/output/model_sizes_interactive.html", + "png": "model-sizes/output/model_sizes_highres.png", + "svg": "model-sizes/output/model_sizes.svg", + "data": "model-sizes/data/llm_model_sizes.csv", + "metadata": "model-sizes/data/meta.json", + "readme": "model-sizes/README.md", + "confidence": "speculative" + }, + { + "id": "ai-benchmark-progress", + "kind": "plot", + "order": 7, + "status": "published", + "title": "AI Benchmark Progress", + "short_title": "AI Benchmarks", + "description": "Benchmark progress against human baselines across knowledge, coding, software engineering, and reasoning tasks.", + "hero_stat": "4 benchmark lanes", + "interactive": "ai-benchmark-progress/output/benchmark_progress_interactive.html", + "png": "ai-benchmark-progress/output/benchmark_progress_highres.png", + "svg": "ai-benchmark-progress/output/benchmark_progress.svg", + "data": "ai-benchmark-progress/data/benchmark_data.csv", + "metadata": "ai-benchmark-progress/data/meta.json", + "readme": "ai-benchmark-progress/README.md", + "confidence": "mixed" + }, + { + "id": "cost-to-train", + "kind": "plot", + "order": 8, + "status": "published", + "title": "Cost to Train", + "short_title": "Cost to Train", + "description": "Training cost, FLOPs, and capability over time, showing the efficiency paradox at the frontier.", + "hero_stat": "$/FLOP collapse", + "interactive": "cost-to-train/output/cost_to_train_interactive.html", + "png": "cost-to-train/output/cost_to_train_highres.png", + "svg": "cost-to-train/output/cost_to_train.svg", + "data": "cost-to-train/data/training_costs.csv", + "metadata": "cost-to-train/data/meta.json", + "readme": "cost-to-train/README.md", + "confidence": "mixed" + }, + { + "id": "dashboard", + "kind": "dashboard", + "order": 9, + "status": "published", + "title": "Unified Dashboard", + "short_title": "Dashboard", + "description": "A synchronized overview of the atlas inventory using the same 
manifest as the homepage, README, build, and validator.", + "hero_stat": "9 atlas entries", + "interactive": "dashboard/index.html", + "png": "", + "svg": "", + "data": "plots_manifest.json", + "metadata": "plots_manifest.json", + "readme": "README.md", + "confidence": "mixed" + } +] diff --git a/scripts/check_accessibility_static.py b/scripts/check_accessibility_static.py new file mode 100644 index 0000000..9a3f46c --- /dev/null +++ b/scripts/check_accessibility_static.py @@ -0,0 +1,75 @@ +#!/usr/bin/env python3 +"""Static accessibility checks for generated pages.""" + +from __future__ import annotations + +import sys +from html.parser import HTMLParser +from pathlib import Path + +from manifest_utils import ROOT + + +SKIP_DIRS = {".git", ".venv", ".pytest_cache"} + + +class AccessibilityParser(HTMLParser): + def __init__(self, file_path: Path) -> None: + super().__init__() + self.file_path = file_path + self.errors: list[str] = [] + self._anchor_stack: list[dict] = [] + + def handle_starttag(self, tag: str, attrs: list[tuple[str, str | None]]) -> None: + attr = dict(attrs) + if tag == "img": + alt = attr.get("alt") + if alt is None or not alt.strip(): + self.errors.append(f"image missing alt text: {attr.get('src', '(missing src)')}") + if tag == "a": + self._anchor_stack.append( + { + "href": attr.get("href", ""), + "aria": (attr.get("aria-label") or attr.get("title") or "").strip(), + "text": "", + "has_labeled_image": False, + } + ) + elif self._anchor_stack and tag == "img": + alt = (attr.get("alt") or "").strip() + if alt: + self._anchor_stack[-1]["has_labeled_image"] = True + + def handle_data(self, data: str) -> None: + if self._anchor_stack: + self._anchor_stack[-1]["text"] += data.strip() + + def handle_endtag(self, tag: str) -> None: + if tag != "a" or not self._anchor_stack: + return + anchor = self._anchor_stack.pop() + has_name = bool(anchor["aria"] or anchor["text"].strip() or anchor["has_labeled_image"]) + if anchor["href"] and not has_name: + self.errors.append(f"link missing accessible name: {anchor['href']}") + + +def main() -> None: + errors: list[str] = [] + for path in ROOT.rglob("*.html"): + if any(part in SKIP_DIRS for part in path.parts): + continue + parser = AccessibilityParser(path) + parser.feed(path.read_text(encoding="utf-8", errors="ignore")) + errors.extend(f"{path.relative_to(ROOT)}: {error}" for error in parser.errors) + + if errors: + print("Static accessibility errors:") + for error in errors: + print(f" ERROR: {error}") + sys.exit(1) + + print("Static accessibility checks passed.") + + +if __name__ == "__main__": + main() diff --git a/scripts/check_links.py b/scripts/check_links.py new file mode 100644 index 0000000..f20679f --- /dev/null +++ b/scripts/check_links.py @@ -0,0 +1,83 @@ +#!/usr/bin/env python3 +"""Check local relative links in HTML and Markdown files.""" + +from __future__ import annotations + +import re +import sys +from html.parser import HTMLParser +from pathlib import Path +from urllib.parse import unquote, urlparse + +from manifest_utils import ROOT + + +SKIP_DIRS = {".git", ".venv", ".pytest_cache", ".agent-tasks"} + + +class LinkParser(HTMLParser): + def __init__(self) -> None: + super().__init__() + self.links: list[str] = [] + + def handle_starttag(self, tag: str, attrs: list[tuple[str, str | None]]) -> None: + attr = dict(attrs) + for key in ("href", "src"): + value = attr.get(key) + if value: + self.links.append(value) + + +def _iter_files() -> list[Path]: + paths: list[Path] = [] + for path in ROOT.rglob("*"): + if any(part in 
SKIP_DIRS for part in path.parts): + continue + if path.suffix.lower() in {".html", ".md"}: + paths.append(path) + return paths + + +def _extract_links(path: Path) -> list[str]: + text = path.read_text(encoding="utf-8", errors="ignore") + if path.suffix.lower() == ".html": + parser = LinkParser() + parser.feed(text) + return parser.links + md_links = re.findall(r"!?\[[^\]]*\]\(([^)]+)\)", text) + return md_links + + +def _is_local_link(link: str) -> bool: + parsed = urlparse(link) + return not parsed.scheme and not parsed.netloc and not link.startswith("#") and not link.startswith("mailto:") + + +def _resolve_link(path: Path, link: str) -> Path: + clean = unquote(link.split("#", 1)[0].split("?", 1)[0]) + return (path.parent / clean).resolve() + + +def main() -> None: + errors: list[str] = [] + for path in _iter_files(): + for link in _extract_links(path): + if not _is_local_link(link): + continue + if not link.split("#", 1)[0].split("?", 1)[0]: + continue + target = _resolve_link(path, link) + if not target.exists(): + errors.append(f"{path.relative_to(ROOT)} -> missing {link}") + + if errors: + print("Broken relative links:") + for error in errors: + print(f" ERROR: {error}") + sys.exit(1) + + print("All relative links resolved.") + + +if __name__ == "__main__": + main() diff --git a/scripts/generate_homepage.py b/scripts/generate_homepage.py new file mode 100644 index 0000000..cfa3ad9 --- /dev/null +++ b/scripts/generate_homepage.py @@ -0,0 +1,159 @@ +#!/usr/bin/env python3 +"""Generate the root homepage from plots_manifest.json.""" + +from __future__ import annotations + +from html import escape +from pathlib import Path + +from manifest_utils import ROOT, published_entries + + +BADGES = { + "high": ["Historical", "High confidence"], + "medium": ["Historical", "Estimated"], + "mixed": ["Historical", "Estimated", "Speculative"], + "speculative": ["Estimated", "Speculative", "Needs source review"], +} + + +def _card(entry: dict) -> str: + order = entry["order"] + title = escape(entry["title"]) + description = escape(entry["description"]) + hero_stat = escape(entry["hero_stat"]) + confidence = escape(entry["confidence"].title()) + badges = "".join(f'{escape(badge)}' for badge in BADGES[entry["confidence"]]) + aria = f"Open {title} interactive plot" if entry.get("kind") == "plot" else "Open Unified Dashboard" + + if entry.get("png"): + media = ( + f'' + ) + else: + media = ( + '' + ) + + links = [ + ("Interactive", entry["interactive"]), + ("PNG", entry.get("png")), + ("SVG", entry.get("svg")), + ("Data", entry.get("data")), + ("Metadata", entry.get("metadata")), + ] + links_html = "\n".join( + f' {escape(label)}' + for label, url in links + if url + ) + + return f"""
    + + {media} + +
    +
    {order}. {escape(entry["short_title"])}
    +

    {title}

    +
    {badges}
    +
    {hero_stat}Data confidence: {confidence}
    +

    {description}

    + +
    +
    """ + + +def render_homepage(entries: list[dict]) -> str: + cards = "\n\n".join(_card(entry) for entry in entries) + quick_links = "\n".join( + f' {escape(entry["short_title"])}' + for entry in entries + ) + rows = "\n".join( + f" {entry['order']}{escape(entry['title'])}{escape(entry['description'])}{escape(entry['confidence'])}" + for entry in entries + ) + count = len(entries) + + return f""" + + + + + Exponential Progress Atlas + + + + + +
    +

    Manifest-driven data visualization atlas

    +

    Exponential Progress Atlas

    +

    Interactive timelines showing how compute, energy, coordination, memory, and adoption compound into civilizational acceleration.

    + +
    + +
    +

    Atlas Thesis

    +
    + Energy + Compute + Memory + Coordination + Adoption + Capability Acceleration +
    +

    Each chart is an audit surface: historical observations, estimates, proxies, and speculative projections are labeled so the story remains readable without hiding uncertainty.

    +
    + +
    +

    How to Read These Charts

    +

    Log scales turn multiplicative change into visible slopes. Circle markers indicate observed or estimated history; alternate markers and badges identify proxies, forecasts, and source-review needs. Use the interactive links for hover text and the data links to inspect source fields directly.

    +
    + +
    +{cards} +
    + +
    +

    Canonical Inventory

    + + + + + +{rows} + +
    #EntryScopeConfidence
    +
    + + + + + + +""" + + +def main() -> None: + (ROOT / "index.html").write_text(render_homepage(published_entries(ROOT)), encoding="utf-8") + + +if __name__ == "__main__": + main() diff --git a/scripts/generate_readme_links.py b/scripts/generate_readme_links.py new file mode 100644 index 0000000..fe04671 --- /dev/null +++ b/scripts/generate_readme_links.py @@ -0,0 +1,110 @@ +#!/usr/bin/env python3 +"""Generate README.md from the manifest inventory.""" + +from __future__ import annotations + +from manifest_utils import ROOT, published_entries + + +def _entry_section(entry: dict) -> str: + links = [f"- **Interactive**: [{entry['title']}]({entry['interactive']})"] + if entry.get("png"): + links.append(f"- **Static**: [PNG]({entry['png']}) | [SVG]({entry['svg']})") + links.append(f"- **Data**: [{entry['data']}]({entry['data']})") + links.append(f"- **Metadata**: [{entry['metadata']}]({entry['metadata']})") + if entry.get("readme"): + links.append(f"- **Details**: [{entry['id']}/]({entry['id']}/)") + return f"""## {entry['order']}. {entry['title']} + +{entry['description']} + +Hero stat: **{entry['hero_stat']}**. Data confidence: **{entry['confidence']}**. + +{chr(10).join(links)} +""" + + +def render_readme(entries: list[dict]) -> str: + quick_links = "\n".join( + f"- [{entry['title']}]({entry['interactive']})" for entry in entries + ) + sections = "\n---\n\n".join(_entry_section(entry) for entry in entries) + count = len(entries) + return f"""# Exponential Progress Atlas + +Interactive timelines showing how compute, energy, coordination, memory, and adoption compound into civilizational acceleration. + +The root inventory is [`plots_manifest.json`](plots_manifest.json). Homepage cards, README links, build ordering, dashboard lanes, and validation all read from that manifest. + +## Published Inventory ({count}) + +{quick_links} + +--- + +{sections} + +--- + +## Data Contracts + +- `ai-compute-timeline/data/ai_milestones.csv` uses normalized fields: `year,event,category,value_numeric,value_low,value_high,value_unit,estimate_status,source_id,confidence,display_label,notes`. +- `adoption-timeline/data/tech_adoption.csv` includes `adoption_metric_type`, `comparability_level`, `source_id`, `confidence`, and notes so unlike adoption proxies are not treated as perfectly comparable. +- Energetic Scaling keeps comparison-level data in `scaling_data.csv` and splits clean source contracts into `biology_neural_scaling.csv`, `hardware_efficiency.csv`, `ai_training_flops.csv`, and `foraging_lht.csv`. + +## Repository Structure + +Each plot should follow this structure: + +```text +/ +├── index.html +├── data/ +│ ├── .csv +│ └── meta.json +├── output/ +│ ├── *_interactive.html +│ ├── *_highres.png +│ └── *.svg +├── src/ +│ ├── *.py +│ └── *_plotly.py +└── README.md +``` + +## Development + +```bash +python -m pip install -r requirements.txt +python build_all.py +python scripts/generate_homepage.py +python scripts/generate_readme_links.py +python scripts/validate_all.py +python scripts/check_links.py +python scripts/check_accessibility_static.py +``` + +## Adding a New Plot + +1. Create the standard plot directory structure. +2. Add data, metadata, generator scripts, output paths, and README. +3. Add the entry to `plots_manifest.json` with `status: "draft"` until outputs and sources pass validation. +4. Run the build, generators, validators, link checker, and accessibility checker. +5. Change `status` to `"published"` only when the plot should appear on the homepage and dashboard. 
+ +## Deployment + +GitHub Pages deploys should run the same validation commands in CI before publishing. A failed build, broken relative link, missing alt text, stale output, or manifest mismatch should block deployment. + +## License + +MIT +""" + + +def main() -> None: + (ROOT / "README.md").write_text(render_readme(published_entries(ROOT)), encoding="utf-8") + + +if __name__ == "__main__": + main() diff --git a/scripts/generate_sitemap.py b/scripts/generate_sitemap.py new file mode 100644 index 0000000..2810302 --- /dev/null +++ b/scripts/generate_sitemap.py @@ -0,0 +1,38 @@ +#!/usr/bin/env python3 +"""Generate sitemap.xml from the published manifest inventory.""" + +from __future__ import annotations + +from datetime import date +from html import escape + +from manifest_utils import ROOT, published_entries + + +BASE_URL = "https://mschwar.github.io/plots/" + + +def render_sitemap() -> str: + today = date.today().isoformat() + urls = [("", today), ("plots_manifest.json", today)] + urls.extend((entry["interactive"], today) for entry in published_entries(ROOT)) + entries = "\n".join( + " \n" + f" {escape(BASE_URL + path)}\n" + f" {lastmod}\n" + " " + for path, lastmod in urls + ) + return f""" + +{entries} + +""" + + +def main() -> None: + (ROOT / "sitemap.xml").write_text(render_sitemap(), encoding="utf-8") + + +if __name__ == "__main__": + main() diff --git a/scripts/manifest_utils.py b/scripts/manifest_utils.py new file mode 100644 index 0000000..53d2e66 --- /dev/null +++ b/scripts/manifest_utils.py @@ -0,0 +1,53 @@ +"""Shared helpers for the manifest-driven plots site.""" + +from __future__ import annotations + +import json +from pathlib import Path + + +ROOT = Path(__file__).resolve().parents[1] +MANIFEST_PATH = ROOT / "plots_manifest.json" + +ALLOWED_STATUSES = {"published", "draft", "archived"} +ALLOWED_CONFIDENCE = {"high", "medium", "mixed", "speculative"} +REQUIRED_MANIFEST_FIELDS = { + "id", + "order", + "status", + "title", + "short_title", + "description", + "hero_stat", + "interactive", + "png", + "svg", + "data", + "metadata", + "readme", + "confidence", +} + + +def load_manifest(root: Path | None = None) -> list[dict]: + """Load and sort the root plots manifest.""" + manifest_path = (root or ROOT) / "plots_manifest.json" + with manifest_path.open("r", encoding="utf-8") as f: + entries = json.load(f) + return sorted(entries, key=lambda entry: entry["order"]) + + +def published_entries(root: Path | None = None) -> list[dict]: + """Return published manifest entries in display order.""" + return [entry for entry in load_manifest(root) if entry["status"] == "published"] + + +def plot_entries(root: Path | None = None, *, published_only: bool = False) -> list[dict]: + """Return plot entries, optionally excluding drafts and archived entries.""" + entries = published_entries(root) if published_only else load_manifest(root) + return [entry for entry in entries if entry.get("kind", "plot") == "plot"] + + +def entry_path(entry: dict, key: str, root: Path | None = None) -> Path: + """Resolve a manifest path field to an absolute path.""" + return (root or ROOT) / entry[key] diff --git a/scripts/validate_all.py b/scripts/validate_all.py index 1dd8170..0ab48dd 100644 --- a/scripts/validate_all.py +++ b/scripts/validate_all.py @@ -1,190 +1,391 @@ #!/usr/bin/env python3 -""" -Validate all plots: check meta.json fields match CSV headers, required files exist. 
-""" +"""Validate manifest, plot files, data schemas, and generated site surfaces.""" + +from __future__ import annotations -import os -import json import csv +import json +import re import sys +from html.parser import HTMLParser +from pathlib import Path +from urllib.parse import urlparse -PLOTS_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) +from manifest_utils import ( + ALLOWED_CONFIDENCE, + ALLOWED_STATUSES, + REQUIRED_MANIFEST_FIELDS, + ROOT, + entry_path, + load_manifest, + plot_entries, + published_entries, +) -PLOTS = [ - { - 'name': 'ai-compute-timeline', - 'csv': 'data/ai_milestones.csv', - 'meta': 'data/meta.json', - 'required_files': [ - 'output/ai_compute_timeline_interactive.html', - 'output/ai_compute_timeline_highres.png', - 'output/ai_compute_timeline.svg', - 'index.html', - ] - }, - { - 'name': 'adoption-timeline', - 'csv': 'data/tech_adoption.csv', - 'meta': 'data/meta.json', - 'required_files': [ - 'output/adoption_timeline_interactive.html', - 'output/adoption_timeline_highres.png', - 'output/adoption_timeline.svg', - 'index.html', - ] - }, - { - 'name': 'energetic-scaling', - 'csv': 'data/scaling_data.csv', - 'meta': 'data/meta.json', - 'required_files': [ - 'output/energetic_scaling_interactive.html', - 'output/energetic_scaling_highres.png', - 'output/energetic_scaling.svg', - 'index.html', - ] - }, - { - 'name': 'civilization-scaling', - 'csv': 'data/civilization_metrics.csv', - 'meta': 'data/meta.json', - 'required_files': [ - 'output/civilization_scaling_interactive.html', - 'output/civilization_scaling_highres.png', - 'output/civilization_scaling.svg', - 'index.html', - ] - }, - { - 'name': 'energy-leverage-per-person', - 'csv': 'data/energy_leverage_datapoints.csv', - 'meta': 'data/meta.json', - 'required_files': [ - 'output/energy_leverage_interactive.html', - 'output/energy_leverage_highres.png', - 'output/energy_leverage.svg', - 'index.html', - ] - }, - { - 'name': 'ai-benchmark-progress', - 'csv': 'data/benchmark_data.csv', - 'meta': 'data/meta.json', - 'required_files': [ - 'output/benchmark_progress_interactive.html', - 'output/benchmark_progress_highres.png', - 'output/benchmark_progress.svg', - 'index.html', - ] + +PLOTS_DIR = str(ROOT) + +SCHEMA_REQUIREMENTS = { + "ai-compute-timeline": { + "required": { + "year", + "event", + "category", + "value_numeric", + "value_low", + "value_high", + "value_unit", + "estimate_status", + "source_id", + "confidence", + "display_label", + "notes", + }, + "numeric": {"year", "value_numeric", "value_low", "value_high"}, + "year": {"year"}, }, - { - 'name': 'cost-to-train', - 'csv': 'data/training_costs.csv', - 'meta': 'data/meta.json', - 'required_files': [ - 'output/cost_to_train_interactive.html', - 'output/cost_to_train_highres.png', - 'output/cost_to_train.svg', - 'index.html', - ] + "adoption-timeline": { + "required": { + "Year", + "Event", + "Category", + "Days_to_Adoption", + "Impact", + "adoption_metric_type", + "comparability_level", + "source_id", + "confidence", + "comparability_notes", + "notes", + }, + "numeric": {"Year", "Days_to_Adoption"}, + "year": {"Year"}, }, -] +} +GENERIC_NUMERIC_PATTERNS = ( + "year", + "years_ago", + "metric_value", + "days_to_adoption", + "params_billions", + "score", + "baseline", + "flops", + "cost_million_usd", + "dollar_per_flop", + "capability_score", + "efficiency_gain", + "multiple_vs_metabolic", + "watts", + "energy_total", + "energy_external", +) -def validate_plot(plot_config): - """Validate a single plot directory.""" - name = plot_config['name'] 
- plot_dir = os.path.join(PLOTS_DIR, name) - errors = [] - warnings = [] - # Check directory exists - if not os.path.isdir(plot_dir): - errors.append(f"Directory not found: {name}/") - return errors, warnings +class ImageAltParser(HTMLParser): + def __init__(self) -> None: + super().__init__() + self.errors: list[str] = [] - # Check required files - for req_file in plot_config['required_files']: - file_path = os.path.join(plot_dir, req_file) - if not os.path.isfile(file_path): - errors.append(f"Missing: {name}/{req_file}") + def handle_starttag(self, tag: str, attrs: list[tuple[str, str | None]]) -> None: + if tag.lower() != "img": + return + attr = dict(attrs) + alt = attr.get("alt") + src = attr.get("src", "(missing src)") + if alt is None or not alt.strip(): + self.errors.append(f"Image missing alt text: {src}") - # Check CSV exists - csv_path = os.path.join(plot_dir, plot_config['csv']) - if not os.path.isfile(csv_path): - errors.append(f"Missing CSV: {name}/{plot_config['csv']}") - return errors, warnings - # Check meta.json exists - meta_path = os.path.join(plot_dir, plot_config['meta']) - if not os.path.isfile(meta_path): - errors.append(f"Missing meta.json: {name}/{plot_config['meta']}") - return errors, warnings +def _read_csv(path: Path) -> tuple[list[str], list[dict], list[str]]: + errors: list[str] = [] + try: + with path.open("r", encoding="utf-8", newline="") as f: + reader = csv.DictReader(f) + rows = list(reader) + return reader.fieldnames or [], rows, errors + except Exception as exc: + return [], [], [f"CSV read error: {path.relative_to(ROOT)} - {exc}"] + - # Load CSV headers +def _parse_number(value: str | None) -> bool: + if value is None or str(value).strip() == "": + return True try: - with open(csv_path, 'r') as f: - reader = csv.reader(f) - csv_headers = next(reader) - except Exception as e: - errors.append(f"CSV read error: {name}/{plot_config['csv']} - {e}") - return errors, warnings + float(str(value).replace(",", "")) + return True + except ValueError: + return False + + +def _validate_metadata(entry: dict, meta_path: Path, csv_headers: list[str]) -> tuple[list[str], list[str]]: + errors: list[str] = [] + warnings: list[str] = [] + rel_meta = meta_path.relative_to(ROOT) - # Load meta.json try: - with open(meta_path, 'r') as f: - meta = json.load(f) - except Exception as e: - errors.append(f"meta.json parse error: {name}/{plot_config['meta']} - {e}") - return errors, warnings + meta = json.loads(meta_path.read_text(encoding="utf-8")) + except Exception as exc: + return [f"meta.json parse error: {rel_meta} - {exc}"], warnings - # Check required meta.json fields - required_meta_fields = ['title', 'description', 'sources'] - for field in required_meta_fields: + for field in ("title", "description", "sources"): if field not in meta: - errors.append(f"Missing meta.json field '{field}' in {name}/") + errors.append(f"Missing meta.json field '{field}' in {entry['id']}/") - # Check fields vs CSV headers (if 'fields' exists in meta) - if 'fields' in meta: - meta_fields = set(meta['fields'].keys()) - csv_header_set = set(csv_headers) + sources = meta.get("sources", []) + if not isinstance(sources, list) or not sources: + errors.append(f"{entry['id']}: metadata sources must be a non-empty list") + for source in sources: + if not source.get("url"): + errors.append(f"{entry['id']}: source missing url: {source.get('name', '(unnamed)')}") - # Fields in meta but not in CSV + if "fields" in meta: + meta_fields = set(meta["fields"].keys()) + csv_header_set = set(csv_headers) 
extra_in_meta = meta_fields - csv_header_set - if extra_in_meta: - warnings.append(f"{name}: meta.json fields not in CSV: {extra_in_meta}") - - # Fields in CSV but not in meta (just a warning) extra_in_csv = csv_header_set - meta_fields + if extra_in_meta: + warnings.append(f"{entry['id']}: meta.json fields not in CSV: {sorted(extra_in_meta)}") if extra_in_csv: - warnings.append(f"{name}: CSV headers not in meta.json: {extra_in_csv}") + warnings.append(f"{entry['id']}: CSV headers not in meta.json: {sorted(extra_in_csv)}") + + return errors, warnings + + +def _source_files_for(entry: dict) -> list[Path]: + plot_dir = ROOT / entry["id"] + sources = [entry_path(entry, "data"), entry_path(entry, "metadata")] + src_dir = plot_dir / "src" + if src_dir.exists(): + sources.extend(src_dir.glob("*.py")) + return [path for path in sources if path.exists()] + + +def _outputs_for(entry: dict) -> list[Path]: + keys = ["interactive", "png", "svg"] + return [entry_path(entry, key) for key in keys if entry.get(key)] + + +def _validate_output_freshness(entry: dict) -> list[str]: + warnings: list[str] = [] + outputs = _outputs_for(entry) + sources = _source_files_for(entry) + if not outputs or not sources: + return warnings + + newest_source = max(path.stat().st_mtime for path in sources) + for output in outputs: + if output.exists() and output.stat().st_mtime < newest_source: + warnings.append( + f"{entry['id']}: output older than source/data: {output.relative_to(ROOT)}" + ) + return warnings + + +def _numeric_columns(headers: set[str], required_config: dict | None) -> set[str]: + numeric = set(required_config.get("numeric", set())) if required_config else set() + for header in headers: + normalized = header.lower() + if any(pattern in normalized for pattern in GENERIC_NUMERIC_PATTERNS): + numeric.add(header) + return numeric + + +def _validate_csv_schema(entry: dict, headers: list[str], rows: list[dict]) -> tuple[list[str], list[str]]: + errors: list[str] = [] + warnings: list[str] = [] + header_set = set(headers) + schema = SCHEMA_REQUIREMENTS.get(entry["id"]) + + if len(rows) < 2: + errors.append(f"{entry['id']}: CSV must contain at least 2 data rows") + + if schema: + missing = schema["required"] - header_set + if missing: + errors.append(f"{entry['id']}: CSV missing required columns: {sorted(missing)}") + + for column in _numeric_columns(header_set, schema): + if column not in header_set: + continue + for row_num, row in enumerate(rows, start=2): + if not _parse_number(row.get(column)): + errors.append(f"{entry['id']}: non-numeric value in {column} at row {row_num}") + break + + source_column = "source_id" if "source_id" in header_set else None + status_column = "estimate_status" if "estimate_status" in header_set else None + if source_column: + for row_num, row in enumerate(rows, start=2): + status = (row.get(status_column) or "").strip().lower() + if status not in {"speculative", "projection"} and not (row.get(source_column) or "").strip(): + errors.append(f"{entry['id']}: source_id required for non-speculative row {row_num}") + break return errors, warnings -def main(): +def validate_manifest() -> tuple[list[str], list[str]]: + errors: list[str] = [] + warnings: list[str] = [] + + try: + entries = load_manifest(ROOT) + except Exception as exc: + return [f"Could not load plots_manifest.json: {exc}"], warnings + + seen_ids: set[str] = set() + seen_orders: set[int] = set() + for entry in entries: + missing = REQUIRED_MANIFEST_FIELDS - set(entry) + if missing: + errors.append(f"Manifest entry missing 
fields: {entry.get('id', '(missing id)')} {sorted(missing)}") + if entry.get("id") in seen_ids: + errors.append(f"Duplicate manifest id: {entry.get('id')}") + seen_ids.add(entry.get("id")) + if entry.get("order") in seen_orders: + errors.append(f"Duplicate manifest order: {entry.get('order')}") + seen_orders.add(entry.get("order")) + if entry.get("status") not in ALLOWED_STATUSES: + errors.append(f"{entry.get('id')}: invalid status {entry.get('status')}") + if entry.get("confidence") not in ALLOWED_CONFIDENCE: + errors.append(f"{entry.get('id')}: invalid confidence {entry.get('confidence')}") + + return errors, warnings + + +def validate_plot(plot_config: dict) -> tuple[list[str], list[str]]: + """Validate a single plot manifest entry. Kept for test compatibility.""" + entry = plot_config + errors: list[str] = [] + warnings: list[str] = [] + + plot_dir = ROOT / entry["id"] + if not plot_dir.is_dir(): + errors.append(f"Directory not found: {entry['id']}/") + return errors, warnings + + required_files = [entry.get("interactive"), entry.get("png"), entry.get("svg"), entry.get("data"), entry.get("metadata"), entry.get("readme"), f"{entry['id']}/index.html"] + for rel in [item for item in required_files if item]: + if not (ROOT / rel).is_file(): + errors.append(f"Missing: {rel}") + + csv_path = entry_path(entry, "data") + headers, rows, csv_errors = _read_csv(csv_path) + errors.extend(csv_errors) + if not csv_errors: + csv_schema_errors, csv_schema_warnings = _validate_csv_schema(entry, headers, rows) + errors.extend(csv_schema_errors) + warnings.extend(csv_schema_warnings) + + meta_path = entry_path(entry, "metadata") + if meta_path.is_file() and headers: + meta_errors, meta_warnings = _validate_metadata(entry, meta_path, headers) + errors.extend(meta_errors) + warnings.extend(meta_warnings) + + index_path = plot_dir / "index.html" + if index_path.exists(): + parser = ImageAltParser() + parser.feed(index_path.read_text(encoding="utf-8")) + errors.extend(f"{entry['id']}: {err}" for err in parser.errors) + + warnings.extend(_validate_output_freshness(entry)) + return errors, warnings + + +def validate_homepage_and_readme() -> tuple[list[str], list[str]]: + errors: list[str] = [] + warnings: list[str] = [] + published = published_entries(ROOT) + published_count = len(published) + + index_path = ROOT / "index.html" + readme_path = ROOT / "README.md" + index_html = index_path.read_text(encoding="utf-8") if index_path.exists() else "" + readme = readme_path.read_text(encoding="utf-8") if readme_path.exists() else "" + + match = re.search(r'data-published-count="(\d+)"', index_html) + if not match: + errors.append("Homepage missing data-published-count marker") + elif int(match.group(1)) != published_count: + errors.append(f"Homepage count {match.group(1)} does not match published manifest count {published_count}") + + for entry in published: + if entry["title"] not in index_html: + errors.append(f"Homepage missing published entry title: {entry['title']}") + if entry["title"] not in readme: + errors.append(f"README missing published entry title: {entry['title']}") + + parser = ImageAltParser() + parser.feed(index_html) + errors.extend(f"Homepage: {err}" for err in parser.errors) + return errors, warnings + + +PLOTS = [ + { + "name": entry["id"], + "csv": str(Path(entry["data"]).relative_to(entry["id"])), + "meta": str(Path(entry["metadata"]).relative_to(entry["id"])), + "required_files": [ + str(Path(path).relative_to(entry["id"])) + for path in (entry["interactive"], entry["png"], entry["svg"], 
f"{entry['id']}/index.html") + if path + ], + **entry, + } + for entry in plot_entries(ROOT, published_only=True) +] + + +def main() -> None: print("=" * 60) print("Plots Validation") print("=" * 60) - all_errors = [] - all_warnings = [] + all_errors: list[str] = [] + all_warnings: list[str] = [] - for plot_config in PLOTS: - name = plot_config['name'] - print(f"\nValidating: {name}/") - errors, warnings = validate_plot(plot_config) + manifest_errors, manifest_warnings = validate_manifest() + all_errors.extend(manifest_errors) + all_warnings.extend(manifest_warnings) + print("\nValidating: plots_manifest.json") + if manifest_errors or manifest_warnings: + for error in manifest_errors: + print(f" ERROR: {error}") + for warning in manifest_warnings: + print(f" WARN: {warning}") + else: + print(" OK") + for entry in plot_entries(ROOT, published_only=True): + print(f"\nValidating: {entry['id']}/") + errors, warnings = validate_plot(entry) + all_errors.extend(errors) + all_warnings.extend(warnings) if errors: - all_errors.extend(errors) - for e in errors: - print(f" ERROR: {e}") + for error in errors: + print(f" ERROR: {error}") if warnings: - all_warnings.extend(warnings) - for w in warnings: - print(f" WARN: {w}") + for warning in warnings: + print(f" WARN: {warning}") if not errors and not warnings: - print(f" OK") + print(" OK") + + print("\nValidating: homepage and README") + errors, warnings = validate_homepage_and_readme() + all_errors.extend(errors) + all_warnings.extend(warnings) + if errors: + for error in errors: + print(f" ERROR: {error}") + if warnings: + for warning in warnings: + print(f" WARN: {warning}") + if not errors and not warnings: + print(" OK") print("\n" + "=" * 60) print(f"Summary: {len(all_errors)} errors, {len(all_warnings)} warnings") @@ -192,10 +393,9 @@ def main(): if all_errors: sys.exit(1) - else: - print("\nAll validations passed!") - sys.exit(0) + print("\nAll validations passed!") + sys.exit(0) -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/shared/site.css b/shared/site.css index 9c70b4c..cf7a36e 100644 --- a/shared/site.css +++ b/shared/site.css @@ -65,6 +65,25 @@ h1 { color: var(--color-text); } +.hero { + padding: 1rem 0 0.5rem; +} + +.hero h1 { + font-size: 2.6rem; + line-height: 1.08; + margin: 0.2rem 0 0.75rem; +} + +.eyebrow, +.card-kicker { + color: var(--color-text-muted); + font-size: 0.78rem; + font-weight: 700; + letter-spacing: 0; + text-transform: uppercase; +} + .subtitle { font-size: 1rem; color: var(--color-text-muted); @@ -179,7 +198,7 @@ footer a:hover { .card { background: var(--color-surface); - border-radius: 14px; + border-radius: var(--radius); box-shadow: var(--shadow); overflow: hidden; transition: transform 0.2s, box-shadow 0.2s; @@ -210,10 +229,27 @@ footer a:hover { .card img { width: 100%; - height: auto; + aspect-ratio: 16 / 10; + object-fit: cover; display: block; } +.card-media { + display: block; + color: inherit; + text-decoration: none; +} + +.dashboard-preview { + aspect-ratio: 16 / 10; + display: flex; + align-items: center; + justify-content: center; + background: linear-gradient(135deg, #111827, #253044); + color: #e5e7eb; + font-weight: 700; +} + .card-content { padding: 1.25rem; } @@ -248,6 +284,22 @@ footer a:hover { margin-top: 2px; } +.badges { + display: flex; + flex-wrap: wrap; + gap: 0.4rem; + margin: 0.4rem 0 0.6rem; +} + +.badge { + border: 1px solid var(--color-border); + border-radius: 999px; + color: var(--color-text-muted); + font-size: 0.72rem; + line-height: 1; + padding: 
0.28rem 0.48rem;
+}
+
 .card .links {
   padding: 0;
   background: none;
@@ -308,6 +360,40 @@ footer a:hover {
   margin-left: 0.5rem;
 }
 
+.thesis {
+  background: var(--color-surface);
+  border: 1px solid var(--color-border);
+  border-radius: var(--radius);
+  padding: 1.5rem;
+  margin: 1.5rem 0;
+}
+
+.thesis h2 {
+  margin-top: 0;
+}
+
+.thesis-flow {
+  display: grid;
+  grid-template-columns: repeat(6, minmax(0, 1fr));
+  gap: 0.5rem;
+  margin: 1rem 0;
+  align-items: stretch;
+}
+
+.thesis-flow span,
+.thesis-flow strong {
+  border: 1px solid var(--color-border);
+  border-radius: var(--radius);
+  padding: 0.75rem;
+  text-align: center;
+  background: var(--color-bg);
+}
+
+.thesis-flow strong {
+  background: #111827;
+  color: white;
+}
+
 /* Why section */
 .why-section {
   background: var(--color-surface);
@@ -339,6 +425,14 @@ footer a:hover {
     font-size: 1.4rem;
   }
 
+  .hero h1 {
+    font-size: 2rem;
+  }
+
+  .thesis-flow {
+    grid-template-columns: 1fr 1fr;
+  }
+
   .links {
     flex-direction: column;
     gap: 0.5rem;
diff --git a/sitemap.xml b/sitemap.xml
new file mode 100644
index 0000000..495a48c
--- /dev/null
+++ b/sitemap.xml
@@ -0,0 +1,47 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
+  <url>
+    <loc>https://mschwar.github.io/plots/</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+  <url>
+    <loc>https://mschwar.github.io/plots/plots_manifest.json</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+  <url>
+    <loc>https://mschwar.github.io/plots/ai-compute-timeline/output/ai_compute_timeline_interactive.html</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+  <url>
+    <loc>https://mschwar.github.io/plots/adoption-timeline/output/adoption_timeline_interactive.html</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+  <url>
+    <loc>https://mschwar.github.io/plots/energetic-scaling/output/energetic_scaling_interactive.html</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+  <url>
+    <loc>https://mschwar.github.io/plots/civilization-scaling/output/civilization_scaling_interactive.html</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+  <url>
+    <loc>https://mschwar.github.io/plots/energy-leverage-per-person/output/energy_leverage_interactive.html</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+  <url>
+    <loc>https://mschwar.github.io/plots/model-sizes/output/model_sizes_interactive.html</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+  <url>
+    <loc>https://mschwar.github.io/plots/ai-benchmark-progress/output/benchmark_progress_interactive.html</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+  <url>
+    <loc>https://mschwar.github.io/plots/cost-to-train/output/cost_to_train_interactive.html</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+  <url>
+    <loc>https://mschwar.github.io/plots/dashboard/index.html</loc>
+    <lastmod>2026-04-24</lastmod>
+  </url>
+</urlset>
diff --git a/tests/test_shared_utils.py b/tests/test_shared_utils.py
index 55801a5..2e7fe42 100644
--- a/tests/test_shared_utils.py
+++ b/tests/test_shared_utils.py
@@ -18,5 +18,17 @@ def test_csv_headers(self, repo_root):
         reader = csv.reader(f)
         headers = next(reader)
 
-        expected = ["Year", "Event", "Category", "Days_to_Adoption", "Impact"]
-        assert headers == expected, f"Unexpected headers: {headers}"
+        expected_required = {
+            "Year",
+            "Event",
+            "Category",
+            "Days_to_Adoption",
+            "Impact",
+            "adoption_metric_type",
+            "comparability_level",
+            "source_id",
+            "confidence",
+            "comparability_notes",
+            "notes",
+        }
+        assert expected_required.issubset(set(headers)), f"Unexpected headers: {headers}"