
Hardware Integration Plan #57

@TGALLOWAY1

Description


Title

PushFlow Practice Workspace — Melodics-Style Performance Training for Ableton Push

Purpose

Implement a new Practice Workspace for PushFlow that allows a user to practice a selected candidate layout using a real Ableton Push, hear mapped audio when pressing pads, see a large timing-first practice timeline, view a synced supporting Push grid, and compare their actual MIDI performance against the intended performance.

This feature should follow a Melodics-style practice loop while remaining faithful to PushFlow’s core identity: layout + execution + candidate comparison + human-in-the-loop evaluation.

This is not just a visual practice mode. The first implementation must prove the hardware loop, prove the core timing/comparison loop, and produce a usable practice experience.

Product Direction
Chosen direction

Build Option B: Melodics-style Practice Workspace, with these explicit requirements included in v1:

validate real Ableton Push hardware integration

prove real-time audio preview triggered by MIDI

use a dual view practice interface

make the timeline the primary surface

make the grid a synced supporting surface

include Wait Mode in v1

include candidate layout comparison in v1

UX principle

During active practice, the screen should simplify aggressively so that the user mainly sees:

a large scrolling timeline

a supporting synced Push grid

target events approaching the playhead

active pads becoming visible as the corresponding events approach

Everything else should be minimized, hidden, or moved into secondary controls.

Core User Story

A user should be able to:

select a phrase/section and a candidate layout

connect an Ableton Push as MIDI input

press a pad and hear the mapped sound with low enough latency to feel usable

enter a practice session

watch target events approach on a large timeline

see the corresponding pads on the Push grid become visible/highlighted as those events approach

perform the phrase using the hardware

see their actual MIDI compared against the intended performance

retry the phrase, slow it down, loop it, or use Wait Mode

switch to another candidate layout and compare which one they actually perform better on

Scope for v1
In scope

real Ableton Push MIDI input

MIDI event capture with timestamps

real-time mapped audio preview triggered by pad input

low-latency hardware/audio test path

target timeline generation from selected candidate

large practice timeline UI

synced Push grid UI

approaching-event visualization

target-vs-played comparison engine

strict pad matching for v1

simultaneity/chord-aware support

loop region practice

BPM adjustment

count-in

retry/restart

Wait Mode

practice attempt summary

candidate layout comparison based on actual performance results

saved latency calibration per device profile

Explicitly deferred

finger-aware scoring

detailed hand/finger correctness validation

adaptive curriculum

XP/streak/badges

auto-BPM ramping

deep lesson-library system

advanced expressive timing interpretation

complex phrase inference matching

advanced DSP/effects in the practice audio engine

timeline clutter from diagnostics during active practice

Canon-Aligned Design Principles

The implementation should remain consistent with PushFlow’s existing conceptual model:

candidate layouts remain first-class

execution remains temporal, not purely static

practice is derived from a selected candidate solution

multiple plausible candidate solutions must remain comparable

the system should help the user evaluate which candidate is actually performable, not just analytically attractive

active practice UI should emphasize what is coming, what to hit, and how close the user was

This feature should extend PushFlow, not replace its core identity with a generic rhythm game.

New Practice Concepts

Add a practice-layer model that sits cleanly on top of existing PushFlow concepts.

PracticeSession

A configured practice instance.

Suggested fields:

sessionId

projectId

candidateId

sectionId

loopStartBeat

loopEndBeat

tempoBpm

countInEnabled

waitModeEnabled

metronomeEnabled

latencyProfileId

PracticeTarget

The resolved target performance for the selected candidate and phrase.

Suggested fields:

targetId

candidateId

sectionId

events[]

lanes[]

simultaneityGroups[]

startBeat

endBeat

PracticeTargetEvent

Resolved practice event.

Suggested fields:

eventId

beatTime

durationBeats

soundId

padId

laneId

groupId

velocity

fingerAssignmentId? (future hook only)

PracticeAttempt

A single performed run by the user.

Suggested fields:

attemptId

sessionId

candidateId

capturedEvents[]

startedAt

completedAt

summary

comparisonResult

CapturedInputEvent

Raw or normalized performed MIDI event.

Suggested fields:

capturedEventId

rawTimestampMs

adjustedTimestampMs

beatTime

midiNote

velocity

padId

eventType

PracticeComparisonResult

Matching and judgment output.

Suggested fields:

matchedEvents[]

missedTargets[]

extraPlayedEvents[]

summaryMetrics

hardestPassages[]

DeviceLatencyProfile

Saved calibration profile for a device.

Suggested fields:

profileId

deviceName

deviceIdentifier

inputOffsetMs

outputOffsetMs

judgmentOffsetMs

updatedAt
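As a sketch only, the practice-layer models above could start as TypeScript interfaces. Field names follow the suggestions in this issue; the `applyLatencyProfile` helper is a hypothetical illustration of how a profile's offsets might be applied to a raw input timestamp before judging:

```typescript
// Hypothetical interfaces mirroring the suggested fields above.
interface DeviceLatencyProfile {
  profileId: string;
  deviceName: string;
  deviceIdentifier: string;
  inputOffsetMs: number;    // measured delay on the input path
  outputOffsetMs: number;   // compensates audio output delay
  judgmentOffsetMs: number; // extra bias applied before judging
  updatedAt: string;
}

interface CapturedInputEvent {
  capturedEventId: string;
  rawTimestampMs: number;
  adjustedTimestampMs: number;
  beatTime: number;
  midiNote: number;
  velocity: number;
  padId: string;
  eventType: "noteOn" | "noteOff";
}

// One plausible adjustment: shift the raw timestamp back by the input
// and judgment offsets so judged time reflects when the pad was struck.
function applyLatencyProfile(
  rawTimestampMs: number,
  profile: DeviceLatencyProfile
): number {
  return rawTimestampMs - profile.inputOffsetMs - profile.judgmentOffsetMs;
}
```

Whether offsets are subtracted here or folded into the session clock is an implementation choice for Slice 2.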

UX Design
Practice Workspace Modes

1. Setup mode

Used before practice begins.

Visible:

candidate selector

section selector

MIDI device selection

calibration status

tempo

loop controls

Wait Mode toggle

count-in toggle

start button

2. Active practice mode

This is the primary experience.

Visible:

large scrolling timeline

synced supporting Push grid

loop region

count-in / playhead / wait cursor

compact practice controls

compact run status

Minimized or hidden:

cost breakdowns

layout editing controls

verbose panel chrome

detailed diagnostics

unrelated workspace panels

3. Review mode

Visible after an attempt.

Visible:

summary metrics

target-vs-played result overview

hardest phrase markers

retry CTA

switch candidate CTA

compare candidate results CTA

Active Practice UI Requirements
Timeline

The timeline is the primary surface.

It should:

dominate the workspace visually

use per-sound or per-lane rows

show target events approaching the playhead

show played events when input is received

show early/late offset visually

show misses and extras

show loop bounds clearly

support count-in

support paused/waiting state in Wait Mode

Timeline behavior

target events should scroll toward a stable play point or playhead reference

events must stay visually synced with the practice clock

replay/retry should reset cleanly

loop restarts should be smooth and predictable
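Keeping events visually synced with the practice clock comes down to a stable beat-to-time mapping. A minimal sketch, assuming a constant-tempo session whose clock starts at beat 0 (names are illustrative, not a required API):

```typescript
// Minimal beat <-> millisecond conversion for a constant-BPM session.
const msPerBeat = (bpm: number): number => 60000 / bpm;

function beatToMs(beat: number, bpm: number): number {
  return beat * msPerBeat(bpm);
}

function msToBeat(ms: number, bpm: number): number {
  return ms / msPerBeat(bpm);
}
```

The timeline renderer would call `msToBeat` each animation frame with the session clock's elapsed time, then position target events relative to the playhead beat.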

Supporting Push grid

The grid is secondary but synchronized.

It should:

always reflect the selected candidate layout

highlight pads for upcoming events

intensify highlight as target events approach

indicate the current required pad(s)

flash pads the user actually played

support simultaneous targets clearly

remain legible without stealing focus from the timeline

Grid behavior

“approaching visibility” should be tied to the practice timeline

current target pads should be most visually prominent

played pads should show a short-lived feedback state

correct vs incorrect feedback should be lightweight, not noisy

Wait Mode Requirements

Wait Mode is required in v1.

Behavior

When the next target event reaches the play point:

timeline motion pauses

target pad or pad group remains highlighted

grid emphasis narrows to the required pad(s)

the user must play the correct pad or simultaneity group

once correct input is detected, progression resumes

v1 simplification

In Wait Mode:

validate correct pad/group only

do not evaluate timing during the paused period

do not require finger-aware validation

do not attempt advanced phrase inference

Notes

Wait Mode should feel like a deliberate-practice tool, not a broken transport state.
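Under the v1 simplification, the resume gate reduces to pure set logic: progression resumes once every pad in the required group has been played, with timing ignored. A sketch (function name is hypothetical):

```typescript
// Returns true once every pad in the required group has been played
// during the paused period. Timing is deliberately not evaluated,
// matching the v1 Wait Mode simplification.
function waitGateSatisfied(
  requiredPadIds: ReadonlySet<string>,
  playedPadIds: ReadonlySet<string>
): boolean {
  for (const padId of requiredPadIds) {
    if (!playedPadIds.has(padId)) return false;
  }
  return true;
}
```

Wrong pads played while waiting simply fail to satisfy the gate; whether they also produce feedback is a UI decision.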

Candidate Comparison Requirements

Candidate comparison is required in v1.

A user should be able to practice the same phrase using multiple candidate layouts and compare actual performance results.

Compare metrics

For each candidate:

overall accuracy

timing deviation

missed note count

wrong-pad count

attempt consistency across retries

hardest passage markers

UX expectation

The comparison should help answer:

Which candidate is easier for me to actually perform?

Which candidate produces fewer wrong-pad errors?

Which candidate produces more stable timing?

Which candidate becomes easier after a few attempts?

This is a key PushFlow differentiator and should be treated as a first-class outcome of the Practice Workspace.

Architecture Overview

The implementation should be split into clear, testable slices.

Slice 1 — MIDI and timing

Responsibilities:

MIDI device enumeration

device selection

input capture

timestamp normalization

transport/session timing coordination

reconnect handling

Suggested modules:

MidiInputService

SessionClockService

PadResolver

DeviceConnectionState
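If PushFlow runs in the browser, MidiInputService would likely sit on the Web MIDI API (`navigator.requestMIDIAccess()`), but the message-parsing core is pure and testable without hardware. A sketch, assuming standard MIDI channel messages; note-on with velocity 0 is treated as note-off, which many controllers emit:

```typescript
type ParsedMidiEvent =
  | { kind: "noteOn"; note: number; velocity: number }
  | { kind: "noteOff"; note: number }
  | { kind: "other" };

// Parses one raw MIDI message (status byte + up to two data bytes).
function parseMidiMessage(data: Uint8Array): ParsedMidiEvent {
  const status = data[0] & 0xf0;
  if (status === 0x90 && data[2] > 0) {
    return { kind: "noteOn", note: data[1], velocity: data[2] };
  }
  if (status === 0x80 || (status === 0x90 && data[2] === 0)) {
    return { kind: "noteOff", note: data[1] };
  }
  return { kind: "other" }; // CC, aftertouch, clock, etc. ignored for now
}
```

PadResolver would then map the note number to a `padId` via the device's pad layout; Push's exact note-to-pad mapping should be verified against the hardware in Milestone 1 rather than assumed here.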

Slice 2 — Latency calibration

Responsibilities:

device-specific offset storage

calibration workflow

offset application

manual tuning UI

stable persisted profiles

Suggested modules:

LatencyCalibrationService

DeviceLatencyProfileStore

Slice 3 — Audio preview engine

Responsibilities:

sample loading

decoded buffer management

per-pad playback

overlapping trigger support

low-latency preview path

Suggested modules:

AudioPreviewEngine

SampleBufferStore

PadAudioRouter
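In a browser build this slice would use the Web Audio API (decoded `AudioBuffer`s played through fresh `AudioBufferSourceNode`s, one per trigger, which gives overlapping playback for free). The sketch below types only the structural subset it needs so the routing logic stays testable without a browser; all names are assumptions:

```typescript
// Structural subset of the Web Audio API used for one-shot playback.
interface BufferSourceLike {
  buffer: unknown;
  connect(dest: unknown): void;
  start(when?: number): void;
}
interface AudioContextLike {
  destination: unknown;
  createBufferSource(): BufferSourceLike;
}

// Triggers the sound mapped to a pad by the selected candidate layout.
// Returns false if the pad is unmapped or its sample is not loaded.
function triggerPad(
  ctx: AudioContextLike,
  buffers: Map<string, unknown>,   // soundId -> decoded AudioBuffer
  padToSound: Map<string, string>, // padId -> soundId, from the candidate
  padId: string
): boolean {
  const soundId = padToSound.get(padId);
  if (soundId === undefined) return false;
  const buffer = buffers.get(soundId);
  if (buffer === undefined) return false;
  // A fresh source node per trigger allows overlapping one-shots.
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
  return true;
}
```

Switching candidate layouts then only means swapping the `padToSound` map, which keeps candidate-switch remapping cheap and atomic.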

Slice 4 — Practice target builder

Responsibilities:

resolve candidate into practice-ready target data

build lane data

build simultaneity groups

build grid-highlight timing windows

provide loop/section extraction

Suggested modules:

PracticeTargetBuilder

PracticeLaneBuilder

SimultaneityResolver
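SimultaneityResolver's core can be sketched as a single pass over time-sorted events: anything starting within a small beat tolerance of the group's first event joins that group. The epsilon below (a 32nd of a beat) is a placeholder assumption, not a tuned value:

```typescript
interface TargetEventStub {
  eventId: string;
  beatTime: number;
}

// Groups events that begin within `epsilonBeats` of each other so chords
// can be highlighted and judged as one unit. Input need not be sorted.
function buildSimultaneityGroups(
  events: TargetEventStub[],
  epsilonBeats: number = 1 / 32
): TargetEventStub[][] {
  const sorted = [...events].sort((a, b) => a.beatTime - b.beatTime);
  const groups: TargetEventStub[][] = [];
  for (const ev of sorted) {
    const last = groups[groups.length - 1];
    if (last && ev.beatTime - last[0].beatTime <= epsilonBeats) {
      last.push(ev);
    } else {
      groups.push([ev]);
    }
  }
  return groups;
}
```

Anchoring the tolerance to the group's first event (rather than the previous event) prevents a chain of slightly offset notes from smearing into one giant group.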

Slice 5 — Comparison engine

Responsibilities:

compare target vs played events

match events using strict pad-based matching

handle simultaneity groups

compute timing judgments

compute summaries

detect hardest passages

Suggested modules:

PracticeComparisonEngine

EventMatcher

JudgmentEngine

PassageDifficultyAnalyzer
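One simple, explainable notion of "hardest passage" for PassageDifficultyAnalyzer: the fixed-length beat window containing the most misses and wrong-pad hits across an attempt. A sliding-window sketch (a deliberate simplification, not the required algorithm):

```typescript
// Returns the start beat of the window of length `windowBeats` with the
// densest cluster of error events, or null if there were no errors.
function hardestWindowStart(
  errorBeatTimes: number[],
  windowBeats: number
): number | null {
  if (errorBeatTimes.length === 0) return null;
  const sorted = [...errorBeatTimes].sort((a, b) => a - b);
  let best = sorted[0];
  let bestCount = 0;
  let lo = 0;
  for (let hi = 0; hi < sorted.length; hi++) {
    // Shrink the window until it spans at most windowBeats.
    while (sorted[hi] - sorted[lo] > windowBeats) lo++;
    const count = hi - lo + 1;
    if (count > bestCount) {
      bestCount = count;
      best = sorted[lo];
    }
  }
  return best;
}
```

The resulting start beats would feed the "hardest passage markers" shown in Review mode and in candidate comparison.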

Slice 6 — Practice UI

Responsibilities:

setup mode

active practice mode

review mode

timeline rendering

grid synchronization

run controls

candidate comparison display

Suggested modules:

PracticeWorkspace

PracticeTimeline

PracticeGrid

PracticeControls

AttemptSummaryPanel

CandidateComparisonPanel

Recommended Milestone Sequence
Milestone 1 — Prove hardware integration

Goal: prove that PushFlow can receive MIDI from Ableton Push and map it into the grid/layout system correctly.

Deliverables

device detection UI

connect/select Push device

MIDI input log

note-on/note-off capture

grid response to live pad presses

reconnect/disconnect handling

Acceptance criteria

Push device is detected reliably

pad presses generate stable events

grid highlights correct mapped pad positions

reconnecting does not require an app restart

no ghost double-triggering in normal use

Test cases

connect device after app already open

open app with device already connected

disconnect/reconnect

rapid repeated taps

simultaneous pad presses

held pads and repeated taps

Milestone 2 — Prove low-latency audio preview

Goal: prove that pressing a Push pad triggers the mapped sound responsively enough to feel instrument-like.

Deliverables

audio preview engine

one-shot sample playback

per-pad audio routing

overlapping playback support

audio preview test screen or test mode

Acceptance criteria

pressing a pad triggers the correct assigned sound

fast repeated triggers do not break playback

simultaneous triggers are supported

switching candidate layouts updates audio mapping correctly

playback feels responsive enough for practice

v1 constraints

Start simple:

decoded one-shot sample buffers in memory

no time-stretching

no complex streaming model

no advanced FX pipeline

Milestone 3 — Prove timing and latency calibration

Goal: establish a trustworthy timing model for target-vs-played comparison.

Deliverables

session timing clock

calibrated input/output/judgment offsets

latency profile persistence

simple calibration UI

manual offset nudge controls

Acceptance criteria

user can calibrate once and save settings for a device

repeated metronome-aligned taps cluster reasonably after offset correction

perceived and judged timing feel aligned enough to trust comparisons

Recommended v1 calibration flow
Manual-guided flow

play metronome

ask user to tap a target repeatedly

estimate median offset

allow manual fine-tuning
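The median-offset step above is intentionally robust: each sample is the difference between a tap's timestamp and the nearest metronome click, and the median shrugs off the occasional badly mistimed tap that a mean would absorb. A sketch:

```typescript
// Estimates a device's input offset from repeated metronome-aligned taps.
// Each sample is (tapTimestampMs - nearestClickTimestampMs).
function estimateMedianOffsetMs(offsetSamples: number[]): number {
  if (offsetSamples.length === 0) return 0;
  const sorted = [...offsetSamples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}
```

The result would seed `inputOffsetMs` in the device profile, with the manual nudge controls applied on top.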

Stored per-device profile

device name

input offset

output offset

judgment offset

Milestone 4 — Build practice target generation

Goal: convert a selected candidate and phrase into a practice-ready target.

Deliverables

target event extraction

per-lane timeline data

simultaneity groups

loop range support

grid highlight schedule

Acceptance criteria

each target event resolves to a correct pad and lane

simultaneous notes are grouped correctly

loop extraction produces stable repeatable target data

target data rebuilds cleanly when switching candidate layouts

Milestone 5 — Build the active practice loop

Goal: make the feature feel like a real product.

Deliverables

large practice timeline

synced supporting grid

count-in

play/restart

loop playback

live event capture

target-vs-played rendering

compact run controls

Acceptance criteria

user can start a practice run and see upcoming targets

user sees corresponding grid pads highlighted as events approach

played events render cleanly against targets

retrying and looping are smooth

active practice mode stays visually focused and uncluttered

Milestone 6 — Implement comparison engine

Goal: produce trustworthy judgments and summaries.

Deliverables

event matcher

timing judgment windows

wrong-pad detection

missed-target detection

extra-note detection

summary metrics

hardest-passage detection

v1 matching rules

Use:

strict pad matching

nearest valid target within timing window

simultaneity-aware group handling

Do not use:

advanced phrase inference

loose harmonic substitution logic

expressive-performance heuristics
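The "strict pad, nearest valid target within the window" rule can be sketched as a greedy matcher: each played event may claim at most one unclaimed target on the same pad, preferring the nearest in time. Names and the claimed-set mechanism are assumptions for illustration:

```typescript
interface TargetStub { eventId: string; padId: string; timeMs: number; }
interface PlayedStub { padId: string; timeMs: number; }

// Strict pad matching: a played event only matches an unclaimed target on
// the SAME pad within `windowMs`; ties resolve to the nearest in time.
function matchPlayedToTarget(
  played: PlayedStub,
  targets: TargetStub[],
  claimed: Set<string>,
  windowMs: number
): TargetStub | null {
  let best: TargetStub | null = null;
  let bestDist = Infinity;
  for (const t of targets) {
    if (t.padId !== played.padId || claimed.has(t.eventId)) continue;
    const dist = Math.abs(t.timeMs - played.timeMs);
    if (dist <= windowMs && dist < bestDist) {
      best = t;
      bestDist = dist;
    }
  }
  if (best) claimed.add(best.eventId);
  return best;
}
```

Played events that return null become extras; targets never claimed become misses; a within-window hit on a different pad would be reported as a wrong-pad error by a separate check.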

Acceptance criteria

correct notes within the window are judged appropriately

wrong-pad hits are detected

misses and extras are surfaced correctly

result summaries are stable across repeated attempts

Milestone 7 — Implement Wait Mode

Goal: support deliberate-practice progression.

Deliverables

Wait Mode toggle

pause-at-target behavior

required pad/group highlighting

resume-on-correct-input behavior

Acceptance criteria

timeline pauses when the next required target reaches the play point

correct pad/group resumes progression

wrong inputs do not progress the run

user can use Wait Mode to learn placement without timing pressure

Milestone 8 — Implement attempt summary and candidate comparison

Goal: make the practice results useful and PushFlow-native.

Deliverables

attempt summary panel

compare-attempts-by-candidate workflow

hardest-passage markers

switch-candidate CTA

side-by-side metrics

Acceptance criteria

user can practice the same phrase on multiple candidate layouts

user can compare results meaningfully

summary helps user choose which candidate is easier to perform

comparison remains tied to the same phrase/section for validity

Data Flow
High-level flow

user selects candidate + phrase

PracticeTargetBuilder resolves the target

user selects or confirms MIDI device

latency profile is loaded

practice session begins

timeline advances according to session clock

grid highlights upcoming/current targets

user presses pads on Push

MIDI events are captured and normalized

audio preview triggers mapped sounds

captured input is compared to target

judgments render in real time or near-real time

attempt summary is computed

user retries or switches candidate

Comparison Logic
Input

target event sequence

performed event sequence

latency-adjusted timestamps

practice session timing data

configured judgment windows

Output categories

correct + perfect

correct + early

correct + late

wrong pad

missed target

extra played event

Suggested judgment model

Make windows configurable:

perfectWindowMs

goodWindowMs

acceptableWindowMs

missWindowMs

For v1, keep scoring easy to explain.
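In that spirit, the judgment function itself can be a plain threshold ladder over the absolute offset. The window names follow this issue; the ordering assumption (perfect < good < acceptable) is the only logic:

```typescript
// Window sizes are configurable; the interface mirrors the suggested names.
interface JudgmentWindows {
  perfectWindowMs: number;
  goodWindowMs: number;
  acceptableWindowMs: number;
  missWindowMs: number;
}

type Judgment = "perfect" | "good" | "acceptable" | "miss";

// offsetMs = playedTime - targetTime (negative = early, positive = late).
// The sign is kept by the caller for early/late display; only magnitude
// determines the judgment tier.
function judgeOffset(offsetMs: number, w: JudgmentWindows): Judgment {
  const d = Math.abs(offsetMs);
  if (d <= w.perfectWindowMs) return "perfect";
  if (d <= w.goodWindowMs) return "good";
  if (d <= w.acceptableWindowMs) return "acceptable";
  return "miss";
}
```

Keeping this a pure function makes the judgment windows trivially tunable and the scoring trivially explainable in the review UI.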

Testing Plan

1. MIDI hardware tests

detect Push device consistently

verify note-on/note-off events

verify correct pad mapping on grid

verify simultaneous inputs

verify reconnection handling

2. Audio preview tests

correct sound per mapped pad

rapid retrigger stability

overlapping playback

candidate-switch remapping

no stale audio routing

3. Timing and calibration tests

calibration save/load

manual offset adjustments

repeated tapping against metronome

comparison consistency before/after offset

4. Practice target tests

lane assignment correctness

simultaneity group correctness

loop extraction correctness

candidate-switch rebuild correctness

5. Comparison engine tests

correct hit matching

wrong-pad detection

miss detection

extra-note detection

simultaneity handling

hardest-passage extraction

6. UI/UX tests

active mode visual focus

timeline/grid sync

smooth retry behavior

Wait Mode usability

clear post-run summary

candidate comparison usability

Non-Negotiable UX Rules

During active practice, the timeline is the primary surface.

The grid must remain synchronized but should not dominate the screen.

Upcoming pads should become visible/highlighted as events approach.

The user should always be able to tell:

what is coming

what they should hit

how close they were

Practice mode should not be cluttered with unrelated analytics.

Candidate comparison should be tied to real performed results, not just static cost metrics.

Risks and Mitigations
Risk 1 — Latency makes the feature feel fake

If audio preview feels delayed, the entire practice mode loses credibility.

Mitigation

Treat hardware/audio validation as a milestone, not an implementation detail.

Risk 2 — Browser timing jitter creates unreliable scoring

If timestamps are unstable, users will not trust the comparison engine.

Mitigation

Use a session timing base, calibration offsets, and configurable judgment windows.

Risk 3 — Active practice UI becomes too busy

If too much PushFlow chrome leaks into practice mode, the learning loop will feel unfocused.

Mitigation

Create explicit setup, active, and review modes.

Risk 4 — Candidate switching introduces stale mappings

Audio, grid, and timeline could desynchronize when changing candidate.

Mitigation

Rebuild a single resolved PracticeTarget object each time the candidate changes.

Risk 5 — Comparison engine gets too clever too early

Trying to support every musical nuance in v1 may stall delivery.

Mitigation

Use strict pad matching and simultaneity support only for v1.

Strong v1 Acceptance Checklist

The feature is considered a successful v1 if all of the following are true:

Ableton Push connects reliably

live pad presses are captured and visualized correctly

mapped sounds trigger responsively enough for practice

the timeline and grid remain synchronized

target-vs-played comparison feels trustworthy

looped phrase practice works smoothly

Wait Mode works cleanly

the user can retry a phrase quickly

the user can practice two candidate layouts back-to-back

the user can tell which candidate they actually perform better on

Suggested Initial File/Module Planning

Use names as guidance, not strict requirements.

Services

MidiInputService.ts

SessionClockService.ts

LatencyCalibrationService.ts

AudioPreviewEngine.ts

PracticeTargetBuilder.ts

PracticeComparisonEngine.ts

State

practiceSessionStore.ts

deviceLatencyProfileStore.ts

practiceAttemptStore.ts

UI

PracticeWorkspace.tsx

PracticeTimeline.tsx

PracticeGrid.tsx

PracticeControls.tsx

WaitModeIndicator.tsx

AttemptSummaryPanel.tsx

CandidateComparisonPanel.tsx

Test helpers

midiTestHarness.ts

practiceTargetFixtures.ts

comparisonEngineFixtures.ts

Implementation Notes for Claude Code
Important

Do not attempt to fully “game-ify” the practice experience in this pass.

Focus on this order

prove hardware integration

prove audio preview loop

prove calibration/timing trustworthiness

prove target-vs-played comparison

build the focused active-practice UI

add Wait Mode

add candidate comparison workflow

Avoid

collapsing this into existing cluttered workspace panels

burying practice under excessive side UI

implementing finger-aware scoring in v1

overfitting the matcher to edge cases before the basic loop works

treating hardware validation as optional

Final Build Target

The final intended outcome of this implementation phase is:

A user can select a PushFlow candidate layout, connect an Ableton Push, hear mapped sounds when pressing pads, watch a large timing-focused practice timeline with a synced grid, perform a phrase, see how their actual MIDI compares against the intended performance, use Wait Mode to learn difficult passages, and compare which candidate layout they perform best on in practice.
