Possible for use with streaming/realtime input? #37

@jounih

Description

First of all, thank you for your work, it is very impressive.

I would like to stream an abc (or MIDI) melody from a MIDI instrument and have the model generate a coherent harmony, or a counterpoint melody in a specified/trained style, in real time (or close enough to real time for live performance, up to ~100 ms latency).

For example, generating a jazz piano harmony/accompaniment from a monophonic melody played live by a human musician.

I can get training data for specific styles and fine-tune, but I'm wondering whether the model is set up for a task like this, or whether it could be tweaked to do it. Is this possible?
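To make the streaming setup concrete, here is a minimal sketch of the client-side plumbing such a use case would need: keeping a sliding window of recent input notes so each incoming event can be turned into a fresh model prompt without re-reading the whole performance. The `NoteEvent`/`StreamingContext` names and the token encoding are invented for illustration; they are not this project's actual API or vocabulary.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class NoteEvent:
    pitch: int     # MIDI note number, 0-127
    onset_ms: int  # onset time in milliseconds


class StreamingContext:
    """Sliding window over recent input notes (hypothetical helper,
    not part of this repository). Each pushed event yields a token
    sequence that could serve as the model's conditioning prompt."""

    def __init__(self, window_ms: int = 5000, max_notes: int = 64):
        self.window_ms = window_ms
        self.max_notes = max_notes
        self.events = deque()

    def push(self, event: NoteEvent) -> list:
        # Add the new note, then age out notes that fell outside the
        # time window or exceed the note-count budget.
        self.events.append(event)
        while self.events and event.onset_ms - self.events[0].onset_ms > self.window_ms:
            self.events.popleft()
        while len(self.events) > self.max_notes:
            self.events.popleft()
        return self.tokens()

    def tokens(self) -> list:
        # Toy encoding, purely illustrative: interleave a coarse
        # 10 ms time-bucket token (offset by 128 to avoid clashing
        # with pitch values) with the raw pitch.
        out = []
        for ev in self.events:
            out.append(128 + (ev.onset_ms // 10) % 1000)  # time token
            out.append(ev.pitch)                          # pitch token
        return out
```

With a window like this, each MIDI input event triggers one incremental model call over a bounded context, which is what keeps per-note latency predictable rather than growing with the length of the performance.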
