Multi-model inference #500

@budbuddy

Description

Hello,

One of the main use cases when deploying real-world models is to train several "expert models" on different subtasks of the main inference task, then use a linear combination of them at inference time.

Example:

  • Model1 learns to predict clicks on an item
  • Model2 learns to predict purchases of an item

Main model = a·model1 + b·model2

Here the task is separated into two subtasks, and the model used to give recommendations is a linear combination of the two.

Do you have any plans to support this in the library, or should it be done case by case for each project? I'm not even sure there's a clean way to do multi-model inference, especially given that the infer function on the different model types you offer can work in very different ways.
For example, I think it's very easy for Two Tower: you can pretty much just add the models' outputs together and get the expected result. But things get trickier if you add a model like DIN to the mix.
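For concreteness, here is a minimal sketch of the score-level combination I have in mind. All names here are hypothetical stand-ins, not the library's actual API; it assumes each model exposes some callable that maps features to a scalar score.

```python
from typing import Callable, Dict, Sequence

Features = Dict[str, object]

def linear_ensemble(
    models: Sequence[Callable[[Features], float]],
    weights: Sequence[float],
) -> Callable[[Features], float]:
    """Build a combined scorer: score(x) = a*model1(x) + b*model2(x) + ..."""
    if len(models) != len(weights):
        raise ValueError("need one weight per model")
    def infer(features: Features) -> float:
        # Weighted sum of each expert model's score on the same features.
        return sum(w * m(features) for m, w in zip(models, weights))
    return infer

# Hypothetical expert models: a click predictor and a purchase predictor.
click_model = lambda x: 0.8      # stand-in for model1's infer(x)
purchase_model = lambda x: 0.2   # stand-in for model2's infer(x)

combined = linear_ensemble([click_model, purchase_model], weights=[0.5, 2.0])
print(combined({"item_id": 42}))  # 0.5*0.8 + 2.0*0.2 = 0.8
```

This works whenever all models score the same feature dict, which is roughly the Two Tower case; the question is what the equivalent would look like when a model like DIN needs a different inference path.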
