
Add train metrics in Lightning Pipeline output#16

Merged
JeremieGince merged 1 commit into dev from add_train_validation_metrics on Feb 9, 2026

Conversation


@JeremieGince JeremieGince commented Feb 9, 2026

Description

This pull request updates training- and validation-metric handling in the lightning_pipeline.py pipeline so that both train and validation metrics are saved and returned, and adds corresponding test assertions. The changes make the metrics reported after a pipeline run clearer and more complete.

Pipeline metrics handling improvements:

  • Modified the run method in lightning_pipeline.py to separately compute and save both train and validation metrics, and return a merged dictionary containing both sets of metrics.
  • Added a new run_train_validation method to evaluate metrics on the training set, save them, and ensure metric keys are renamed from val_ to train_ for clarity.
  • Updated the run_validation method to save validation metrics to the checkpoint folder after computation.
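As a rough sketch, the renaming and merging behaviour described above could look like the following. The PR does not show the pipeline internals, so the helper names and dict shapes here are illustrative only:

```python
# Illustrative sketch: run_train_validation renames val_-prefixed keys to
# train_, and run() returns one dict containing both metric sets.

def rename_val_to_train(metrics: dict) -> dict:
    """Rename 'val_'-prefixed keys to 'train_', as run_train_validation does."""
    return {
        ("train_" + key[len("val_"):] if key.startswith("val_") else key): value
        for key, value in metrics.items()
    }

def merge_metrics(train_metrics: dict, val_metrics: dict) -> dict:
    """Return a single dict with both metric sets, as run() now returns."""
    merged = rename_val_to_train(train_metrics)
    merged.update(val_metrics)
    return merged
```

For example, merge_metrics({"val_loss": 0.12}, {"val_loss": 0.15}) yields {"train_loss": 0.12, "val_loss": 0.15}, so both losses survive in the merged output.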

Testing improvements:

  • Enhanced the test_run_and_run_test test to assert that the output of run() includes both val_loss and train_loss, and that test_loss is only present after calling run_test().

Checklist

Please complete the following checklist when submitting a PR. The PR will not be reviewed until all items are checked.

  • All new features include a unit test.
    Make sure the tests pass and coverage is
    sufficient by running pytest tests --cov=src --cov-report=term-missing.
  • All new functions and code are clearly documented.
  • The code is formatted using Black.
    You can do this by running black src tests.
  • The imports are sorted using isort.
    You can do this by running isort src tests.
  • The code is type-checked using Mypy.
    You can do this by running mypy src tests.

Introduce run_train_validation to validate on the training dataloader (tries ckpt_path='best' then falls back to 'last'), measure train_validation_time, prefix returned metrics with 'train_', and save them to the checkpoint folder. Update run() to call run_train_validation, record overall training_time, save train metrics, then run and merge validation metrics. Also ensure run_validation saves validation metrics. Update tests to assert presence of train/val/test metrics accordingly.
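The 'best'-then-'last' checkpoint fallback mentioned above could be implemented along these lines. trainer.validate(..., ckpt_path=...) is the Lightning API; the wrapper function, its error handling, and the timing key placement are assumptions, not the PR's actual code:

```python
import time


def validate_with_fallback(trainer, model, dataloader):
    """Sketch of run_train_validation's flow: try the best checkpoint,
    fall back to the last one, and time the whole evaluation."""
    start = time.perf_counter()
    try:
        results = trainer.validate(model, dataloaders=dataloader, ckpt_path="best")
    except Exception:
        # e.g. no best checkpoint was saved: retry with the last checkpoint
        results = trainer.validate(model, dataloaders=dataloader, ckpt_path="last")
    # Lightning returns a list with one metrics dict per dataloader.
    metrics = dict(results[0]) if results else {}
    metrics["train_validation_time"] = time.perf_counter() - start
    return metrics
```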

github-actions bot commented Feb 9, 2026

☂️ Python Coverage

current status: ✅

Overall Coverage

Lines  Covered  Coverage  Threshold  Status
899    873      97%       90%        🟢

New Files

No new covered files...

Modified Files

File                                                  Coverage  Status
src/matchcake_opt/tr_pipeline/lightning_pipeline.py  96%       🟢
TOTAL                                                 96%       🟢

updated for commit: 8e82e7e by action🐍

@JeremieGince JeremieGince merged commit 92b8e1d into dev Feb 9, 2026
6 checks passed
@JeremieGince JeremieGince deleted the add_train_validation_metrics branch February 9, 2026 15:36
