
Unable to reproduce ACDC/Synapse results - seeking guidance #29

@KyriakiKolpetinou

Description


Hi,

I've been trying to reproduce the results reported in your paper for the ACDC and Synapse datasets using the nnFormer training pipeline as described, but I'm getting significantly lower Dice scores. My setup:

  • nnFormer repo
  • Your SegFormer3D baseline integrated into nnFormer
  • GPU: NVIDIA A100
  • Patch size used: [14, 160, 160] for ACDC (from the nnFormer plans); [64, 128, 128] for Synapse
  • I have modified the SegFormer3D network to accept non-cubic inputs.
  • I've also tried AdamW with warm-up and LR scheduling in place of the SGD that nnFormer uses, but I still can't match the reported results.
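For reference, the AdamW + warm-up + scheduling setup I tried looks roughly like the sketch below. The warm-up length, peak LR, weight decay, and epoch count here are placeholder assumptions on my side, not values taken from the paper or from nnFormer's defaults:

```python
import torch

model = torch.nn.Linear(8, 4)  # stand-in for the SegFormer3D network

# AdamW in place of nnFormer's default SGD; hyperparameters are assumptions
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

warmup_epochs, total_epochs = 10, 1000

# Linear warm-up for the first epochs, then cosine decay to ~0
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1e-2, total_iters=warmup_epochs)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_epochs - warmup_epochs)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[warmup_epochs])

for epoch in range(total_epochs):
    # ... one training epoch over the nnFormer dataloader ...
    optimizer.step()   # placeholder; normally called once per batch
    scheduler.step()
```

If the paper used different warm-up or decay settings, that alone could explain part of the gap, which is why I'm asking about the exact training configuration.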

For example, on ACDC I'm getting:
  • Dice_rv: 0.8156
  • Dice_myo: 0.8141
  • Dice_lv: 0.9206

which are well below the numbers reported in the paper, so I suspect I'm missing something.

So my questions are:
Are there any specific hyperparameters or training configurations beyond the default nnFormer settings that you used? Could the patch sizes I'm using be wrong?

Thank you in advance!
