Dimension mismatch while loading model from checkpoint #5

@Rajrup

Thanks for sharing this great work!

I am currently hitting an issue while running the evaluation for the PointGroup detector with the checkpoint file you shared:

```shell
python scripts/eval.py --folder <output_folder> --task detection
```

Output:

```
Traceback (most recent call last):
  File "scripts/eval.py", line 522, in
    model = init_model(cfg, dataset)
  File "scripts/eval.py", line 121, in init_model
    model.load_state_dict(checkpoint["state_dict"], strict=False)
  File "/home/rajrup/miniconda3/envs/d3net-original/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PipelineNet:
	size mismatch for embeddings: copying a param with shape torch.Size([3441, 300]) from checkpoint, the shape in current model is torch.Size([3535, 300]).
	size mismatch for speaker.caption.embeddings: copying a param with shape torch.Size([3441, 300]) from checkpoint, the shape in current model is torch.Size([3535, 300]).
	size mismatch for speaker.caption.classifier.2.weight: copying a param with shape torch.Size([3441, 512]) from checkpoint, the shape in current model is torch.Size([3535, 512]).
	size mismatch for speaker.caption.classifier.2.bias: copying a param with shape torch.Size([3441]) from checkpoint, the shape in current model is torch.Size([3535]).
```

The dimensions of the tensors in the checkpoint don't match those expected by the code: every mismatched dimension is the vocabulary size (3441 in the checkpoint vs. 3535 in the current model), so it looks like the vocabulary built from my data differs from the one used at training time. Before the model-load step, the val splits and the vocabulary load fine. I might be missing something here. Could you please help me resolve this issue?
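For reference, here is the small helper I used to list exactly which parameters diverge before calling `load_state_dict`. It is just a sketch: it compares plain shape dictionaries, and the commented `torch.load` lines show how I built those dictionaries from the checkpoint and the freshly initialized `PipelineNet` (paths and variable names are placeholders):

```python
def shape_mismatches(ckpt_shapes, model_shapes):
    """Return {param_name: (checkpoint_shape, model_shape)} for every
    parameter present in both dicts whose shapes disagree."""
    return {
        name: (ckpt_shapes[name], model_shapes[name])
        for name in ckpt_shapes.keys() & model_shapes.keys()
        if ckpt_shapes[name] != model_shapes[name]
    }

# In practice the shape dicts come from torch (placeholder path/model):
#   ckpt_shapes  = {k: tuple(v.shape)
#                   for k, v in torch.load("<checkpoint.pth>")["state_dict"].items()}
#   model_shapes = {k: tuple(v.shape) for k, v in model.state_dict().items()}
#   print(shape_mismatches(ckpt_shapes, model_shapes))
```

Running this confirms that only the vocabulary-sized tensors (embeddings and the final classifier layer) differ, which is what the traceback above shows.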

Thanks!
