Hello, and thank you for sharing this impressive work.
I really enjoyed reading your paper, and I was especially impressed by the idea and the strong performance of SparseDriveV2. The overall design and results are truly remarkable.
I was wondering if you could share a bit more information about the training time or overall computational cost. The paper describes the training configuration (GPU count, batch size, and number of epochs), but I could not find details about the actual wall-clock training time.
If possible, could you let me know approximately:
- how long training took for the NAVSIM setting,
- how long training took for the Bench2Drive setting, and
- whether any parts of training were particularly expensive in practice (for example, evaluating the PDM score during training).
Thank you again for your excellent work.
Best regards