[ICCV2025] CoopTrack: Exploring End-to-End Learning for Efficient Cooperative Sequential Perception

Jiaru Zhong, Jiahao Wang, Jiahui Xu, Xiaofan Li, Zaiqing Nie*, Haibao Yu*

CoopTrack Weights 

News

  • August 31, 2025: The code and models have been open-sourced.
  • July 25, 2025: The CoopTrack paper is now available on arXiv, and CoopTrack has been selected as a Highlight.
  • June 26, 2025: CoopTrack has been accepted to ICCV 2025! We will release the paper and code soon!

Table of Contents

  • Introduction
  • Getting Started
  • Contact
  • Citation
  • Related Works

Introduction

Cooperative perception aims to address the inherent limitations of single-vehicle autonomous driving systems through information exchange among multiple agents. Previous research has primarily focused on single-frame perception tasks. However, the more challenging cooperative sequential perception tasks, such as cooperative 3D multi-object tracking, have not been thoroughly investigated. Therefore, we propose CoopTrack, a fully instance-level end-to-end framework for cooperative tracking, featuring learnable instance association, which fundamentally differs from existing approaches. CoopTrack transmits sparse instance-level features that significantly enhance perception capabilities while maintaining low transmission costs. Furthermore, the framework comprises two key components: Multi-Dimensional Feature Extraction, and Cross-Agent Association and Aggregation, which collectively enable comprehensive instance representation with semantic and motion features, and adaptive cross-agent association and fusion based on a feature graph. Experiments on both the V2X-Seq and Griffin datasets demonstrate that CoopTrack achieves excellent performance. Specifically, it attains state-of-the-art results on V2X-Seq, with 39.0% mAP and 32.8% AMOTA.
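The cross-agent association described above is learnable in CoopTrack itself; purely as a rough illustration of the underlying idea (not the paper's implementation), the sketch below associates sparse instance-level features from two agents via a cosine-similarity feature graph, matches them greedily, and fuses matched pairs by averaging. The function name, the greedy matcher, the threshold, and the averaging fusion are all assumptions for illustration.

```python
import numpy as np

def associate_and_fuse(ego_feats, inf_feats, sim_thresh=0.5):
    """Associate instance features from two agents via a cosine-similarity
    feature graph, then fuse matched pairs by simple averaging.

    ego_feats: (N_ego, C) ego-vehicle instance features
    inf_feats: (N_inf, C) infrastructure instance features
    Returns (matches, fused), where matches is a list of (ego_idx, inf_idx).
    """
    # Build the similarity graph between the two agents' feature sets
    n_ego = ego_feats / np.linalg.norm(ego_feats, axis=1, keepdims=True)
    n_inf = inf_feats / np.linalg.norm(inf_feats, axis=1, keepdims=True)
    sim = n_ego @ n_inf.T  # (N_ego, N_inf)

    # Greedy one-to-one matching: highest-similarity pairs first
    matches, used_ego, used_inf = [], set(), set()
    for flat in np.argsort(sim, axis=None)[::-1]:
        r, c = divmod(int(flat), sim.shape[1])
        if sim[r, c] < sim_thresh:
            break  # remaining pairs are too dissimilar to associate
        if r in used_ego or c in used_inf:
            continue
        matches.append((r, c))
        used_ego.add(r)
        used_inf.add(c)

    # Fuse matched instances; unmatched ego instances pass through unchanged
    fused = ego_feats.copy()
    for r, c in matches:
        fused[r] = 0.5 * (ego_feats[r] + inf_feats[c])
    return matches, fused
```

Because only the per-instance feature vectors cross the V2X link, this style of exchange keeps transmission cost low compared with sending dense BEV feature maps.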

Getting Started

Contact

If you have any questions, please contact Jiaru Zhong via email (zhong.jiaru@outlook.com).

Citation

If you find CoopTrack useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

@article{zhong2025cooptrack,
  title={CoopTrack: Exploring End-to-End Learning for Efficient Cooperative Sequential Perception},
  author={Zhong, Jiaru and Wang, Jiahao and Xu, Jiahui and Li, Xiaofan and Nie, Zaiqing and Yu, Haibao},
  journal={arXiv preprint arXiv:2507.19239},
  year={2025}
}

Related Works

We are deeply grateful for the following outstanding open-source works; without them, our work would not have been possible.
