<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Chen Wei</title>
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images_new/Rice_Shield_280_Blue.svg">
<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.4/css/all.min.css" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
</head>
<body>
<div class="container">
<!-- ===== Hero ===== -->
<div class="hero">
<div class="hero-text">
<h1>Chen Wei</h1>
<p>
I am an Assistant Professor of Computer Science at <a href="https://www.rice.edu/">Rice University</a>.
</p>
<p>
I completed my Ph.D. at <a href="https://www.jhu.edu/">Johns Hopkins University</a>, advised by
<a href="https://en.wikipedia.org/wiki/Bloomberg_Distinguished_Professorships">Bloomberg Distinguished Professor</a>
<a href="https://cs.jhu.edu/~ayuille/">Alan L. Yuille</a>. After my Ph.D., I joined
<a href="https://ai.meta.com/research/">Meta FAIR</a> as a postdoctoral researcher.
During my doctoral studies, I also interned at <a href="https://deepmind.google/">Google DeepMind</a> and Meta FAIR.
I received my B.S. with honors from <a href="https://eecs.pku.edu.cn/en/">Peking University EECS</a>.
</p>
<p>
<strong>[Hiring]</strong> We welcome graduate and undergraduate interns to join our projects. Feel free to <a href="mailto:weichen3012@gmail.com">drop me an email</a>!
</p>
<div class="social-links">
<a href="https://scholar.google.com/citations?hl=en&user=LHQGpBUAAAAJ" target="_blank">
<i class="icon ai ai-google-scholar ai-lg"></i>
</a>
<span class="sep">·</span>
<a href="https://x.com/_Chen_Wei_" target="_blank">
<i class="icon fab fa-twitter"></i>
</a>
<span class="sep">·</span>
<a href="https://www.linkedin.com/in/chen-wei-252532247/" target="_blank">
<i class="icon fab fa-linkedin"></i>
</a>
<span class="sep">·</span>
<a href="mailto:weichen3012@gmail.com"><tt>weichen3012[at]gmail.com</tt></a>
</div>
</div>
<div class="hero-photo">
<a href="images_new/profile_3.jpg"><img alt="profile photo" src="images_new/profile_3.jpg"></a>
</div>
</div>
<!-- ===== Research Interest ===== -->
<div class="section">
<div class="section-header"><h2>Research</h2></div>
<p>
My research asks: <strong>how can AI learn to see, reason about, and act in the visual world with minimal supervision?</strong> I approach this from three angles:
</p>
<ul style="padding-left:20px; margin-top:10px; margin-bottom:0;">
<li><strong>Visual Representation Learning:</strong> building foundational visual encoders (<a href="https://arxiv.org/abs/2111.07832">iBOT</a>, <a href="https://arxiv.org/abs/2112.09133">MaskFeat</a>, <a href="https://arxiv.org/abs/2504.13181">Perception Encoder</a>) that power downstream perception at scale.</li>
<li><strong>Generative World Models:</strong> using generative models not just to create, but to understand: learning representations from diffusion (<a href="https://weichen582.github.io/diffmae.html">DiffMAE</a>, <a href="https://dediffusion.github.io/">De-Diffusion</a>) and generating explorable worlds (<a href="https://www.genex.world/">GenEx</a>).</li>
<li><strong>Agentic Vision:</strong> enabling models to reason and act through vision, from game-based reasoning (<a href="https://yunfeixie233.github.io/ViGaL/">Play to Generalize</a>) to tool-augmented visual agents (<a href="https://agent-x.space/pyvision-rl/">PyVision-RL</a>).</li>
</ul>
</div>
<!-- ===== News ===== -->
<div class="section">
<div class="section-header"><h2>News</h2></div>
<ul class="news-list">
<li><span class="news-date">2026</span> Co-organizing the <a href="https://beckschen.github.io/cvpr26wmas">CVPR 2026 Workshop on World Models Meet Active Sensing and Closed-Loop Planning</a>.</li>
<li><span class="news-date">2026</span> Selected as an <a href="https://aaai.org/conference/aaai/aaai-26/new-faculty-highlights-program/">AAAI New Faculty Highlight</a>.</li>
<li><span class="news-date">2026</span> Congratulations to Yunfei Xie — <a href="https://arxiv.org/abs/2506.08011">Play to Generalize</a> accepted to <strong>ICLR 2026</strong>!</li>
<li><span class="news-date">2025</span> <a href="https://arxiv.org/abs/2504.13181">Perception Encoder</a> accepted to NeurIPS 2025 as <strong>Oral</strong> presentation.</li>
<li><span class="news-date">2025</span> Congratulations to <a href="https://yunfeixie233.github.io/">Yunfei Xie</a> for receiving the <a href="https://lambda.ai/research">Lambda Research Grant</a> ($10,000) for his work on <a href="https://arxiv.org/abs/2506.08011">Play to Generalize</a>.</li>
<li><span class="news-date">2025</span> Joined <a href="https://www.rice.edu/">Rice University</a> as Assistant Professor of Computer Science.</li>
</ul>
</div>
<!-- ===== Publications ===== -->
<div class="section">
<div class="section-header">
<h2>Selected Publications</h2>
</div>
<div id="publicationsTable">
<!-- Play to Generalize -->
<div class="pub-row publication">
<div class="pub-image-box">
<div class="one">
<img style="width:100%;height:auto;" src="https://raw.githubusercontent.com/yunfeixie233/ViGaL/main/fig/teaser.png" onerror="this.style.background='#e8e8e8';this.removeAttribute('src')">
</div>
</div>
<div class="pub-text-box">
<a href="https://arxiv.org/abs/2506.08011">
<span class="papertitle">Play to Generalize: Learning to Reason Through Game Play</span>
</a>
<br>
<a href="https://yunfeixie233.github.io/" class="author">Yunfei Xie</a>,
<a href="#" class="author">Yinsong Ma</a>,
<a href="#" class="author">Shiyi Lan</a>,
<a href="http://cs.jhu.edu/~ayuille/" class="author">Alan Yuille</a>,
<a href="https://lambert-x.github.io/" class="author">Junfei Xiao</a>,
<strong>Chen Wei</strong>
<br>
<em>ICLR</em>, 2026
<br>
<a href="https://arxiv.org/abs/2506.08011">arXiv</a> /
<a href="https://yunfeixie233.github.io/ViGaL/">project page</a> /
<a href="https://github.com/yunfeixie233/ViGaL">code</a>
</div>
</div>
<!-- PyVision-RL + PyVision -->
<div class="pub-row publication">
<div class="pub-image-box">
<div class="one">
<img style="width:100%;height:auto;" src="https://agent-x.space/pyvision/img/method-zst.drawio.svg" onerror="this.style.background='#e8e8e8';this.style.minHeight='80px';this.removeAttribute('src')">
</div>
</div>
<div class="pub-text-box">
<a href="https://arxiv.org/abs/2602.20739">
<span class="papertitle">PyVision-RL: Forging Open Agentic Vision Models via RL</span>
</a>
<br>
<a href="https://zhaoshitian.github.io/" class="author">Shitian Zhao</a>,
<a href="#" class="author">Shaoheng Lin</a>,
<a href="#" class="author">Ming Li</a>,
<a href="#" class="author">Haoquan Zhang</a>,
<a href="#" class="author">Wenshuo Peng</a>,
<a href="#" class="author">Kaipeng Zhang†</a>,
<strong>Chen Wei†</strong>
<br>
<em>arXiv</em>, 2026
<br>
<a href="https://arxiv.org/abs/2602.20739">arXiv</a> /
<a href="https://agent-x.space/pyvision-rl/">project page</a> /
<a href="https://github.com/agents-x-project/PyVision-RL">code</a>
<!-- PyVision (nested sub-block) -->
<div style="margin-top:10px; padding:8px 12px; border-left:2px solid var(--border); font-size:0.85em; color:var(--text-muted);">
<a href="https://arxiv.org/abs/2507.07998">
<span class="papertitle">PyVision: Agentic Vision with Dynamic Tooling</span>
</a>
<br>
<a href="https://zhaoshitian.github.io/" class="author">Shitian Zhao</a>,
<a href="#" class="author">Haoquan Zhang</a>,
<a href="#" class="author">Shaoheng Lin</a>,
<a href="#" class="author">Ming Li</a>,
<a href="#" class="author">Qilong Wu</a>,
<a href="#" class="author">Kaipeng Zhang†</a>,
<strong>Chen Wei†</strong>
<br>
<em>NeurIPS 2025 Workshop on Multi-Turn Interactions in LLMs</em>
<br>
<a href="https://arxiv.org/abs/2507.07998">arXiv</a> /
<a href="https://agent-x.space/pyvision/">project page</a> /
<a href="https://github.com/agents-x-project/PyVision">code</a>
</div>
</div>
</div>
<!-- Perception Encoder -->
<div class="pub-row publication">
<div class="pub-image-box">
<div class="one">
<img style="width:100%;height:auto;" src="https://raw.githubusercontent.com/facebookresearch/perception_models/main/apps/pe/docs/assets/teaser.png" onerror="this.style.background='#e8e8e8';this.removeAttribute('src')">
</div>
</div>
<div class="pub-text-box">
<a href="https://arxiv.org/abs/2504.13181">
<span class="papertitle">Perception Encoder: The Best Visual Embeddings are Not at the Output of the Network</span>
</a>
<br>
<a href="https://dbolya.github.io/" class="author">Daniel Bolya*</a>,
<a href="http://www.cs.cmu.edu/~poyaoh/" class="author">Po-Yao Huang*</a>,
<a href="https://peizesun.github.io/" class="author">Peize Sun*</a>,
<a href="https://janghyuncho.github.io/" class="author">Jang Hyun Cho*</a>,
<a href="https://andreamad8.github.io/" class="author">Andrea Madotto*</a>,
<strong>Chen Wei</strong>,
<a href="https://www.linkedin.com/in/tengyu-ma/" class="author">Tengyu Ma</a>,
<a href="#" class="author">Jiale Zhi</a>,
<a href="https://people.eecs.berkeley.edu/~jathushan/" class="author">Jathushan Rajasegaran</a>,
<a href="https://hanoonarasheed.github.io/" class="author">Hanoona Rasheed</a>,
<a href="#" class="author">Junke Wang</a>,
<a href="#" class="author">Marco Monteiro</a>,
<a href="https://howardhsu.github.io/" class="author">Hu Xu</a>,
<a href="#" class="author">Shiyu Dong</a>,
<a href="https://nikhilaravi.com/" class="author">Nikhila Ravi</a>,
<a href="#" class="author">Daniel Li</a>,
<a href="https://pdollar.github.io/" class="author">Piotr Dollár</a>,
<a href="https://feichtenhofer.github.io/" class="author">Christoph Feichtenhofer</a>
<br>
<em>NeurIPS</em>, 2025 <strong class="highlight">Oral</strong>
<br>
<a href="https://arxiv.org/abs/2504.13181">arXiv</a> /
<a href="https://github.com/facebookresearch/perception_models">code</a> /
<a href="https://ai.meta.com/blog/meta-fair-updates-perception-localization-reasoning/">Meta AI Blog</a>
</div>
</div>
<!-- GenEx -->
<div class="pub-row publication">
<div class="pub-image-box">
<div class="one">
<video width="100%" height="auto" autoplay loop muted playsinline>
<source src="images_new/genex.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="pub-text-box">
<a href="https://arxiv.org/abs/2412.09624">
<span class="papertitle">GenEx: Generating an Explorable World</span>
</a>
<br>
<a href="https://taiminglu.com/" class="author">Taiming Lu</a>,
<a href="https://www.tshu.io/" class="author">Tianmin Shu</a>,
<a href="https://lambert-x.github.io/" class="author">Junfei Xiao</a>,
<a href="https://openreview.net/profile?id=~Luoxin_Ye1" class="author">Luoxin Ye</a>,
<a href="https://jiahaoplus.github.io/" class="author">Jiahao Wang</a>,
<a href="https://sites.google.com/view/cheng-peng/" class="author">Cheng Peng</a>,
<strong>Chen Wei</strong>,
<a href="https://danielkhashabi.com/" class="author">Daniel Khashabi</a>,
<a href="https://engineering.jhu.edu/ece/faculty/rama-chellappa/" class="author">Rama Chellappa</a>,
<a href="http://cs.jhu.edu/~ayuille/" class="author">Alan Yuille</a>,
<a href="https://beckschen.github.io/" class="author">Jieneng Chen</a>
<br>
<em>ICLR</em>, 2025
<br>
<a href="https://arxiv.org/abs/2412.09624">arXiv</a> /
<a href="https://www.genex.world/">genex.world</a> /
<a href="https://github.com/GenEx-world/genex">code</a>
</div>
</div>
<!-- De-Diffusion -->
<div class="pub-row publication" onmouseout="dediffusion_stop()" onmouseover="dediffusion_start()">
<div class="pub-image-box">
<div class="one">
<div class="two" id="dediffusion_image"><img style="width:100%;height:auto;" src="images_new/dediffusion_text.png"></div>
<img style="width:100%;height:auto;" src="images_new/corgi.jpeg">
</div>
<script type="text/javascript">
function dediffusion_start() { document.getElementById('dediffusion_image').style.opacity = "1"; }
function dediffusion_stop() { document.getElementById('dediffusion_image').style.opacity = "0"; }
dediffusion_stop()
</script>
</div>
<div class="pub-text-box">
<a href="https://arxiv.org/abs/2311.00618">
<span class="papertitle">De-Diffusion Makes Text a Strong Cross-Modal Interface</span>
</a>
<br>
<strong>Chen Wei</strong>,
<a href="https://www.cs.jhu.edu/~cxliu/" class="author">Chenxi Liu</a>,
<a href="https://www.cs.jhu.edu/~syqiao/" class="author">Siyuan Qiao</a>,
<a href="https://zhishuai.xyz/" class="author">Zhishuai Zhang</a>,
<a href="http://cs.jhu.edu/~ayuille/" class="author">Alan Yuille</a>,
<a href="https://jiahuiyu.com/" class="author">Jiahui Yu</a>
<br>
<em>CVPR</em>, 2024
<br>
<a href="https://arxiv.org/abs/2311.00618">arXiv</a> /
<a href="https://dediffusion.github.io/">project page</a>
</div>
</div>
<!-- DiffMAE -->
<div class="pub-row publication" onmouseout="diffmae_stop()" onmouseover="diffmae_start()">
<div class="pub-image-box">
<div class="one">
<div class="two" id="diffmae_image"><img style="width:100%;height:auto;" src="diffmae/interpolation/input.png"></div>
<img style="width:100%;height:auto;" src="diffmae/interpolation/2.png">
</div>
<script type="text/javascript">
function diffmae_start() { document.getElementById('diffmae_image').style.opacity = "1"; }
function diffmae_stop() { document.getElementById('diffmae_image').style.opacity = "0"; }
diffmae_stop()
</script>
</div>
<div class="pub-text-box">
<a href="https://arxiv.org/abs/2304.03283">
<span class="papertitle">Diffusion Models as Masked Autoencoders</span>
</a>
<br>
<strong>Chen Wei</strong>,
<a href="https://karttikeya.github.io/" class="author">Karttikeya Mangalam</a>,
<a href="http://www.cs.cmu.edu/~poyaoh/" class="author">Po-Yao Huang</a>,
<a href="https://lyttonhao.github.io/" class="author">Yanghao Li</a>,
<a href="https://haoqifan.github.io/" class="author">Haoqi Fan</a>,
<a href="https://howardhsu.github.io/" class="author">Hu Xu</a>,
<a href="https://csrhddlam.github.io/" class="author">Huiyu Wang</a>,
<a href="https://cihangxie.github.io/" class="author">Cihang Xie</a>,
<a href="http://cs.jhu.edu/~ayuille/" class="author">Alan Yuille</a>,
<a href="https://feichtenhofer.github.io/" class="author">Christoph Feichtenhofer</a>
<br>
<em>ICCV</em>, 2023
<br>
<a href="https://arxiv.org/abs/2304.03283">arXiv</a> /
<a href="https://weichen582.github.io/diffmae.html">project page</a> /
<a href="https://www.marktechpost.com/2023/04/11/a-new-ai-research-integrates-masking-into-diffusion-models-to-develop-diffusion-masked-autoencoders-diffmae-a-self-supervised-framework-designed-for-recognizing-and-generating-images-and-videos/">press</a>
</div>
</div>
<!-- MaskFeat -->
<div class="pub-row publication" onmouseout="maskfeat_stop()" onmouseover="maskfeat_start()">
<div class="pub-image-box">
<div class="one">
<div class="two" id="maskfeat_image"><img style="width:100%;height:auto;" src="images_new/maskfeat_after.png"></div>
<img style="width:100%;height:auto;" src="images_new/maskfeat_before.png">
</div>
<script type="text/javascript">
function maskfeat_start() { document.getElementById('maskfeat_image').style.opacity = "1"; }
function maskfeat_stop() { document.getElementById('maskfeat_image').style.opacity = "0"; }
maskfeat_stop()
</script>
</div>
<div class="pub-text-box">
<a href="https://arxiv.org/pdf/2112.09133.pdf">
<span class="papertitle">Masked Feature Prediction for Self-Supervised Visual Pre-Training</span>
</a>
<br>
<strong>Chen Wei*</strong>,
<a href="https://haoqifan.github.io/" class="author">Haoqi Fan</a>,
<a href="https://vcl.ucsd.edu/~sxie/" class="author">Saining Xie</a>,
<a href="https://chaoyuan.org/" class="author">Chao-Yuan Wu</a>,
<a href="http://cs.jhu.edu/~ayuille/" class="author">Alan Yuille</a>,
<a href="https://feichtenhofer.github.io/" class="author">Christoph Feichtenhofer*</a>
<br>
<em>CVPR</em>, 2022
<br>
<a href="https://arxiv.org/pdf/2112.09133.pdf">arXiv</a> /
<a href="https://github.com/facebookresearch/SlowFast">code</a> /
<a class="highlight" href="https://www.paperdigest.org/2023/04/most-influential-cvpr-papers-2023-04/">Most Influential CVPR 2023 Papers</a>
</div>
</div>
<!-- iBOT -->
<div class="pub-row publication" onmouseout="ibot_stop()" onmouseover="ibot_start()">
<div class="pub-image-box">
<div class="one">
<div class="two" id="ibot_image"><img style="width:100%;height:auto;" src="images_new/ibot_after.png"></div>
<img style="width:100%;height:auto;" src="images_new/ibot_before.png">
</div>
<script type="text/javascript">
function ibot_start() { document.getElementById('ibot_image').style.opacity = "1"; }
function ibot_stop() { document.getElementById('ibot_image').style.opacity = "0"; }
ibot_stop()
</script>
</div>
<div class="pub-text-box">
<a href="https://arxiv.org/pdf/2111.07832.pdf">
<span class="papertitle">iBOT: Image BERT Pre-Training with Online Tokenizer</span>
</a>
<br>
<a href="https://shallowtoil.github.io/" class="author">Jinghao Zhou</a>,
<strong>Chen Wei</strong>,
<a href="https://csrhddlam.github.io/" class="author">Huiyu Wang</a>,
<a href="https://shenwei1231.github.io/" class="author">Wei Shen</a>,
<a href="https://cihangxie.github.io/" class="author">Cihang Xie</a>,
<a href="http://cs.jhu.edu/~ayuille/" class="author">Alan Yuille</a>,
<a href="http://www.taokong.org/" class="author">Tao Kong</a>
<br>
<em>ICLR</em>, 2022
<br>
<a href="https://arxiv.org/pdf/2111.07832.pdf">arXiv</a> /
<a href="https://github.com/bytedance/ibot">code</a> /
<a href="https://medium.com/syncedreview/meet-ibot-a-masked-image-modelling-framework-that-enables-bert-like-pretraining-for-vision-da01002115e7">press</a> /
<a class="highlight" href="https://dinov2.metademolab.com/">Improved and scaled up to DINOv2 by Meta AI.</a>
</div>
</div>
</div><!-- end #publicationsTable -->
</div><!-- end .section Publications -->
<!-- ===== Footer ===== -->
<div class="footer">
Last update: Mar. 2026
</div>
</div><!-- end .container -->
</body>
</html>