I am a second-year Ph.D. student at the University of Texas at Austin, advised by Prof. Zhangyang Wang. Since March 2025, I have been working as a Research Fellow at Anthropic. Prior to that, I received my M.S. and B.S. degrees from Shanghai Jiao Tong University (SJTU) in 2023 and 2020, respectively.
My current research interests lie in LLM safety, alignment, and reasoning. If you're interested in my work, feel free to contact me.

🔥 News

  • 2025.03:  🎉🎉 I'm going to join Anthropic as an AI safety Research Fellow!
  • 2023.06:  🎉🎉 I'm going to join VITA in August 2023!

๐Ÿ“ Publications

LLaGA: Large Language and Graph Assistant [ICML 2024]

Runjin Chen, Tong Zhao, Ajay Jaiswal, Neil Shah, Zhangyang Wang

Extracting and Understanding the Superficial Knowledge in Alignment [NAACL 2025]

Runjin Chen, Gabriel Jacob Perin, Xuxi Chen, Xilun Chen, Yan Han, Nina S. T. Hirata, Junyuan Hong, Bhavya Kailkhura

Enhancing Item Tokenization for Generative Recommendation through Self-Improvement

Runjin Chen, Mingxuan Ju, Ngoc Bui, Dimosthenis Antypas, Stanley Cai, Xiaopeng Wu, Leonardo Neves, Zhangyang Wang, Neil Shah, Tong Zhao

Q-Hitter: A Better Token Oracle for Efficient LLM Inference via Sparse-Quantized KV Cache [MLSys 2024]

Zhenyu Zhang, Shiwei Liu, Runjin Chen, Bhavya Kailkhura, Beidi Chen, Zhangyang Wang

Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding [NeurIPS 2024]

Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang

GCF-RD: A Graph-based Contrastive Framework for Semi-Supervised Learning on Relational Databases [CIKM 2022]

Runjin Chen, Tong Li, Yanyan Shen, Luyu Qiu, Kaidi Li, Caleb Chen Cao

GNEM: A Generic One-to-Set Neural Entity Matching [WWW 2021]

Runjin Chen, Yanyan Shen, Dongxiang Zhang

Explaining Neural Networks Semantically and Quantitatively [ICCV 2019 oral]

Runjin Chen*, Hao Chen*, Jie Ren, Ge Huang, Quanshi Zhang

Towards a Deep and Unified Understanding of Deep Neural Models in NLP [ICML 2019]

Chaoyu Guan*, Xiting Wang*, Quanshi Zhang, Runjin Chen, Di He, Xing Xie

📖 Education

  • 2023 - 2028 (expected), Ph.D. in Electrical and Computer Engineering, University of Texas at Austin.
  • 2020 - 2023, M.S. in Computer Science, Shanghai Jiao Tong University.
  • 2016 - 2020, B.S. in Computer Science, Shanghai Jiao Tong University.

💻 Internships

  • 2025.03 - 2025.09, Research Fellow at Anthropic, Berkeley.

  • 2024.06 - 2024.10, Research Intern at Snap, Bellevue.

  • 2023.02 - 2023.06, Machine Learning Engineer at ByteDance, Shanghai.

🎖 Honors and Awards

  • 2023.03 Outstanding Graduate Student of Shanghai Jiao Tong University.
  • 2021.07 Yangyuanqin Scholarship.
  • 2020.06 Zhiyuan Honor Degree of Bachelor of Engineering.
  • 2018.11 First Prize in the Contemporary Undergraduate Mathematical Contest in Modeling.
  • 2018.03 Meritorious Prize in Mathematical Contest in Modeling.
  • 2017.11 National Scholarship Award (top 1% in SJTU).