

Hello, this is
Zhangchen Xu (徐张晨).
Bio
I am a third-year PhD student in the Network Security Lab at the University of Washington, advised by Prof. Radha Poovendran. Prior to joining UW, I completed a joint B.E. in Communication Engineering from the University of Electronic Science and Technology of China (UESTC) and the University of Glasgow (UofG). During my undergraduate studies, I was advised by Prof. Lei Zhang.
My email -> zxu9 [a-t] uw [d-o-t] edu. Feel free to reach out if you would like to discuss synthetic data, safety, and post-training of LLMs, SLMs, and VLMs.
Research Interests
I work on Generative AI, with a focus on synthetic data generation, post-training, and the safety of large language models (LLMs). My current research directions include:
Synthetic Data Generation
I conduct data-centric research focused on enhancing LLMs with synthetic data.
- 🐦 Magpie [ICLR’25] is a family of SOTA synthetic datasets for LLM alignment. MagpieLM models are SOTA small language models for chat.
- 🐱 KodCode [ACL’25] is the largest fully-synthetic open-source dataset providing verifiable solutions and tests for LLM coding.
- 🦁 VisualSphinx is a synthetic open-source dataset for visual logic reasoning.
LLM Post-Training
- Model distillation from powerful LLMs to smaller models. My analysis papers on this topic include:
- Larger Models’ Paradox [NAACL’25] examines the choices of response generators for LLM alignment.
- Small Model Learnability Gap [ACL’25] investigates how small models (≤3B parameters) can benefit from long chain-of-thought (CoT) reasoning via distillation.
- Reinforcement Learning for enhanced reasoning ability. My papers on this topic include:
- TinyV investigates the impact of false negatives in Reinforcement Learning with Verifiable Rewards (RLVR).
- Temporal Sampling examines the phenomenon of Temporal Forgetting during LLM post-training.
LLM Safety
I investigate emerging threats in LLMs (e.g., ArtPrompt [ACL’24], ChatBug [AAAI’25], SafeChain [ACL’25]), and explore inference-time defenses (e.g., SafeDecoding [ACL’24], CleanGen [EMNLP’24], Shield [AsiaCCS’24]).
Distributed Algorithms
I also worked on distributed algorithms during my undergrad and early PhD.
- Federated Learning: work includes ACE [Usenix’24] (a contribution evaluation attack) and Brave [AsiaCCS’24].
- Distributed Consensus: work includes Voting Validity [IPDPS’23], Wireless Distributed Consensus, and Distributed Consensus Network.
(See here for the full publication list.)
Selected Work
KodCode: A Diverse, Challenging, and Verifiable Synthetic Dataset for Coding
Zhangchen Xu, Yang Liu, Yueqin Yin, Mingyuan Zhou, Radha Poovendran
ACL 2025 (Findings) | Paper / Website / Huggingface / Code