

Hello, this is
Zhangchen Xu (徐张晨).
Bio
I am a third-year PhD student in the Network Security Lab at the University of Washington, advised by Prof. Radha Poovendran. I’m also a part-time research intern at Microsoft GenAI, working with Dr. Yang Liu. Prior to joining UW, I completed a joint B.E. in Communication Engineering from the University of Electronic Science and Technology of China (UESTC) and the University of Glasgow (UofG). During my undergrad, I was advised by Prof. Lei Zhang.
My email: zxu9 [a-t] uw [d-o-t] edu. Feel free to reach out if you would like to discuss Synthetic Data, Safety, and Post-training of LLMs, SLMs, and VLMs.
Research Interests
I work on Generative AI, with a focus on the post-training of large language models (LLMs). My current research directions include:
Synthetic Data Generation
I conduct data-centric research focused on enhancing LLMs through post-training with synthetic data.
- 🦅 Magpie [ICLR’25] is a family of SOTA synthetic datasets for LLM alignment.
- 🐦 MagpieLM models are SOTA small language models for chat.
- 🐱 KodCode is the largest fully synthetic open-source dataset providing verifiable solutions and tests for LLM coding.
In addition, I am interested in distilling capabilities from powerful LLMs into more efficient smaller models. My analysis papers on this topic include:
- Larger Models’ Paradox [NAACL’25] examines the impact of response generators on LLM alignment.
- Small Model Learnability Gap investigates how small models (≤3B parameters) can benefit from long chain-of-thought (CoT) reasoning via distillation.
LLM Safety
I investigate emerging threats to LLMs (e.g., ArtPrompt [ACL’24], ChatBug [AAAI’25], SafeChain) and explore inference-time defenses (e.g., SafeDecoding [ACL’24], CleanGen [EMNLP’24], Shield [AsiaCCS’24]).
Distributed Algorithms
I also worked on distributed algorithms during my undergrad and early PhD:
- Federated Learning: ACE [USENIX Security’24] (a contribution evaluation attack) and Brave [AsiaCCS’24].
- Distributed Consensus: Voting Validity [IPDPS’23], Wireless Distributed Consensus, and Distributed Consensus Network.
(see here for the full publication list)
Selected Work
KodCode: A Diverse, Challenging, and Verifiable Synthetic Dataset for Coding
Zhangchen Xu, Yang Liu, Yueqin Yin, Mingyuan Zhou, Radha Poovendran
arXiv / Website / Hugging Face / Code
Stronger Models are NOT Stronger Teachers for Instruction Tuning
Zhangchen Xu, Fengqing Jiang, Luyao Niu, Bill Yuchen Lin, Radha Poovendran
NAACL 2025 (Main) | Paper