
Hello, this is

Zhangchen Xu (徐张晨).

Bio

I am a third-year PhD student in the Network Security Lab at the University of Washington, advised by Prof. Radha Poovendran. Prior to joining UW, I completed a joint B.E. in Communication Engineering from the University of Electronic Science and Technology of China (UESTC) and the University of Glasgow (UofG). During my undergrad, I was advised by Prof. Lei Zhang.

My email -> zxu9 [a-t] uw [d-o-t] edu. Feel free to reach out if you would like to discuss Synthetic Data, Safety, and Post-training of LLMs, SLMs, and VLMs.

Research Interests

I work on Generative AI, with a current focus on synthetic data generation, post-training, and safety of large language models (LLMs). My research directions include:

Synthetic Data Generation

I conduct data-centric research focused on enhancing LLMs with synthetic data.

LLM Post-Training

  1. Model distillation from powerful LLMs to smaller models, with analysis papers on this topic.
  2. Reinforcement learning for enhanced reasoning ability. My papers on this topic include:
  • TinyV investigates the impact of false negatives in Reinforcement Learning with Verifiable Rewards (RLVR).
  • Temporal Sampling examines the phenomenon of Temporal Forgetting during LLM post-training.

LLM Safety

I investigate emerging threats in LLMs (e.g., ArtPrompt [ACL’24], ChatBug [AAAI’25], SafeChain [ACL’25]) and explore inference-time defenses (e.g., SafeDecoding [ACL’24], CleanGen [EMNLP’24], Shield [AsiaCCS’24]).

Distributed Algorithms

I also worked on distributed algorithms during my undergrad and early PhD.

Federated Learning. Work includes ACE [USENIX’24] (a model poisoning attack on contribution evaluation methods) and Brave [AsiaCCS’24].

Distributed Consensus. Work includes Voting Validity [IPDPS’23], Wireless Distributed Consensus, and Distributed Consensus Network.

Selected Work (see here for the full publication list)

KodCode: A Diverse, Challenging, and Verifiable Synthetic Dataset for Coding

Zhangchen Xu, Yang Liu, Yueqin Yin, Mingyuan Zhou, Radha Poovendran

ACL 2025 (Findings) | Paper / Website / Huggingface / Code

Stronger Models are NOT Stronger Teachers for Instruction Tuning

Zhangchen Xu, Fengqing Jiang, Luyao Niu, Bill Yuchen Lin, Radha Poovendran

NAACL 2025 (Main) | Paper / Dataset

Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing

Zhangchen Xu, Fengqing Jiang, Luyao Niu, Yuntian Deng, Radha Poovendran, Yejin Choi, Bill Yuchen Lin

ICLR 2025 | Paper / Website / Huggingface / Code / Demo / 新智元

ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates

Fengqing Jiang*, Zhangchen Xu*, Luyao Niu*, Bill Yuchen Lin, Radha Poovendran

AAAI 2025 | Paper / Code

ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning

Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bo Li, Radha Poovendran

USENIX Security 2024 | Paper / Slides

CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models

Yuetai Li*, Zhangchen Xu*, Fengqing Jiang, Luyao Niu, Dinuka Sahabandu, Bhaskar Ramasubramanian, Radha Poovendran

EMNLP 2024 (Main) | Paper / Code

SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding

Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, Radha Poovendran

ACL 2024 (Main) | Paper / Code / Poster / Slides

ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs

Fengqing Jiang*, Zhangchen Xu*, Luyao Niu*, Zhen Xiang, Bhaskar Ramasubramanian, Bo Li, Radha Poovendran

ACL 2024 (Main) | Paper / Code / Poster