Hello, this is
Zhangchen Xu (徐张晨).
Bio
I am a third-year PhD student in the Network Security Lab at the University of Washington, advised by Prof. Radha Poovendran. I am also a part-time research intern at Microsoft GenAI. Prior to UW, I graduated from the University of Electronic Science and Technology of China (UESTC) and the University of Glasgow (UofG) with a B.E. in Communication Engineering. During my undergrad, I was advised by Prof. Lei Zhang.
My email -> zxu9 [at] uw [dot] edu
Research Interests
My primary interests lie broadly in machine learning, networking, and security, with a current focus on the safety and alignment of large language models (LLMs). My research directions include:
- LLM Safety. I investigate emerging security threats to LLMs and explore defense mechanisms. I'm particularly interested in inference-time defenses, including SafeDecoding (against jailbreaking), CleanGen (against backdoor attacks), and Shield (for LLM-integrated applications).
- LLM Alignment. I use synthetic data to train LLMs to be more helpful and better aligned with human values. I developed the Magpie datasets (SOTA synthetic datasets for LLM alignment!) and the MagpieLM models (SOTA small language models!).
- Federated Learning. I study security, privacy, and fairness in large-scale federated learning systems. Work includes ACE (a contribution evaluation attack) and Brave.
During my undergraduate studies, my research focused on the theory and algorithms of distributed consensus. Work includes Voting Validity, Wireless Distributed Consensus, and Distributed Consensus Network.