Hello, this is Zhangchen Xu.
Bio
I am a second-year PhD student in the Network Security Lab at the University of Washington, Seattle, advised by Prof. Radha Poovendran. Prior to joining UW, I received a joint bachelor's degree in Communication Engineering from the University of Electronic Science and Technology of China (UESTC) and the University of Glasgow (UofG). During my undergrad, I was advised by Prof. Lei Zhang.
Email: zxu9 [at] uw [dot] edu
[Upcoming Travel] USENIX Security 2024, Philadelphia, August 14-16.
Research Interests
My research interests lie broadly in the security, privacy, and fairness of machine learning, with a current emphasis on the safety and alignment of large language models (LLMs). My current research directions include:
- LLM Safety. I investigate emerging security threats to LLMs and explore defense mechanisms. My particular interest lies in inference-time defenses, including SafeDecoding (against jailbreaking), CleanGen (against backdoor attacks), and Shield (for LLM-integrated applications).
- LLM (Open) Alignment. I explore the science behind LLM alignment to make LLMs more helpful and better aligned with human values. I developed the Magpie datasets and models.
- Federated Learning. I study security, privacy, and fairness in large-scale federated learning systems; work includes ACE and Brave.
During my undergraduate studies, my research focused on the theoretical foundations and algorithmic aspects of distributed consensus, including Voting Validity and Wireless Distributed Consensus.
(See here for a full publication list.)
Selected Work
Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing
Zhangchen Xu, Fengqing Jiang, Luyao Niu, Yuntian Deng, Radha Poovendran, Yejin Choi, Bill Yuchen Lin
arXiv | Paper / Website / Huggingface / Code / Demo
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning
Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bo Li, Radha Poovendran
USENIX Security 2024 | Paper