Yidong Ouyang

Hey, I am Yidong Ouyang, a Ph.D. student at UCLA, where I am fortunate to be advised by Guang Cheng. Prior to this, I earned my MPhil from CUHK Shenzhen, where I was advised by Liyan Xie. My research interests lie in generative models, especially diffusion models.

Email  /  CV  /  GitHub  /  Google Scholar

Research
Transfer Learning for Diffusion Models
Yidong Ouyang, Liyan Xie, Hongyuan Zha, Guang Cheng
NeurIPS 2024.
arXiv

We prove that the optimal diffusion model for the target domain integrates the diffusion model pre-trained on the source domain with additional guidance from a domain classifier; we call the resulting procedure TGDP. We further extend TGDP to a conditional version for modeling the joint distribution of data and labels, together with two additional regularization terms that enhance model performance.
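A minimal sketch of this score decomposition (PyTorch-style; source_score, domain_classifier, and the log-odds guidance below are illustrative assumptions, not the paper's code): the target-domain score is approximated by the pre-trained source score plus the gradient of a binary domain classifier's log-odds.

import torch

def target_score(x, t, source_score, domain_classifier):
    # Assumed decomposition: score_target(x, t) = score_source(x, t)
    # + grad_x [log p(target | x, t) - log p(source | x, t)].
    # For a binary classifier, that log-odds is simply its raw logit.
    x = x.detach().requires_grad_(True)
    log_odds = domain_classifier(x, t).sum()
    guidance = torch.autograd.grad(log_odds, x)[0]
    return source_score(x, t) + guidance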

MissDiff: Training Diffusion Models on Tabular Data with Missing Values
Yidong Ouyang, Liyan Xie, Chongxuan Li, Guang Cheng
ICML Workshop on Structured Probabilistic Inference & Generative Modeling, 2023.
arXiv

We first observe that the widely adopted "impute-then-generate" pipeline may lead to a biased learning objective. We then propose masking the regression loss of denoising score matching during training. We prove that the proposed method is consistent in learning the score of the data distribution, and that the training objective serves as an upper bound on the negative likelihood in certain cases.
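A minimal sketch of the masked objective (PyTorch-style, with a simplified constant-noise perturbation; score_net, the mask convention, and all names are illustrative assumptions): the denoising score matching regression loss is evaluated only on observed entries, so missing values never contribute a biased target.

import torch

def masked_dsm_loss(score_net, x0, obs_mask, t, noise_std):
    # x0: tabular batch (missing entries may hold arbitrary values)
    # obs_mask: 1.0 where an entry is observed, 0.0 where it is missing
    noise = torch.randn_like(x0)
    x_t = x0 + noise_std * noise            # perturb the clean data
    target = -noise / noise_std             # standard DSM regression target
    err = (score_net(x_t, t) - target) ** 2
    # mask the regression loss: only observed entries enter the objective
    return (err * obs_mask).sum() / obs_mask.sum()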

Improving Adversarial Robustness Through the Contrastive-Guided Diffusion Process
Yidong Ouyang, Liyan Xie, Guang Cheng
ICML 2023.
arXiv

We first analyze the optimality condition of the synthetic distribution for achieving non-trivial robust accuracy. We show that enhancing the distinguishability among the generated data is critical for improving adversarial robustness. We therefore propose the Contrastive-Guided Diffusion Process (Contrastive-DP), which uses a contrastive loss to guide the diffusion model during data generation.
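An illustrative guided sampling update in this spirit (a sketch under our reading, not the paper's exact sampler; embed, the InfoNCE-style term, and the Langevin-like step are assumptions): each update follows the diffusion score plus the gradient of a contrastive term that keeps differently labeled samples distinguishable.

import torch
import torch.nn.functional as F

def contrastive_guided_step(x, t, score_net, embed, labels, step, scale):
    x = x.detach().requires_grad_(True)
    z = F.normalize(embed(x), dim=-1)              # batch features
    sim = z @ z.t() / 0.1                          # cosine similarity, temperature 0.1
    eye = torch.eye(len(x), dtype=torch.bool, device=x.device)
    sim = sim.masked_fill(eye, -1e9)               # exclude self-pairs
    same = (labels[:, None] == labels[None, :]).float().masked_fill(eye, 0.0)
    # InfoNCE-style term: attract same-class pairs, repel different-class ones
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    loss = -(log_prob * same).sum() / same.sum().clamp(min=1)
    guidance = torch.autograd.grad(-loss, x)[0]    # direction that lowers the loss
    return x + step * (score_net(x, t) + scale * guidance)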

Attention Enables Zero Approximation Error
Zhiying Fang, Yidong Ouyang, Ding-Xuan Zhou, Guang Cheng

arXiv

We theoretically prove that the self-attention model can achieve zero approximation error. Moreover, the proposed model avoids the classical trade-off between approximation error and sample error in mean squared error analysis.
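For context, the trade-off refers to the standard risk decomposition (textbook form, not quoted from the paper):

\[
\mathbb{E}\,\lVert \hat{f}_n - f^{*} \rVert^{2} \;\le\; \underbrace{\inf_{f \in \mathcal{F}} \lVert f - f^{*} \rVert^{2}}_{\text{approximation error}} \;+\; \underbrace{O\!\left(\tfrac{\mathrm{complexity}(\mathcal{F})}{n}\right)}_{\text{sample error}}
\]

Enlarging the hypothesis class \(\mathcal{F}\) usually shrinks the first term while inflating the second; a class whose approximation error is exactly zero removes the need to grow \(\mathcal{F}\) with the sample size, which is the sense in which the trade-off is avoided.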

Generalizing to Unseen Domains: A Survey on Domain Generalization
Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin
IEEE TKDE; IJCAI 2021 Survey Track.
arXiv

We present the first survey of recent advances in domain generalization, covering the theory of generalization, a taxonomy of domain generalization methods, and potential research directions.

Domain Generalization by Dropping Spurious Information out
Yidong Ouyang, Siteng Huang, Jindong Wang, Donglin Wang

paper

We propose a mutual information maximization module to explicitly drop superfluous information related to the domain labels. We analyze the method from both theoretical and empirical perspectives and demonstrate its connections to, and advantages over, domain-adversarial training and the triplet loss.

Robust Learning with Frequency Domain Regularization
Yidong Ouyang, Weiyu Guo, Adam Dziedzic, Sanjay Krishnan.

paper

We investigate regularization from a Fourier perspective and pinpoint an extremely small but valid spectral range for each layer. Our regularization technique reduces the generalization gap on computer vision benchmarks and improves model robustness, especially against low-frequency attacks.
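A minimal sketch of a spectral-band regularizer in this spirit (our illustrative reconstruction, not the paper's code; the keep_radius band and the energy penalty are assumptions): penalize a convolution kernel's energy outside a small low-frequency range.

import torch

def spectral_band_penalty(weight, keep_radius):
    # weight: conv kernel of shape (out_ch, in_ch, h, w)
    spec = torch.fft.fft2(weight)                           # 2-D FFT over (h, w)
    h, w = weight.shape[-2:]
    fy = torch.fft.fftfreq(h, device=weight.device).abs()[:, None]
    fx = torch.fft.fftfreq(w, device=weight.device).abs()[None, :]
    outside = (fy ** 2 + fx ** 2).sqrt() > keep_radius      # frequencies to suppress
    # penalize spectral energy outside the retained low-frequency band
    return (spec.abs() ** 2)[..., outside].sum()

During training one would add something like loss = task_loss + lam * spectral_band_penalty(conv.weight, keep_radius=0.2), with the radius tuned per layer.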

Learning Efficient Convolutional Networks through Irregular Convolutional Kernels
Weiyu Guo, Jiabin Ma, Yidong Ouyang, Liang Wang, Yongzhen Huang.
Neurocomputing.
paper

We propose RotateConv kernels, an interpolation-based method that transforms traditional square kernels into line segments. Our approach substantially reduces the number of parameters and computations while maintaining acceptable accuracy.
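An illustrative reconstruction of the idea (hypothetical code, not the paper's implementation; the 3x3 grid and bilinear scattering are assumptions): a few weights live on a rotatable line segment and are interpolated onto the square grid that the convolution actually uses.

import math
import torch

def line_kernel(weights, angle, size=3):
    # weights: 1-D tensor of learnable values living on the line segment
    # angle: orientation of the segment in radians
    k = torch.zeros(size, size)
    c = (size - 1) / 2.0                      # grid center
    ts = torch.linspace(-c, c, len(weights))  # positions along the segment
    for wgt, t in zip(weights, ts):
        y = c + float(t) * math.sin(angle)
        x = c + float(t) * math.cos(angle)
        y0, x0 = int(math.floor(y)), int(math.floor(x))
        for dy in (0, 1):                     # bilinear scatter to 4 neighbors
            for dx in (0, 1):
                yy, xx = y0 + dy, x0 + dx
                if 0 <= yy < size and 0 <= xx < size:
                    k[yy, xx] += wgt * (1 - abs(y - yy)) * (1 - abs(x - xx))
    return k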

Machine Learning Workshops & Reading Groups
Some of the workshop talks and reading-group sessions I have given:
Diffusion Model for Solving Schrödinger Bridge Problem, CUHKSZ, 2023.5.28.

Synthetic generation for tabular data, CUHKSZ, 2022.8.26.

All about OOD generalization I--invariant representation, CUHKSZ, 2021.8.24.

Regularization in Spectral Domain, CUFE, 2020.12.

Teaching
Teaching Assistant, STA3020 Statistical Inference, 2021, CUHKSZ.
Teaching Assistant, DDA4010 Bayesian Statistics, 2022, CUHKSZ.
Service
Conference reviewer: NeurIPS 2023, 2024; ICLR 2024, 2025; ICML 2024; AAAI 2025.
Journal reviewer: TNNLS, TKDE.

Website credits go to Jon Barron!