Yidong Ouyang

Hey, I am Yidong Ouyang, a second-year Ph.D. student at CUHK-Shenzhen, where I am fortunate to be advised by Liyan Xie and Guang Cheng. My research interests broadly include generative models, deep learning theory, and trustworthy AI.

During undergrad, I worked with Weiyu Guo on regularization in the spectral domain and on model compression. I visited Westlake University to work on a domain generalization project under the supervision of Professor Donglin Wang, and I collaborated with Jindong Wang during my visit to the Institute of Computing Technology, Chinese Academy of Sciences. I received the National Scholarship in 2018.

Email  /  CV  /  Github  /  Google Scholar

profile photo
Research
MissDiff: Training Diffusion Models on Tabular Data with Missing Values
Yidong Ouyang, Liyan Xie, Chongxuan Li, Guang Cheng
ICML workshop on Structured Probabilistic Inference & Generative Modeling 2023.
arxiv

We first observe that the widely adopted "impute-then-generate" pipeline may lead to a biased learning objective. We then propose masking the regression loss of denoising score matching during training. We prove that the proposed method is consistent in learning the score of the data distribution, and that the proposed training objective serves as an upper bound on the negative likelihood in certain cases.
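The masking idea can be illustrated with a minimal sketch: only the observed entries of each tabular record contribute to the denoising-score-matching regression loss. The function name and the simple mean-over-observed-entries reduction below are my assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def masked_dsm_loss(score_pred, target_score, observed_mask):
    # Hypothetical sketch: per-entry squared regression error of
    # denoising score matching, masked so that missing entries
    # (observed_mask == 0) contribute nothing to the objective.
    sq_err = (score_pred - target_score) ** 2
    masked = sq_err * observed_mask
    # Average over the observed entries only.
    return masked.sum() / np.maximum(observed_mask.sum(), 1.0)
```

In a real training loop the mask would come from the missingness pattern of each record, and the loss would be averaged over noise levels as in standard score-based training.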

Improving Adversarial Robustness Through the Contrastive-Guided Diffusion Process
Yidong Ouyang, Liyan Xie, Guang Cheng
ICML 2023.
arxiv

We first analyze the optimality condition of the synthetic distribution for achieving non-trivial robust accuracy. We show that enhancing the distinguishability among the generated data is critical for improving adversarial robustness. Thus, we propose the Contrastive-Guided Diffusion Process (Contrastive-DP), which adopts the contrastive loss to guide the diffusion model in data generation.
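One way to picture contrastive guidance is as an extra gradient term added to the learned score inside each reverse-diffusion update. The sketch below is a hypothetical Langevin-style step; the function name, the additive form of the guidance, and the step parameterization are my illustrative assumptions, not the paper's exact sampler.

```python
import numpy as np

def contrastive_guided_step(x, score, contrastive_grad, step_size,
                            guidance_weight, noise):
    # Hypothetical sketch of one reverse-diffusion update:
    # the gradient of a contrastive loss (pushing generated samples
    # of different classes apart) is added to the learned score
    # before taking a Langevin-style step.
    guided_score = score + guidance_weight * contrastive_grad
    return x + step_size * guided_score + np.sqrt(2.0 * step_size) * noise
```

With guidance_weight set to zero this reduces to an ordinary score-based sampling step, which is a useful sanity check when experimenting with guidance strength.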

Attention Enables Zero Approximation Error
Zhiying Fang, Yidong Ouyang, Ding-Xuan Zhou, Guang Cheng

arxiv

We theoretically prove that the self-attention model can achieve zero approximation error. Moreover, our proposed model avoids the classical trade-off between approximation error and sample error in the mean squared error analysis.

Generalizing to Unseen Domains: A Survey on Domain Generalization
Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin
IEEE TKDE, IJCAI 2021 survey track.
arxiv

We present the first survey of recent advances in domain generalization, covering the theory of generalization, a taxonomy of domain generalization methods, and potential research topics.

Domain Generalization by Dropping Spurious Information out
Yidong Ouyang, Siteng Huang, Jindong Wang, Donglin Wang

paper

We propose a mutual information maximization module to explicitly drop superfluous information related to the domain labels. We thoroughly analyze our method from both theoretical and empirical perspectives and demonstrate its connections and advantages relative to domain adversarial training and the triplet loss.

Robust Learning with Frequency Domain Regularization
Yidong Ouyang, Weiyu Guo, Adam Dziedzic, Sanjay Krishnan.

paper

We investigated regularization from a Fourier perspective and pinpointed an extremely small but valid spectral range for different layers. Our regularization technique reduces the generalization gap on computer vision benchmarks and improves the robustness of models, especially against low-frequency attacks.
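The idea of confining a layer's kernels to a small spectral range can be sketched as a penalty on Fourier coefficients outside a low-frequency band. The band shape (a square around DC) and the function name below are my assumptions for illustration; the paper's actual per-layer ranges are determined empirically.

```python
import numpy as np

def spectral_band_penalty(kernel, keep_radius):
    # Hypothetical sketch: penalize a 2-D conv kernel's Fourier
    # coefficients outside a small band around the DC component,
    # encouraging its energy to stay within the allowed spectral range.
    spec = np.fft.fftshift(np.fft.fft2(kernel))  # DC moved to the center
    h, w = kernel.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    outside = (np.abs(yy - cy) > keep_radius) | (np.abs(xx - cx) > keep_radius)
    return float(np.sum(np.abs(spec[outside]) ** 2))
```

Added to the training loss with a small weight, such a term acts as a soft frequency-domain constraint on each layer's kernels.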

Learning Efficient Convolutional Networks through Irregular Convolutional Kernels
Weiyu Guo, Jiabin Ma, Yidong Ouyang, Liang Wang, Yongzhen Huang.
Neurocomputing.
paper

We propose RotateConv kernels as an interpolation-based method that transforms traditional square kernels to line segments. Our approach can massively reduce the number of parameters and calculations while maintaining acceptable performance.

Machine Learning Workshops & Reading Groups
Some of the workshop and reading-group talks I've given:
Diffusion Model for Solving Schrödinger Bridge Problem, CUHKSZ, 2023.5.28.

Synthetic generation for tabular data, CUHKSZ, 2022.8.26.

All about OOD generalization I--invariant representation, CUHKSZ, 2021.8.24.

Regularization in Spectral Domain, CUFE, 2020.12.

Teaching
Teaching Assistant, STA3020 Statistical Inference 2021.
Teaching Assistant, DDA4010 Bayesian Statistics 2022.
Service
Conference reviewer: NeurIPS 2023, ICLR 2024, ICML 2024
Journal reviewer: TNNLS, TKDE

Website credits go to Jon Barron!