This is a repository for organizing articles related to domain generalization, OOD, optimization, data-centric learning, prompt learning, robustness, and causality. Most papers are linked to my reading notes. Feel free to visit my personal homepage and contact me for collaboration and discussion.
🔆 About Me
I am a second-year Ph.D. student at the State Key Laboratory of Pattern Recognition, University of Chinese Academy of Sciences, advised by Prof. Tieniu Tan. I have also spent time at Microsoft, advised by Prof. Jingdong Wang, and at Alibaba DAMO Academy, working with Prof. Rong Jin.
🔥 Updated 2023-5-26
- Recent domain generalization and test-time adaptation papers from ICLR 2023, CVPR 2023, and arXiv have been added.
- Our paper Domain-Specific Risk Minimization for Out-of-Distribution Generalization has been accepted by SIGKDD 2023. [Code] [Reading Notes]
- Our paper AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation has been accepted by ICML 2023. [Code] [Reading Notes]
- Our paper Free Lunch for Domain Adversarial Training: Environment Label Smoothing has been accepted by ICLR 2023. [Code] [Reading Notes]
- Our paper Exploring Transformer Backbones for Heterogeneous Treatment Effect Estimation has been accepted by the NeurIPS ML Safety Workshop. [Code]
- Our paper Towards Principled Disentanglement for Domain Generalization has been selected for a CVPR oral presentation. 😊 [Reading Notes] [Code] [paper]
- Papers about test-time adaptation methods have been updated.
Table of Contents (ongoing)
- Generalization/OOD
- Test-time adaptation
- Robustness/Adaptation/Fairness/OOD Detection
- Causality
- Data-Centric/Prompt
- Optimization/GNN/Energy/Generative/Others
Generalization/OOD
2023
- ICLR Free Lunch for Domain Adversarial Training: Environment Label Smoothing (environment label smoothing: a one-line change that improves the stability and generalization of domain-adversarial training; see the sketch after this list). [Code] [Reading Notes]
- ICLR Out-of-Distribution Representation Learning for Time Series Classification (tackles time-series classification from an OOD perspective)
- ICLR Contrastive Learning for Unsupervised Domain Adaptation of Time Series (uses contrastive learning to align class-wise distributions and learn good representations for time-series DA)
- ICLR Pareto Invariant Risk Minimization (understands and mitigates the optimization difficulty of OOD/DG from a multi-objective optimization perspective)
- ICLR Fairness and Accuracy under Domain Generalization (considers not only the accuracy but also the fairness of generalization)
- Arxiv Adversarial Style Augmentation for Domain Generalization (adversarially learns image perturbations to improve generalization)
- Arxiv CLIPood: Generalizing CLIP to Out-of-Distributions (adapts a pre-trained CLIP model to handle both domain shift and open classes)
- ICML AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation (uses a KNN classifier for test-time adaptation and theoretically analyzes why TTA works) [Code] [Reading Notes]
- SIGKDD Domain-Specific Risk Minimization for Out-of-Distribution Generalization (learns a separate classifier for each domain and combines them dynamically at test time according to prediction entropy) [Code] [Reading Notes]
- CVPR Federated Domain Generalization with Generalization Adjustment (proposes a new variance-reduction regularizer for federated domain generalization (FedDG) that encourages fairness)
- CVPR Distribution Shift Inversion for Out-of-Distribution Prediction (a TTA method that uses a diffusion model trained only on the source distribution to translate OOD test samples back toward the training distribution before prediction)
- CVPR SFP: Spurious Feature-targeted Pruning for Out-of-Distribution Generalization (performs modular risk minimization (MRM) by pruning network branches that strongly rely on identified spurious features)
- CVPR Improved Test-Time Adaptation for Domain Generalization (uses a loss function with learnable parameters instead of a pre-defined one)
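The environment-label-smoothing entry above advertises a one-line change; a minimal sketch of that idea, assuming a DANN-style setup where a domain discriminator is trained with cross-entropy on environment (domain) labels. The function name and the smoothing factor `eps` are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def smoothed_domain_loss(domain_logits, domain_labels, num_domains, eps=0.1):
    """Domain-discriminator loss with smoothed environment labels.
    eps = 0 recovers ordinary domain-adversarial training."""
    one_hot = F.one_hot(domain_labels, num_domains).float()
    smoothed = one_hot * (1.0 - eps) + eps / num_domains   # environment label smoothing
    return -(smoothed * F.log_softmax(domain_logits, dim=-1)).sum(dim=-1).mean()
```

In a DANN-style training loop, this would simply replace `F.cross_entropy(domain_logits, domain_labels)`.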
2022
- CVPR Oral Towards Principled Disentanglement for Domain Generalization (applies disentanglement to DG with new theory and a new method)
- Arxiv How robust are pre-trained models to distribution shift? (self-supervised models are more robust than supervised and unsupervised ones; retraining the classifier on a small amount of OOD data brings large gains)
- ICML A Closer Look at Smoothness in Domain Adversarial Training (a smoother classification loss improves the generalization of domain-adversarial training)
- CVPR Bayesian Invariant Risk Minimization (alleviates the problem that IRM degenerates to ERM when the model overfits)
- CVPR Towards Unsupervised Domain Generalization (studies how pre-training affects DG and designs an unsupervised pre-training algorithm for DG datasets)
- CVPR PCL: Proxy-based Contrastive Learning for Domain Generalization (directly applying supervised contrastive learning to DG does not work well; this paper proposes a workable alternative)
- CVPR Style Neophile: Constantly Seeking Novel Styles for Domain Generalization (proposes a new method that synthesizes data with more diverse styles)
- Arxiv WOODS: Benchmarks for Out-of-Distribution Generalization in Time Series Tasks (a suite of benchmarks for OOD generalization on time-series data)
- Arxiv A Broad Study of Pre-training for Domain Generalization and Adaptation (an in-depth study of the role of pre-training in DA and DG; simply using the best current backbone is enough to reach SOTA)
- Arxiv Domain Generalization by Mutual-Information Regularization with Pre-trained Models (uses features from pre-trained models to guide fine-tuning and improve generalization)
- ICLR Oral A Fine-Grained Analysis on Distribution Shift (how to precisely define distribution shift and systematically measure model robustness)
- ICLR Oral Fine-Tuning Distorts Pretrained Features and Underperforms Out-of-Distribution (fine-tuning and linear probing complement each other)
- ICLR Spotlight Towards a Unified View of Parameter-Efficient Transfer Learning (a unified framework for parameter-efficient transfer learning)
- ICLR Spotlight How Do Vision Transformers Work? (desirable properties of Vision Transformers (ViTs))
- ICLR Spotlight On Predicting Generalization using GANs (predicts test error with a GAN trained on source-domain data)
- ICLR Poster Uncertainty Modeling for Out-of-Distribution Generalization (models feature uncertainty for domain generalization; a new data-augmentation method)
- ICLR Poster Gradient Matching for Domain Generalization (encourages larger inner products between gradients from different domains)
- ICML DNA: Domain Generalization with Diversified Neural Averaging (classifier ensembling; discusses the connection between ensembles and DG both theoretically and empirically)
- ICML Model Agnostic Sample Reweighting for Out-of-Distribution Learning (a bi-level optimization approach to finding an effective training-sample reweighting)
- ICML Sparse Invariant Risk Minimization (uses a global sparsity constraint to prevent spurious features from being used during training)
- Arxiv Grounding Visual Representations with Texts for Domain Generalization (cross-modal textual supervision improves generalization)
- Arxiv On the Strong Correlation Between Model Invariance and Generalization (prediction invariance is strongly correlated with generalization, where invariance means invariance of predictions under different perturbations of x)
- NeurIPS Probable Domain Generalization via Quantile Risk Minimization (formulates DG as probable generalization, targeting neither worst-case nor average performance)
- NeurIPS Improving Multi-Task Generalization via Regularizing Spurious Correlation (removes spurious dependence on task labels to improve multi-task learning)
- NeurIPS Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction (theoretically explains that normalization layers reduce the sharpness of the loss surface, making GD easier to optimize)
- NeurIPS Assaying Out-Of-Distribution Generalization in Transfer Learning (a comprehensive study offering new insight into how model robustness should be defined)
- NeurIPS On the Strong Correlation Between Model Invariance and Generalization (a quantitative analysis of the relationship between generalization and invariance)
- NeurIPS Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization (OOD performance fluctuates heavily during training)
- NeurIPS Diverse Weight Averaging for Out-of-Distribution Generalization (averages weights obtained along training trajectories)
- NeurIPS Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors (adversarial training with domain-specific structured low-rank perturbations to improve OOD performance)
- NeurIPS Outstanding On-Demand Sampling: Learning Optimally from Multiple Distributions (a multi-distribution learning algorithm with theoretical guarantees that achieves the lowest known sample complexity)
- Arxiv On Feature Learning in the Presence of Spurious Correlations (ERM already learns good features even in the presence of spurious correlations)
- Arxiv Simulating Bandit Learning from User Feedback for Extractive Question Answering (introducing a small amount of human feedback improves generalization)
- ICLR Uncertainty Modeling for Out-of-Distribution Generalization (augments data by perturbing feature means/variances, with the uncertainty of those statistics estimated within the batch; see the sketch after this list)
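The two "Uncertainty Modeling" entries above perturb channel-wise feature statistics with noise whose scale reflects their batch-level uncertainty. A rough sketch of that style of augmentation is below; the module name, insertion point, and probability `p` are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureStatisticsPerturbation(nn.Module):
    """Sketch: resample channel-wise mean/std of CNN features with Gaussian noise
    whose scale is the batch-level variance of those statistics (training only)."""

    def __init__(self, p=0.5, eps=1e-6):
        super().__init__()
        self.p, self.eps = p, eps

    def forward(self, x):                                  # x: (B, C, H, W)
        if not self.training or torch.rand(1).item() > self.p or x.size(0) < 2:
            return x
        mu = x.mean(dim=(2, 3), keepdim=True)
        std = (x.var(dim=(2, 3), keepdim=True) + self.eps).sqrt()
        sig_mu = (mu.var(dim=0, keepdim=True) + self.eps).sqrt()    # uncertainty of the mean
        sig_std = (std.var(dim=0, keepdim=True) + self.eps).sqrt()  # uncertainty of the std
        new_mu = mu + torch.randn_like(mu) * sig_mu
        new_std = std + torch.randn_like(std) * sig_std
        return (x - mu) / std * new_std + new_mu
```

Such a module would typically be inserted after early convolutional blocks during training and disabled at test time.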
2021
- ICML Improved OOD Generalization via Adversarial Training and Pre-training (theoretically shows that a pre-trained model that is more robust to input perturbations provides a better initialization for generalizing to downstream OOD data)
- ICCV CrossNorm and SelfNorm for Generalization under Distribution Shifts (a conceptually simple regularization technique for DG)
- ICCV A Style and Semantic Memory Mechanism for Domain Generalization (exploits intra-domain style invariance to improve generalization)
- Arxiv Towards a Theoretical Framework of Out-of-Distribution Generalization (new theory)
- Arxiv (Yoshua Bengio) Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization (when OOD meets the information bottleneck)
- Arxiv Generalization of Reinforcement Learning with Policy-Aware Adversarial Data Augmentation
- Arxiv Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation (uses knowledge distillation as a regularizer)
- Arxiv Delving Deep into the Generalization of Vision Transformers under Distribution Shifts (a study of Vision Transformer generalization under distribution shifts)
- Arxiv Training Data Subset Selection for Regression with Controlled Generalization Error (selects a subset of training instances while keeping generalization comparable)
- Arxiv (MIT) Measuring Generalization with Optimal Transport (a theoretical study of network complexity and generalization)
- Arxiv (SJTU) OoD-Bench: Benchmarking and Understanding Out-of-Distribution Generalization Datasets and Algorithms (shows that OOD evaluation protocols are still immature and proposes an evaluation scheme)
- Arxiv (Tsinghua) Domain-Irrelevant Representation Learning for Unsupervised Domain Generalization (a new task: unsupervised DG, where source-domain labels are unavailable)
- ICML Oral: Can Subnetwork Structure be the Key to Out-of-Distribution Generalization? (uses the lottery ticket hypothesis to find subnetworks that generalize better)
- ICML Oral: Domain Generalization using Causal Matching (contrastive feature alignment plus feature-invariance constraints)
- ICML Oral: Just Train Twice: Improving Group Robustness without Training Group Information
- ICML Spotlight: Environment Inference for Invariant Learning (how can domain-invariant features be learned without domain labels?)
- ICLR Poster: Understanding the failure modes of out-of-distribution generalization (two failure modes of OOD generalization)
- ICLR Poster: An Empirical Study of Invariant Risk Minimization (an empirical study of IRM, e.g., how the diversity of observed domains affects IRM performance)
- ICLR Poster In Search of Lost Domain Generalization (a method without model selection is not a good method; how should models be selected using a validation set?)
- ICLR Poster Modeling the Second Player in Distributionally Robust Optimization (models the DRO uncertainty set with adversarial learning)
- ICLR Poster Learning perturbation sets for robust machine learning (learns perturbation sets with generative models)
- ICLR Spotlight (Yoshua Bengio) Systematic generalisation with group invariant predictions (splits each class into different domains (environment inference), then constrains features across those domains to be consistent to avoid spurious dependence)
- CVPR Oral: Reducing Domain Gap by Reducing Style Bias (treats channel-wise means as image style and reduces the CNN's reliance on style)
- AISTATS Linear Regression Games: Convergence Guarantees to Approximate Out-of-Distribution Solutions
- AISTATS Oral Does Invariant Risk Minimization Capture Invariance (IRM captures truly invariant features only under specific conditions)
- NeurIPS Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests (uses causal tools to design a practical algorithm that connects counterfactual reasoning with domain generalization (OOD), enabling effective "stress tests", e.g., changing the gender information in a sentence and checking whether the sentiment prediction changes)
- NeurIPS Adaptive Risk Minimization: Learning to Adapt to Domain Shift (uses unlabeled data to better handle the distribution shift caused by new domains)
- NeurIPS An Empirical Investigation of Domain Generalization with Empirical Risk Minimizers (measures based on domain adaptation theory do not accurately capture OOD generalization behavior)
- NeurIPS Spotlight Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization (updates only the linear classification head at test time; see the sketch after this list)
- NeurIPS Why Do Better Loss Functions Lead to Less Transferable Features? (studies how the choice of training objective affects the transferability of features learned by CNNs trained on ImageNet)
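The test-time classifier adjustment entry above (also listed in the Test-time Adaptation section) replaces the trained linear head with class prototypes that are refined by confident, pseudo-labeled test features. A simplified sketch, assuming the backbone stays frozen; the entropy threshold and unbounded support sets are simplifications, not the paper's exact procedure.

```python
import torch

class PrototypeAdjustedHead:
    """Sketch of test-time adjustment of the linear head via class prototypes."""

    def __init__(self, head_weight, ent_threshold=1.0):
        # head_weight: (num_classes, feat_dim) weights of the trained linear head
        self.supports = [[w.detach().clone()] for w in head_weight]
        self.ent_threshold = ent_threshold

    def predict(self, feats):                              # feats: (B, feat_dim)
        protos = torch.stack([torch.stack(s).mean(dim=0) for s in self.supports])
        probs = (feats @ protos.T).softmax(dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        preds = probs.argmax(dim=-1)
        # confident test features become new support vectors for their predicted class
        for f, y, h in zip(feats, preds, entropy):
            if h.item() < self.ent_threshold:
                self.supports[int(y)].append(f.detach())
        return preds
```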
2020
- Arxiv I-SPEC: An End-to-End Framework for Learning Transportable, Shift-Stable Models (casts domain adaptation as causal-graph inference)
- Arxiv (Stanford) Distributionally Robust Losses for Latent Covariate Mixtures.
- NeurIPS Energy-based Out-of-distribution Detection (detects OOD samples with an energy score)
- NeurIPS Fairness without demographics through adversarially reweighted learning (adversarially reweights hard examples so that the reweighted samples increase the classifier's loss)
- NeurIPS Self-training Avoids Using Spurious Features Under Domain Shift (self-training on unlabeled target-domain data helps avoid using spurious features)
- NeurIPS What shapes feature representations? Exploring datasets, architectures, and training (simplicity bias: neural networks prefer to fit "easy" features)
- Arxiv Invariant Risk Minimization (the seminal work: moving beyond empirical risk minimization to invariant risk minimization; see the sketch after this list)
- ICLR Poster The Risks of Invariant Risk Minimization (a flaw of invariant risk minimization: IRM fails when there are too few domains)
- ICLR Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization (GroupDRO: DRO with strong regularization)
- ICML An investigation of why overparameterization exacerbates spurious correlations (over-parameterization is a major reason why networks exploit spurious correlations)
- ICML UDA workshop Learning Robust Representations with Score Invariant Learning (non-normalized statistical models: an energy-based approach to OOD)
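For the "seminal work" above, the practical IRMv1 objective penalizes the gradient of each environment's risk with respect to a fixed dummy scale on the classifier output, on top of the usual ERM term. A minimal sketch, assuming one mini-batch per environment; `penalty_weight` is an illustrative value.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    """IRMv1 penalty for one environment: squared gradient of the risk
    w.r.t. a dummy scale multiplying the classifier output."""
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

def irm_objective(model, env_batches, penalty_weight=100.0):
    """Average ERM risk over environments plus the averaged IRMv1 penalty."""
    risks, penalties = [], []
    for x, y in env_batches:                   # one (x, y) mini-batch per environment
        logits = model(x)
        risks.append(F.cross_entropy(logits, y))
        penalties.append(irm_penalty(logits, y))
    return torch.stack(risks).mean() + penalty_weight * torch.stack(penalties).mean()
```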
OLD but Important
- ICML 2018 Oral (Stanford) Fairness Without Demographics in Repeated Loss Minimization.
- ICCV 2017 CCSA--Unified Deep Supervised Domain Adaptation and Generalization (a contrastive loss aligning the source- and target-domain sample spaces)
- JSTOR (Peters) Causal inference by using invariant prediction: identification and confidence intervals.
- ICML 2015 Towards a Learning Theory of Cause-Effect Inference (uses kernel mean embeddings and classifiers for causal inference)
- IJCAI 2020 (CMU) Causal Discovery from Heterogeneous/Nonstationary Data
Survey
Test-time Adaptation
- ICML AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation (uses a KNN classifier for test-time adaptation and theoretically analyzes why TTA works) [Code] [Reading Notes]
- NeurIPS 2021 [Spotlight] Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization (updates only the linear classification head at test time)
- CVPR 2021 Adaptive Methods for Real-World Domain Generalization (feeds source-domain embeddings at test time, i.e., exploits domain information during testing)
- ICLR 2021 [Spotlight] Tent: Fully Test-Time Adaptation by Entropy Minimization (minimizes the entropy of model predictions at test time; see the sketch after this list)
- ICCV 2021 Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision (optimizes a self-supervised loss on test samples)
- NeurIPS 2022 Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering (discovers clusters in both the source and target domains and matches target clusters to source clusters to improve generalization)
- NeurIPS 2022 Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models (updates the prompt at test time by minimizing prediction entropy)
- NeurIPS 2022 MEMO: Test Time Robustness via Adaptation and Augmentation (test-time augmentation plus entropy minimization)
- CVPR 2022 Continual Test-Time Domain Adaptation (adapts from one source domain to a sequence of continually changing target domains)
- Arxiv A Simple Test-Time Method for Out-of-Distribution Detection (test-time adaptation for OOD detection)
- SIGKDD Domain-Specific Risk Minimization for Out-of-Distribution Generalization (learns a separate classifier for each domain and combines them dynamically at test time according to prediction entropy) [Code] [Reading Notes]
- CVPR Improved Test-Time Adaptation for Domain Generalization (uses a loss function with learnable parameters instead of a pre-defined one)
- CVPR Feature Alignment and Uniformity for Test Time Adaptation (treats TTA as a feature-revision problem caused by the domain gap between source and target: enforces alignment and uniformity of representations between the current batch and all previous batches)
- CVPR TIPI: Test Time Adaptation with Transformation Invariance (proposes a new transformation-invariance loss to overcome the small-batch problem)
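For the Tent entry in the list above, a compact sketch of the recipe: freeze everything except the batch-norm affine parameters, let BN use the statistics of the current test batch, and take a gradient step on the entropy of the predictions. The optimizer choice and learning rate are assumptions.

```python
import torch
import torch.nn as nn

def configure_for_tent(model):
    """Freeze all weights except BatchNorm affine parameters; BN uses test-batch stats."""
    model.train()                                         # BN computes batch statistics
    for p in model.parameters():
        p.requires_grad_(False)
    adapt_params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.track_running_stats = False
            m.running_mean, m.running_var = None, None
            if m.weight is not None:
                m.weight.requires_grad_(True)
                m.bias.requires_grad_(True)
                adapt_params += [m.weight, m.bias]
    return adapt_params

def tent_step(model, x, optimizer):
    """One adaptation step: minimize the mean entropy of the batch predictions."""
    probs = model(x).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return probs.detach()
```

A typical loop would call `optimizer = torch.optim.SGD(configure_for_tent(model), lr=1e-3)` once, then `tent_step(model, batch, optimizer)` for each incoming test batch.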
Robustness/Adaptation/Fairness/OOD Detection
2022
- Arxiv Are Vision Transformers Robust to Spurious Correlations? (a study of ViT robustness: larger models and more pre-training data markedly improve robustness to spurious correlations, while ViTs pre-trained on less data are worse than CNNs)
- CVPR Exploring Domain-Invariant Parameters for Source Free Domain Adaptation (instead of the domain-invariant features pursued by prior work, this work searches for domain-invariant parameters)
- CVPR CENet: Consolidation-and-Exploration Network for Continuous Domain Adaptation (claims to introduce the concept of continuous DA, although it had already been proposed at ICML 2018)
- CVPR Slimmable Domain Adaptation (adaptation should not only target data; this paper considers adaptation to downstream devices)
- NeurIPS Outstanding Is Out-of-Distribution Detection Learnable? (PAC learning theory for OOD detection under various scenarios)
- ICML Out-of-Distribution Detection with Deep Nearest Neighbors (OOD detection with KNN; see the sketch after this list)
- Arxiv A Simple Test-Time Method for Out-of-Distribution Detection (test-time adaptation for OOD detection)
- Arxiv RobArch: Designing Robust Architectures against Adversarial Attacks (extensive experiments on how to design architectures that are more robust to adversarial attacks)
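For the deep-nearest-neighbor OOD detection entry above, a small sketch: store (normalized) features of the training set and score a test point by the distance to its k-th nearest training feature, with larger distances indicating OOD. The value of k and the normalization are illustrative choices.

```python
import torch
import torch.nn.functional as F

def knn_ood_score(test_feats, train_feats, k=50):
    """Distance to the k-th nearest normalized training feature (higher = more OOD-like)."""
    test_feats = F.normalize(test_feats, dim=-1)
    train_feats = F.normalize(train_feats, dim=-1)
    dists = torch.cdist(test_feats, train_feats)          # (N_test, N_train) pairwise distances
    return dists.topk(k, dim=-1, largest=False).values[:, -1]
```

An input would then be flagged as OOD when its score exceeds a threshold chosen on in-distribution validation data (e.g., the 95th percentile of in-distribution scores).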
2021
- ICLR Poster Learning perturbation sets for robust machine learning (learns perturbation sets with generative models)
- ICCV Generalized Source-free Domain Adaptation (how to adapt with only a source-pretrained model and no source data while preserving source-domain performance)
- ICCV Adaptive Adversarial Network for Source-free Domain Adaptation (can we find a new target-specific classifier during optimization and make it fit the target features?)
- ICCV Gradient Distribution Alignment Certificates Better Adversarial Domain Adaptation (reduces the cross-domain discrepancy of feature-gradient distributions via adversarial learning between the feature extractor and a discriminator)
- FAccT Algorithmic recourse: from counterfactual explanations to interventions (proposes the concept of causal recourse)
- ICML WorkShop On the Fairness of Causal Algorithmic Recourse (builds on group recourse and considers the interactions among multiple variables, i.e., their causal relations)
- NeurIPS Domain Adaptation with Invariant Representation Learning: What Transformations to Learn? (why does DA need two encoders?)
- NeurIPS Gradual Domain Adaptation without Indexed Intermediate Domains (gradual domain adaptation (GDA) without domain index labels)
- NeurIPS Implicit Semantic Response Alignment for Partial Domain Adaptation (how partial DA can exploit the extra classes)
- NeurIPS The balancing principle for parameter choice in distance-regularized domain adaptation (how to choose the trade-off parameter between the classification loss and the regularizer)
- AAAI Provable Guarantees for Understanding Out-of-distribution Detection (derives the optimal density estimator under the assumption that the data follow a Gaussian mixture)
Before 2021
- Available at Optimization Online Kullback-Leibler Divergence Constrained Distributionally Robust Optimization (the pioneering work: constructs the DRO uncertainty set with the KL divergence)
- ICLR 2018 Oral Certifying Some Distributional Robustness with Principled Adversarial Training (constructs the uncertainty set with a Wasserstein ball, used for adversarial robustness)
- ICML 2018 Oral Does Distributionally Robust Supervised Learning Give Robust Classifiers? (is DRO necessarily better than ERM? Not necessarily; extra information must be introduced)
- NeurIPS 2019 Distributionally Robust Optimization and Generalization in Kernel Methods (models the uncertainty set with MMD (maximum mean discrepancy), yielding MMD DRO)
- EMNLP 2019 Distributionally Robust Language Modeling (a classic application of coarse-grained mixture models in NLP)
- Arxiv 2019 Equalizing recourse across groups (basic recourse is measured per sample; this paper gives a group-level recourse metric)
- ICML 2020 Oral Continuously indexed domain adaptation (continuously indexed domains)
Causality
Individual Treatment Estimation
- ICML 2017 Estimating individual treatment effect: generalization bounds and algorithms (first proposes the concept of ITE, bounds it with domain adaptation theory, and accordingly designs an effective algorithm)
- NeurIPS 2019 Adapting Neural Networks for the Estimation of Treatment Effects (the core idea: it is unnecessary to use all covariates X for adjustment)
- PNAS 2019 Meta-learners for Estimating Heterogeneous Treatment Effects using Machine Learning (proposes the X-learner framework, which is particularly effective when treatment groups are highly imbalanced)
- AAAI 2020 Learning Counterfactual Representations for Estimating Individual Dose-Response Curves (proposes new metrics, datasets, and training strategies that allow estimating outcomes for an arbitrary number of treatments)
- ICLR 2021 Oral: VCNet and Functional Targeted Regularization For Learning Causal Effects of Continuous Treatments (based on a varying-coefficient model, each treatment's branch becomes a function of the treatment instead of a separately designed branch, achieving true continuity)
- Arxiv 2021 Neural Counterfactual Representation Learning for Combinations of Treatments (considers the more complex case where multiple treatments act jointly)
- NeurIPS 2021 Spotlight On Inductive Biases for Heterogeneous Treatment Effect Estimation (proposes FlexTENet, which estimates the conditional treatment effect τ directly rather than estimating μ1 and μ2 separately)
- NeurIPS 2021 Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms (analyzes recent algorithmic paradigms for individual treatment effect estimation)
- Arxiv 2021 Cycle-Balanced Representation Learning For Counterfactual Inference
Data-Centric/Prompt
Data Centric
- AISTATS 2019 Towards Optimal Transport with Global Invariances (how do we align two datasets?)
- NeurIPS 2020 Geometric Dataset Distances via Optimal Transport (how do we define a distance between two datasets?)
- ICML 2021 Dataset Dynamics via Gradient Flows in Probability Space (how do we optimize a dataset so that two datasets become as similar as possible?)
Prompts
- ACL 2021 WARP: Word-level Adversarial ReProgramming (the pioneering work on continuous prompts)
- Arxiv 2021 (Stanford) Prefix-Tuning: Optimizing Continuous Prompts for Generation (applies continuous prompts to various NLG tasks)
- Arxiv 2021 (Google) The Power of Scale for Parameter-Efficient Prompt Tuning (the simplest form of prefix tuning: a prefix is added only to the input)
- Arxiv 2021 (DeepMind) Multimodal Few-Shot Learning with Frozen Language Models (uses an image encoder to turn images into a dynamic prefix that is fed into the LM together with the text)
Optimization/GNN/Energy/Generative/Others
Optimization
- ICML 2021 An End-to-End Framework for Molecular Conformation Generation via Bilevel Programming
- NeurIPS 2021 Deep Structural Causal Models for Tractable Counterfactual Inference
- ICML 2018 Bilevel Programming for Hyperparameter Optimization and Meta-Learning (models hyperparameter search and meta-learning as bi-level programming)
- NeurIPS 2020 Energy-based Out-of-distribution Detection (see the energy-score sketch below)
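The energy-based OOD detection entry above (also listed under 2020) scores inputs with the free energy of the logits; a one-function sketch, with the temperature `T` as an illustrative parameter.

```python
import torch

def energy_score(logits, T=1.0):
    """Energy E(x) = -T * logsumexp(logits / T); higher energy = more OOD-like."""
    return -T * torch.logsumexp(logits / T, dim=-1)
```

Inputs whose score exceeds a threshold tuned on in-distribution data would then be flagged as OOD.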
LTH (Lottery Ticket Hypothesis)
- NeurIPS 2020: The Lottery Ticket Hypothesis for Pre-trained BERT Networks (applies the lottery ticket hypothesis to BERT fine-tuning)
- ICML 2021 Oral: Can Subnetwork Structure be the Key to Out-of-Distribution Generalization? (the lottery ticket hypothesis for OOD generalization)
- CVPR 2021: The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models (the lottery ticket hypothesis for pre-training vision models)
Generative Model (mainly diffusion model)
- Estimation of Non-Normalized Statistical Models by Score Matching (uses integration by parts (score matching) to estimate non-normalized distributions)
- UAI 2019 Sliced Score Matching: A Scalable Approach to Density and Score Estimation (projects the high-dimensional gradient field onto one-dimensional scalar fields along random directions before performing score matching)
- NeurIPS 2019 Oral Generative Modeling by Estimating Gradients of the Data Distribution (adds noise to strengthen Langevin MCMC's ability to model low-density regions)
- NeurIPS 2020 Improved Techniques for Training Score-Based Generative Models (analyzes and fixes failure cases of score-based generative models; generation quality starts to rival GANs)
- NeurIPS 2020 Denoising Diffusion Probabilistic Models (yet another generative paradigm besides VAEs, GANs, and flows; see the sketch after this list)
- ICLR 2021 Outstanding Paper Award Score-Based Generative Modeling through Stochastic Differential Equations
- Arxiv 2021 Diffusion Models Beat GANs on Image Synthesis (diffusion models surpass GANs in image synthesis)
- Arxiv 2021 Variational Diffusion Models
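For the DDPM entry above, a minimal sketch of the simplified training objective: draw a random timestep and Gaussian noise, form the noisy sample via the closed-form forward process, and regress the noise. The linear beta schedule and the `eps_model(x_t, t)` interface are assumptions, not any particular implementation.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                 # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)        # cumulative product of (1 - beta_t)

def ddpm_loss(eps_model, x0):
    """Simplified DDPM objective: predict the noise added at a random timestep."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    a_bar = alphas_bar.to(x0.device)[t].view(b, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps   # forward process q(x_t | x_0)
    return F.mse_loss(eps_model(x_t, t), eps)
```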
Implicit Neural Representation (INR)
- NeurIPS 2020 (Oral): Implicit Neural Representations with Periodic Activation Functions
- SIGGRAPH Asia 2020: X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation
- CVPR 2021 (Oral): Learning Continuous Image Representation with Local Implicit Image Function
- CVPR 2021 Adversarial Generation of Continuous Images
- NeurIPS 2021 Learning Signal-Agnostic Manifolds of Neural Fields
- Arxiv 2021 Generative Models as Distributions of Functions