Graph Pre-training

Learning to Pre-train Graph Neural Networks (L2P-GNN)

Applies MAML-style meta-learning to GNN pre-training;
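A minimal first-order-MAML sketch of this inner/outer loop, assuming a plain PyTorch MLP stands in for the GNN and random tensors stand in for each task's support/query graphs; all names and hyperparameters are illustrative, not L2P-GNN's actual code.

```python
import torch
import torch.nn as nn

# Placeholder encoder; a real GNN (e.g., GIN) would take a graph, not a tensor.
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
meta_opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
inner_lr = 0.01

def inner_adapt(params, x, y):
    """One gradient step on the support set (first-order MAML:
    grads are detached, so only the identity path flows back)."""
    pred = torch.func.functional_call(encoder, params, (x,))
    grads = torch.autograd.grad(loss_fn(pred, y), list(params.values()))
    return {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

for step in range(100):
    meta_opt.zero_grad()
    for _ in range(4):  # a meta-batch of pre-training tasks
        xs, ys = torch.randn(8, 16), torch.randn(8, 8)   # support (toy data)
        xq, yq = torch.randn(8, 16), torch.randn(8, 8)   # query
        fast = inner_adapt(dict(encoder.named_parameters()), xs, ys)
        # Query loss under the adapted weights drives the meta-update.
        loss = loss_fn(torch.func.functional_call(encoder, fast, (xq,)), yq)
        loss.backward()
    meta_opt.step()
```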

Strategies for Pre-training Graph Neural Networks

Pre-trains first at the node level, then at the graph level;
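A toy sketch of that two-stage recipe, assuming a linear layer stands in for the GNN encoder: stage 1 is node-level self-supervision (attribute masking), stage 2 is graph-level supervised pre-training on the same encoder. All names, sizes, and losses are illustrative.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a linear layer plays the GNN encoder; a real model
# would message-pass over an edge_index.
gnn = nn.Linear(16, 32)
node_head = nn.Linear(32, 16)    # reconstructs masked node attributes
graph_head = nn.Linear(32, 4)    # graph-level (multi-)task head
x = torch.randn(100, 16)         # node features of one toy graph

# Stage 1: node-level self-supervision via attribute masking.
opt1 = torch.optim.Adam([*gnn.parameters(), *node_head.parameters()], lr=1e-3)
mask = torch.rand(100) < 0.15
x_in = x.clone()
x_in[mask] = 0.0                 # zero out the masked nodes' attributes
loss1 = nn.functional.mse_loss(node_head(gnn(x_in))[mask], x[mask])
opt1.zero_grad(); loss1.backward(); opt1.step()

# Stage 2: graph-level supervised pre-training on the same encoder.
opt2 = torch.optim.Adam([*gnn.parameters(), *graph_head.parameters()], lr=1e-3)
h_graph = gnn(x).mean(dim=0, keepdim=True)        # mean-pool readout
y = torch.tensor([[1., 0., 1., 0.]])              # toy multi-task labels
loss2 = nn.functional.binary_cross_entropy_with_logits(graph_head(h_graph), y)
opt2.zero_grad(); loss2.backward(); opt2.step()
```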

KEG’s tutorial: Self-supervised Learning and Pre-training on Graphs (GNNs)

A tutorial on graph SSL and pre-training;

InfoAdv (袁一歌)

Improves the generalization of graph contrastive learning to downstream tasks from a mutual-information perspective;

GRACE

GPT-GNN: Pre-train on Large-Scale Graphs to Ease Downstream Node-level Tasks [2020]

GRACE, one of InfoAdv's references, uses the InfoMax principle to guide the contrastive objective (minimal sketch below);

GPT-GNN is a generative graph self-supervision strategy.
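As referenced above for GRACE, a minimal sketch of the InfoMax-style (NT-Xent/InfoNCE) objective it optimizes between two augmented views: z1 and z2 are embeddings of the same nodes under two corruptions, node i's other-view counterpart is the positive, and all remaining nodes in both views are negatives. Shapes and the temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def grace_infonce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    """One direction of the symmetric InfoNCE loss over two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    intra = z1 @ z1.t() / tau          # view-1 vs view-1 similarities
    inter = z1 @ z2.t() / tau          # view-1 vs view-2 similarities
    pos = inter.diag()                 # node i paired with itself in view 2
    # Denominator: all cross-view pairs plus same-view pairs except self.
    neg_mask = ~torch.eye(z1.size(0), dtype=torch.bool)
    denom = torch.cat([inter, intra[neg_mask].view(z1.size(0), -1)], dim=1)
    return -(pos - torch.logsumexp(denom, dim=1)).mean()

# Toy usage: in practice z1, z2 come from a GNN on two corrupted graphs.
z1, z2 = torch.randn(64, 128), torch.randn(64, 128)
loss = 0.5 * (grace_infonce(z1, z2) + grace_infonce(z2, z1))
```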

When to Pre-train?

Data-centric: analyzes which pre-training data are beneficial for a given downstream task;

PRODIGY: Enabling In-context Learning Over Graphs

Mirrors the k-shot prompt format of LLMs and adapts it to graphs, realizing pre-train + prompt on graphs;

GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks

Unifies pre-training and downstream tasks as subgraph-similarity computation, and uses a learnable prompt in the readout for downstream tasks;
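A minimal sketch of that idea, assuming node embeddings come from a frozen pre-trained GNN: a learnable prompt vector reweights the readout, and classification is cast as similarity between the prompted subgraph embedding and per-class prototypes. Function and variable names are illustrative, not GraphPrompt's API.

```python
import torch
import torch.nn.functional as F

d = 32
# The only trainable object downstream: a prompt that reweights
# embedding dimensions inside the readout.
prompt = torch.nn.Parameter(torch.ones(d))

def prompt_readout(node_emb: torch.Tensor) -> torch.Tensor:
    """Prompt-weighted sum readout over a (sub)graph's node embeddings."""
    return (node_emb * prompt).sum(dim=0)

def classify(query_nodes, prototypes, tau=1.0):
    """Downstream task as subgraph similarity against class prototypes."""
    q = F.normalize(prompt_readout(query_nodes), dim=0)
    sims = torch.stack([F.cosine_similarity(q, p, dim=0) for p in prototypes])
    return F.softmax(sims / tau, dim=0)

# Toy usage: prototypes would be mean readouts of labeled subgraphs per class.
protos = [F.normalize(torch.randn(d), dim=0) for _ in range(3)]
probs = classify(torch.randn(10, d), protos)   # embeddings from a frozen GNN
```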

ParetoGNN: Multi-Task SSL for GNNs

All in One: Multi-Task Prompting for Graph Neural Networks

KDD 2023 Best Paper

Domain Generalization

Domain Generalization Survey

Distribution Free Domain Generalization

ICML 2023; kernel-based; optimizes the metric's weights to prevent any single domain from dominating the pre-training process;

Blanchard (2011), DFDG & MDA: analysis of generalization ability

Maps samples from the input space into an RKHS via a kernel function, then seeks a dimension-reducing transform under which the discrepancy between domains is small while the discrepancy between classes is large; the reproducing property of the RKHS (kernel trick) reduces the optimization to an algebraic problem (sketch below).
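A simplified kernel-LDA-style sketch of that recipe (not the exact DFDG/MDA algorithm): build a kernel matrix, express the inter-class and inter-domain scatters of RKHS means in terms of kernel expansion weights, and solve a generalized eigenproblem for directions that separate classes while aligning domains. All names and the toy data are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, p = 90, 5
X = rng.normal(size=(n, p))
domain = rng.integers(0, 3, size=n)     # 3 source domains
label = rng.integers(0, 2, size=n)      # 2 classes

# RBF kernel: samples mapped into an (implicit) RKHS.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2.0 * np.median(sq)))

def scatter(groups):
    """Between-group scatter of RKHS means, expressed so that
    alpha^T K S K alpha is the scatter of the projected data."""
    S = np.zeros((n, n))
    mean_all = np.full(n, 1.0 / n)
    for g in np.unique(groups):
        idx = groups == g
        d = idx / idx.sum() - mean_all
        S += idx.sum() * np.outer(d, d)
    return S

Sc = K @ scatter(label) @ K      # inter-class scatter (to maximize)
Sd = K @ scatter(domain) @ K     # inter-domain scatter (to minimize)

# Generalized eigenproblem: large class scatter relative to domain
# scatter; eps*K plus a small ridge regularizes, kernel-LDA style.
eps = 1e-3
vals, vecs = eigh(Sc, Sd + eps * K + 1e-8 * np.eye(n))
B = vecs[:, -2:]                 # top-2 directions (largest eigenvalues)
Z = K @ B                        # domain-aligned, class-discriminative coords
```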

Domain Generalization with Adversarial Feature Learning

CVPR 2018; on top of an adversarial autoencoder (AAE) shared across source domains, adds an MMD regularizer that shrinks the discrepancy between domains; the adversarial part constrains the latent distributions of all domains to a given prior, so the encoder can generalize to an unseen target domain. [Does not consider the case where P(Y|X) varies across domains]
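A minimal sketch of the MMD term such a regularizer computes between two source domains' latent codes, assuming an RBF kernel and the biased estimator; the commented total loss only mirrors the paper's reconstruction + adversarial + MMD structure, with illustrative names.

```python
import torch

def rbf_mmd2(h_a: torch.Tensor, h_b: torch.Tensor, sigma: float = 1.0):
    """Squared MMD (biased estimator) with an RBF kernel between two
    domains' latent codes; driving it to zero aligns the distributions."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma ** 2))
    return k(h_a, h_a).mean() + k(h_b, h_b).mean() - 2 * k(h_a, h_b).mean()

# Toy usage: codes from the shared encoder for two source domains.
h1, h2 = torch.randn(32, 16), torch.randn(32, 16)
mmd_penalty = rbf_mmd2(h1, h2)
# total_loss = reconstruction + adversarial(prior) + lambda_mmd * mmd_penalty
```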

Domain Generalization via Entropy Regularization