1. intro

self-supervised pre-training + few-shot fine-tuning on downstream tasks

Earlier graph generative models such as VGAE are ill-suited for pre-training:

  1. they do not generate node attributes;
  2. they do not scale to large graphs;

framework:

(figure: overview of the GPT-GNN framework)

contributions:

  1. decompose attributed graph generation into two parts, namely attribute generation and edge generation; the objective function is the joint maximum likelihood of both;
  2. an efficient, scalable pre-training framework;
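The factorization in contribution 1 can be sketched as follows. This is a simplified form (the paper additionally handles node orderings and partially observed edges); for each node, attributes are generated first, then edges conditioned on those attributes:

```latex
% Simplified per-node factorization of attributed graph generation:
% X_i = attributes of node i, E_i = edges of node i,
% X_{<i}, E_{<i} = attributes/edges of previously generated nodes.
\log p(X, E) = \sum_{i} \log p(X_i, E_i \mid X_{<i}, E_{<i})
             = \sum_{i} \log \big[ \underbrace{p(X_i \mid X_{<i}, E_{<i})}_{\text{attribute generation}}
               \cdot \underbrace{p(E_i \mid X_i, X_{<i}, E_{<i})}_{\text{edge generation}} \big]
```

Maximizing this joint likelihood is what couples the two generation tasks in one objective.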

2.

Unlike prior graph generation work, GPT-GNN's pre-training poses a harder graph task and thus can guide the model to learn more complex semantics and structure of the input graph.
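The pre-training task above can be sketched as a toy loop: mask a node's attribute, regenerate it from the observed subgraph, then predict the node's held-out edges. This is a hedged illustration with a stand-in 1-layer mean-aggregation "GNN", not the authors' implementation; `pretrain_step`, `W_attr`, and `W_edge` are hypothetical names.

```python
# Sketch of GPT-GNN-style generative pre-training (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def pretrain_step(X, A, target, W_attr, W_edge):
    """Combined attribute-generation + edge-generation loss for one node.
    X: (n, d) node attributes, A: (n, n) symmetric 0/1 adjacency."""
    X_masked = X.copy()
    X_masked[target] = 0.0                      # hide the target's attribute
    # toy 1-layer "GNN": mean of neighbor attributes as node embeddings
    deg = A.sum(1, keepdims=True).clip(min=1)
    H = (A @ X_masked) / deg
    # (1) attribute generation: reconstruct the masked attribute
    x_hat = H[target] @ W_attr
    attr_loss = ((x_hat - X[target]) ** 2).mean()
    # (2) edge generation: score links from target to every node
    scores = H @ W_edge @ H[target]
    probs = 1.0 / (1.0 + np.exp(-scores))
    edge_loss = -(A[target] * np.log(probs + 1e-9)
                  + (1 - A[target]) * np.log(1 - probs + 1e-9)).mean()
    # joint likelihood -> sum of the two negative log-likelihood terms
    return attr_loss + edge_loss

n, d = 6, 4
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 0)
A = np.maximum(A, A.T)                          # undirected graph
loss = pretrain_step(X, A, target=0,
                     W_attr=rng.normal(size=(d, d)),
                     W_edge=rng.normal(size=(d, d)))
print(loss)
```

Both terms are non-negative, so the combined loss is a proper quantity to minimize; predicting attributes *and* edges is what makes this harder than reconstructing structure alone, as in VGAE.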

Our work is different from "Strategies for Pre-training Graph Neural Networks", as our goal is to pre-train GNNs over a single (large-scale) graph and conduct node-level transfer.