# 读论文

## Heterogeneous Graph Transformer

### General GNN

From layer $l-1$ to layer $l$:

$H^l[t] \leftarrow \underset{\forall s \in N(t), \forall e \in E(s, t)}{\text { Aggregate }}\left(\operatorname{Extract}\left(H^{l-1}[s] ; H^{l-1}[t], e\right)\right)$
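The Extract/Aggregate template above can be sketched in numpy. This is a hypothetical minimal example (a shared linear map as Extract, mean as Aggregate), not the formulation of any specific paper:

```python
import numpy as np

def gnn_layer(H, neighbors, W):
    """One generic GNN layer: Extract transforms each neighbor's
    representation (here a shared linear map W), and Aggregate
    (here the mean) combines the results for each target node t."""
    H_new = np.zeros_like(H)
    for t, srcs in neighbors.items():
        msgs = np.stack([W @ H[s] for s in srcs])  # Extract per source s
        H_new[t] = msgs.mean(axis=0)               # Aggregate over N(t)
    return H_new

# Toy graph: node 0 has neighbors 1 and 2
H = np.eye(3)                       # 3 nodes, 3-dim one-hot features
neighbors = {0: [1, 2], 1: [0], 2: [0]}
H1 = gnn_layer(H, neighbors, np.eye(3))
```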

### HETEROGENEOUS GRAPH TRANSFORMER

Its idea is to use the meta relations of heterogeneous graphs to parameterize weight matrices for the heterogeneous mutual attention, message passing, and propagation steps.

### General attention-based GNNs

$H^l[t] \leftarrow \underset{\forall s \in N(t), \forall e \in E(s, t)}{\text { Aggregate }}(\text { Attention }(s, t) \cdot \text { Message }(s))$

$\begin{aligned} \operatorname{Attention}_{G A T}(s, t) & =\underset{\forall s \in N(t)}{\operatorname{Softmax}}\left(\vec{a}\left(W H^{l-1}[t] \| W H^{l-1}[s]\right)\right) \\ \operatorname{Message}_{G A T}(s) & =W H^{l-1}[s] \\ \operatorname{Aggregate}_{G A T}(\cdot) & =\sigma(\operatorname{Mean}(\cdot)) \end{aligned}$
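A numpy sketch of the GAT-style attention above, under assumed shapes (the LeakyReLU on the raw score, present in the original GAT, is omitted to stay close to the formula as written here):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_attention(H, t, neighbors, W, a):
    """Score each neighbor s of t by a . [W h_t || W h_s], softmax
    over N(t), then aggregate the weighted messages W h_s."""
    scores = np.array([a @ np.concatenate([W @ H[t], W @ H[s]])
                       for s in neighbors])
    alpha = softmax(scores)                      # attention over N(t)
    msgs = np.stack([W @ H[s] for s in neighbors])
    return alpha, (alpha[:, None] * msgs).sum(axis=0)
```

With a zero attention vector the scores tie, so the weights reduce to a uniform average over the neighbors.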

### HGT computation

$\begin{gathered} \operatorname{Head}_k^{A T T}(i, j)=\left(\frac{\mathbf{K}_i^k \mathbf{W}_{\psi(i, j)}^{A T T} \mathbf{Q}_j^{k^{\mathrm{T}}}}{\sqrt{d}}\right) \mu(\phi(i), \psi(i, j), \phi(j)) \\ \operatorname{Attention}(i, j)=\operatorname{Softmax}_{i \in N(j)}\left(\|_k \operatorname{Head}_k^{A T T}(i, j)\right) \end{gathered}$

$\begin{gathered} \operatorname{Message}(i, j)=\|_k \mathbf{W}_{\phi(i)}^k \mathbf{h}_i \mathbf{W}_{\psi(i, j)}^{M S G} \\ \mathbf{h}_j=\sum_{i \in N(j)} \operatorname{Attention}(i, j) \odot \operatorname{Message}(i, j) \end{gathered}$
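A single-head numpy sketch of the two HGT equations above, with node-type-specific K/Q/message projections and edge-type-specific $\mathbf{W}^{ATT}$, $\mathbf{W}^{MSG}$, and prior $\mu$; all dictionary names and shapes are illustrative assumptions, and the multi-head concatenation and target-specific output projection of the full model are omitted:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hgt_head(H, j, neighbors, node_type, edge_type,
             W_K, W_Q, W_M, W_ATT, W_MSG, mu):
    """One HGT attention head for target node j:
    K_i = W_K[type(i)] h_i,  Q_j = W_Q[type(j)] h_j,
    score(i,j) = (K_i W_ATT[e] Q_j / sqrt(d)) * mu[e],
    message(i,j) = (W_M[type(i)] h_i) W_MSG[e]."""
    d = H.shape[1]
    Q = W_Q[node_type[j]] @ H[j]
    scores, msgs = [], []
    for i in neighbors:
        e = edge_type[(i, j)]
        K = W_K[node_type[i]] @ H[i]
        scores.append((K @ W_ATT[e] @ Q) / np.sqrt(d) * mu[e])
        msgs.append((W_M[node_type[i]] @ H[i]) @ W_MSG[e])
    att = softmax(np.array(scores))              # softmax over N(j)
    return sum(a * m for a, m in zip(att, msgs)) # weighted message sum
```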

## Heterogeneous graph neural networks analysis: a survey of techniques, evaluations and applications

Metapath: a path defined by a sequence of node and edge types in a heterogeneous graph.

Heterogeneous Graph Embedding

### convolution-based HGNN

#### HAN

HAHE: uses cosine similarity instead of an attention mechanism to calculate the two kinds of importance (node-level and semantic-level).

MAGNN: also encodes the intermediate nodes of a metapath into the semantic information via an encoder; the paper does not explain in detail how the encoder works, so I don't fully understand it.

#### GTN

Graph Transformer Networks: generates subgraphs and then learns embeddings on them.

But GTN only considers edge types while ignoring the different types of nodes.

#### HetSANN

Heterogeneous Graph Structural Attention Neural Network

#### HGT

Both HetSANN and HGT use hierarchical attention mechanisms in place of metapaths, but this produces more parameters.

### Autoencoder‑based approaches

#### HIN2Vec

conceptualNN originally aimed to compute the probability of every relation between $i$ and $j$, but that is too costly, so it instead computes the probability that $i$ and $j$ are connected by a specific relation $r$:

$P(r \mid i, j) = \operatorname{sigmoid}\left(\sum \mathbf{W}_I\vec i \odot \mathbf{W}_J\vec j \odot \mathbf{W}_R\vec r\right)$
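A numpy sketch of this scoring function, with the Hadamard product made explicit (the regularization function applied to $\mathbf{W}_R\vec r$ in the original HIN2Vec paper is omitted here):

```python
import numpy as np

def hin2vec_prob(x_i, x_j, x_r, W_I, W_J, W_R):
    """P(r | i, j) = sigmoid(sum(W_I i * W_J j * W_R r)),
    where * is the elementwise (Hadamard) product."""
    z = (W_I @ x_i) * (W_J @ x_j) * (W_R @ x_r)
    return 1.0 / (1.0 + np.exp(-z.sum()))
```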

#### SHINE

utilizes the topological structure of heterogeneous graphs

#### HEGAN

discriminator:

$D\left(\mathbf{h}_j \mid i, r ; \theta^D\right)=\frac{1}{1+\exp \left(-\mathbf{h}_i^{D^{\mathrm{T}}} \mathbf{M}_r^D \mathbf{h}_j^D\right)}$

generator:

$G\left(i, r ; \theta^G\right)=\sigma\left(\mathbf{W}_L \cdots \sigma\left(\mathbf{W}_1 \mathbf{h}+\mathbf{b}_1\right)+\mathbf{b}_L\right)$
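A numpy sketch of the two components: the discriminator scores a candidate neighbor with a relation-specific bilinear form, and the generator is an $L$-layer MLP (sigmoid activations throughout, matching $\sigma$ in the formula above; names are illustrative):

```python
import numpy as np

def hegan_discriminator(h_i, h_j, M_r):
    """D(h_j | i, r): probability that i and j are connected by
    relation r, via the bilinear score h_i^T M_r h_j."""
    return 1.0 / (1.0 + np.exp(-(h_i @ M_r @ h_j)))

def hegan_generator(h, Ws, bs):
    """G(i, r): feed h through L sigmoid layers to produce a
    fake neighbor embedding for the discriminator to judge."""
    for W, b in zip(Ws, bs):
        h = 1.0 / (1.0 + np.exp(-(W @ h + b)))
    return h
```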

### Dynamic heterogeneous graph learning

posted @ 2023-08-26 13:26 lcyfrog