# Relational Graph Attention from Scratch
This repository provides a from-scratch PyTorch implementation of the Relational Graph Attention (RGAT) operator. This implementation is, as the name suggests, meant only for relational (simple/property/attributed) graphs. Two schemes are implemented to compute the attention logits $E^{(r)}_{i,j}$ for an edge from node $j$ to node $i$ under relation $r$:

Additive attention

$$E^{(r)}_{i,j} = \mathrm{LeakyReLU}\left(q^{(r)}_i + k^{(r)}_j\right)$$

or multiplicative attention

$$E^{(r)}_{i,j} = q^{(r)}_i \cdot k^{(r)}_j$$

where $q^{(r)}_i = h^{(r)}_i \, Q^{(r)}$ and $k^{(r)}_j = h^{(r)}_j \, K^{(r)}$ are query and key projections of the intermediate representations $h^{(r)}_i = x_i \, W^{(r)}$, with relation-specific kernels $Q^{(r)}$, $K^{(r)}$, and $W^{(r)}$.
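The two logit schemes can be sketched in plain PyTorch for a single relation (hypothetical tensor names and shapes, not the repository's actual code):

```python
import torch
import torch.nn.functional as F

num_nodes, out_dim = 4, 8
h = torch.randn(num_nodes, out_dim)        # intermediate features h_i^(r) for one relation
q_w = torch.randn(out_dim, 1)              # query kernel Q^(r)
k_w = torch.randn(out_dim, 1)              # key kernel K^(r)

q = h @ q_w                                # query projections, shape [N, 1]
k = h @ k_w                                # key projections, shape [N, 1]

# Additive logits: LeakyReLU(q_i + k_j) for every node pair (i, j)
additive_logits = F.leaky_relu(q + k.t())  # shape [N, N]

# Multiplicative logits: q_i * k_j for every node pair (i, j)
multiplicative_logits = q * k.t()          # shape [N, N]
```

In practice the logits are only computed along existing edges of each relation rather than for all node pairs; the dense form above just makes the broadcasting explicit.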
Two different attention mechanisms are provided for normalizing the logits into attention coefficients $\alpha^{(r)}_{i,j}$:

- Within-relation attention, which applies a softmax over the neighbors of node $i$ connected by the same relation $r$:

$$\alpha^{(r)}_{i,j} = \frac{\exp\left(E^{(r)}_{i,j}\right)}{\sum_{k \in \mathcal{N}^{(r)}_i} \exp\left(E^{(r)}_{i,k}\right)}$$

- Across-relation attention, which normalizes over all neighbors of node $i$ regardless of relation:

$$\alpha^{(r)}_{i,j} = \frac{\exp\left(E^{(r)}_{i,j}\right)}{\sum_{r' \in \mathcal{R}} \sum_{k \in \mathcal{N}^{(r')}_i} \exp\left(E^{(r')}_{i,k}\right)}$$
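The difference between the two normalizations can be illustrated on a single target node; this is a sketch with made-up per-edge logits and relation assignments, not the repository's code:

```python
import torch

# Hypothetical logits for the incoming edges of one target node,
# with each edge labeled by its relation id
logits = torch.tensor([1.0, 2.0, 0.5, 1.5])
relation = torch.tensor([0, 0, 1, 1])

# Across-relation: one softmax over ALL incoming edges
across = torch.softmax(logits, dim=0)       # sums to 1 over all edges

# Within-relation: a separate softmax per relation
within = torch.empty_like(logits)
for r in relation.unique():
    mask = relation == r
    within[mask] = torch.softmax(logits[mask], dim=0)  # sums to 1 per relation
```

Across-relation attention lets coefficients compete between relations, while within-relation attention keeps each relation's coefficients on its own scale.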
To ensure better discriminative power for RGATs, the following cardinality-preservation options have also been made available:

- additive: adds a cardinality-aware term, weighted by a learnable vector $\mathbf{w}$, to the attention-weighted aggregation
- scaled: scales the aggregated message by a learned function $\psi$ of the neighborhood size $|\mathcal{N}_i|$
- f-additive: increments every positive attention coefficient by 1, i.e. $\alpha'_{i,j} = \alpha_{i,j} + 1$ for $\alpha_{i,j} > 0$
- f-scaled: multiplies each attention coefficient by the neighborhood cardinality, i.e. $\alpha'_{i,j} = |\mathcal{N}_i| \cdot \alpha_{i,j}$

where $\alpha_{i,j}$ denotes the normalized attention coefficient for the edge from node $j$ to node $i$.
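As a rough sketch of the two coefficient-level options (f-additive and f-scaled), assuming `alpha` holds the normalized coefficients of one node's neighborhood (illustrative values, not the repository's code):

```python
import torch

alpha = torch.tensor([0.1, 0.6, 0.3])   # normalized attention coefficients for node i
degree = alpha.numel()                   # neighborhood cardinality |N_i|

# f-additive: increment every positive coefficient by 1
f_additive = torch.where(alpha > 0, alpha + 1.0, alpha)

# f-scaled: multiply each coefficient by the neighborhood cardinality
f_scaled = alpha * degree
```

Both modifications break the "coefficients sum to 1" property on purpose, so that nodes with larger neighborhoods produce larger aggregated messages and remain distinguishable.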
More in-depth information about this implementation is available on the official PyTorch Geometric website.
## Requirements

- PyTorch
- PyTorch Geometric
## Usage

### Data
Though `example.py` points to one of the relational entity graphs (AIFB), this implementation also works for other heterogeneous graph datasets such as MUTAG, BGS, and AM. The AIFB dataset contains 8,285 nodes, 58,086 edges, and 4 classes.
### Training and Testing

- The layer implementation can be found in `rgat_conv.py`.
- To train and test RGATs on heterogeneous graphs, run `example.py`; after every epoch, it prints both train and test accuracies.
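The per-epoch output of `example.py` follows the usual PyTorch train/evaluate loop. A minimal stand-alone sketch of that loop, with a placeholder linear model and random data in place of the actual RGAT layer and AIFB graph, looks like:

```python
import torch
import torch.nn.functional as F

# Stand-ins for the RGAT model and the AIFB node-classification data
# (hypothetical shapes; example.py wires up the real dataset and layer)
num_nodes, num_features, num_classes = 100, 16, 4
x = torch.randn(num_nodes, num_features)
y = torch.randint(0, num_classes, (num_nodes,))
train_idx, test_idx = torch.arange(0, 80), torch.arange(80, 100)

model = torch.nn.Linear(num_features, num_classes)  # placeholder for the RGAT model
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

def accuracy(idx):
    pred = model(x[idx]).argmax(dim=-1)
    return (pred == y[idx]).float().mean().item()

for epoch in range(1, 4):
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x[train_idx]), y[train_idx])
    loss.backward()
    optimizer.step()

    model.eval()
    print(f"Epoch {epoch:02d} | train acc {accuracy(train_idx):.4f} "
          f"| test acc {accuracy(test_idx):.4f}")
```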