Hi all,
Thanks for the great work!
I noticed a potential inconsistency between the implementation of the _NodeConv class and the equation described in the paper.
Current implementation:
def forward(self, graph, nfeat, efeat):
    h = (1 + self.eps) * nfeat
    h = self.gconv(graph, h, efeat)
    h = self.mlp(h)
    h = F.leaky_relu(self.batchnorm(h))
    return h
However, the equation in the paper appears to add the residual connection *after* the graph convolution, which would suggest the code should instead be:
def forward(self, graph, nfeat, efeat):
    h = self.gconv(graph, nfeat, efeat) + self.eps * nfeat
    h = self.mlp(h)
    h = F.leaky_relu(self.batchnorm(h))
    return h
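To illustrate that the two orderings are not equivalent in general, here is a minimal numeric sketch. It replaces the real message-passing layer with a hypothetical scalar map `gconv(x) = 2 * x` (purely a stand-in, not the actual `self.gconv`), so the difference between "scale, then convolve" and "convolve, then add scaled residual" is visible directly:

```python
# Hypothetical toy stand-in for self.gconv: a linear map x -> 2x.
def gconv(x):
    return 2 * x

eps = 0.1
nfeat = 1.0

# Current implementation: scale the input by (1 + eps), then convolve.
h_current = gconv((1 + eps) * nfeat)

# Formulation suggested above: convolve first, then add eps * nfeat.
h_suggested = gconv(nfeat) + eps * nfeat

print(h_current, h_suggested)  # the two values differ
```

Because `gconv` is not the identity, `gconv((1 + eps) * x)` and `gconv(x) + eps * x` generally disagree (here 2.2 vs 2.1), so the two versions of `forward` would train differently.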
Could you please confirm whether this is an intentional design choice or a possible deviation from the original formulation?
Thanks!