hwwang55 / graphgan
A TensorFlow implementation of GraphGAN (Graph Representation Learning with Generative Adversarial Nets)
License: MIT License
1. I found that the node classification accuracies are only about 20% for all algorithms. But how is "accuracy" defined for a multi-label dataset?
2. I remember that with a 9:1 train/test split, the macro-F1 of DeepWalk should be around 0.28 in its paper...
Did I misunderstand something?
1. What is `reward`? It seems to be a reinforcement-learning concept that the paper never mentions. How is the reward defined?
2. Why is no bias used in link_prediction?
Hi, I worked through your derivation from Eq. 7 to Eq. 8 and something seems off. In the subtree rooted at v, shouldn't v_i be farther from v_c? Shouldn't the index j in Eq. 8 run from 1 to (m+1)? Also, the last step of your definition of G in Eq. 7 multiplies in the transition probability from the leaf node back to its parent, p_c(v_{r_{m-1}} | v_{r_m}), but Eq. 8 does not seem to reflect this. Is my derivation wrong?
In the earlier version of your code I was getting better accuracies on the various datasets I tried.
Was something wrong with your previous code?
Could you please tell me what changes you made?
I see that you treat D and G equally in the link prediction evaluation: you use the learned embedding matrices of both D and G to measure accuracy (Fig. 4 in the paper). As far as I know, the roles of D and G are opposed to each other, so why are they treated equally in this task? Thank you!
Hi, I really admire your work and have been following it for the past two years! You mentioned in your PaperWeekly talk that nodes in GraphGAN carry no attributes and no labels. May I ask which dataset was used for the node classification experiments in the paper, and how was it done? Would it be feasible to add attributes and labels to GraphGAN? Any advice would be greatly appreciated! [email protected]
In the paper, each node has two representations, one for the generator and one for the discriminator:
I have another question. If I have a new dataset, how can I get the negative edges?
Looking forward to your reply. Thanks!
Can you describe the strategy used for generating the pre-trained embeddings and its effect on the results?
If we initialize the embeddings randomly or from a Gaussian instead, can the current model achieve similar results?
Dear author,
I have a small question. The CA-GrQc dataset has 5242 nodes in total, so why does CA-GrQc_pre_train.emb only provide vectors for 5119 of them? I know the missing nodes end up randomly initialized, but I still wonder why all of them are not provided from the start.
Thanks in advance for your help.
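The padding behavior described above could be sketched as follows. This loader and its random-initialization strategy are my assumption about how the repo handles missing nodes, not its exact code:

```python
import numpy as np

def read_embeddings(lines, n_node, n_embed, seed=0):
    """Sketch of a loader that pads missing nodes (an assumption, not the
    repo's exact code): every row starts randomly initialized, and only the
    node ids present in the .emb file are overwritten."""
    rng = np.random.default_rng(seed)
    matrix = rng.uniform(-0.5, 0.5, size=(n_node, n_embed))
    for line in lines:
        parts = line.split()
        if len(parts) != n_embed + 1:  # skip the "<count> <dim>" header line
            continue
        matrix[int(parts[0])] = [float(x) for x in parts[1:]]
    return matrix
```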
It seems that the number of nodes for which node embeddings are provided is not equal to the number of nodes in the network. This causes the accuracy to be lower than reported in the paper. If anyone has used DeepWalk to generate their own embeddings, could you share them with me? @hwwang55
File "/home/hxm/link_prediciton/graphutils.py", line 83, in read_embeddings
embedding_matrix[int(emd[0]), :] = str_list_to_float(emd[1:])
IndexError: index 34 is out of bounds for axis 0 with size 34
This error is always reported when I run on my own network and read its pre-trained embedding. Does anyone know the reason?
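A likely cause (a guess, not confirmed from the repo): node ids in .emb files are zero-based, so an id of 34 requires a matrix with at least 35 rows, i.e. n_node must exceed the largest id. A small sanity check along those lines:

```python
def check_embedding_ids(emb_lines, n_node):
    """emb_lines: data lines of a .emb file ("<id> <v1> <v2> ...",
    with the "<count> <dim>" header line already removed)."""
    # Collect every id that falls outside the valid range 0..n_node-1.
    bad = sorted({int(line.split()[0]) for line in emb_lines} - set(range(n_node)))
    if bad:
        raise ValueError(f"node ids {bad} out of range for n_node={n_node}")
```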
Hi,
Thanks for sharing the code!
Could you please upload a sample of the data used in your experiments, so we can see the entire workflow of the technique? Or, alternatively, describe the data format the model requires? It's hard to run experiments without this information.
but the result differs from the paper. I think there may be an error in my code, especially in the recommendation part.
It would be best if someone else has tried this too; I'd like your advice.
Hi! The source code seems to be missing the `results` folder?
GraphGAN/src/GraphGAN/graph_gan.py
Line 355 in b56ed5c
Hi,
In this line, the generator embedding is written to the discriminator's file and vice versa.
Can you please verify?
Why do I get this error when running the code?
Traceback (most recent call last):
File "graph_gan.py", line 11, in
from src import utils
ModuleNotFoundError: No module named 'src'
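This error usually means the script was launched from inside `src/GraphGAN/`, so Python cannot resolve the top-level `src` package that `from src import utils` expects. A minimal sketch of a workaround, assuming the `GraphGAN/src/GraphGAN/graph_gan.py` layout shown earlier:

```python
import os
import sys

# Put the repository root -- the directory that *contains* src/ -- on the
# module search path before `from src import utils` runs. The "../.." here
# assumes graph_gan.py sits two levels below the repo root.
repo_root = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".."))
sys.path.insert(0, repo_root)
```

Alternatively, running something like `python -m src.GraphGAN.graph_gan` from the repository root should have the same effect.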
Line 222 in graph_gan.py: the reward is the running result of discriminator.score, so why not use discriminator.reward?
Hello, I experimented with your code, running the link prediction task on the example CA-GrQc dataset. I could only achieve
gen:0.5659075224292616
dis:0.7060041407867494
with the default configuration.
However, your paper reported much higher accuracy values, can you explain why?
Hi,
Thanks for sharing the code!
Could you share the code for the recommendation task too?
Thank you.
In the paper, the G-step update follows formula 4, in which the generative probability is defined as the product of several probabilities along the path from node v_c to node v. In the program, however, only the nodes inside the window are used to update the generator. I am wondering whether this approximation is justified.
If only the sampled node were used, without the path leading to it, the efficiency would be quite low.
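My reading of the path probability in formula 4, as a sketch rather than the authors' code: the probability of reaching v from v_c is the product of softmax-normalized transition probabilities over each node's tree neighbors along the BFS-tree path. The `tree` structure and function names below are hypothetical:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def path_probability(emb, path, tree):
    """Probability of reaching path[-1] from the root path[0]: the product of
    softmax transition probabilities over each node's tree neighbors.

    emb: (n_nodes, dim) embedding matrix; path: list of node ids;
    tree: dict mapping a node id to its list of tree-neighbor ids.
    """
    prob = 1.0
    for cur, nxt in zip(path, path[1:]):
        nbrs = tree[cur]
        p = softmax(emb[nbrs] @ emb[cur])  # transition distribution at `cur`
        prob *= p[nbrs.index(nxt)]
    return prob
```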
Hi hwwang55, I cloned the repo and wrote the code below to evaluate the performance of the pre-trained model on the CA-GrQc test set, but only got 75.98% using the generator embedding, which is far from the 84.9% stated in the paper. Is there anything wrong?
n_node = 5242

def evaluation():
    # assumes `lp` (the repo's link prediction evaluation module) and `config`
    # have been imported
    lpe = lp.LinkPredictEval(
        config.pretrain_emb_filename_g,
        config.test_filename,
        config.test_neg_filename,
        n_node,
        config.n_emb
    )
    result = lpe.eval_link_prediction()
    print(result)

if __name__ == '__main__':
    evaluation()
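For context, a hedged sketch of what `eval_link_prediction` presumably computes: inner-product scores for candidate edges, thresholded, with accuracy taken over positive and negative test edges. The exact scoring rule and threshold are assumptions, not the repo's confirmed code:

```python
import numpy as np

def link_prediction_accuracy(emb, pos_edges, neg_edges, threshold=0.0):
    # Score each candidate edge by the inner product of its endpoint
    # embeddings and threshold it; a positive edge should score above the
    # threshold, a negative edge below.
    def predict(edges):
        return np.array([emb[u] @ emb[v] > threshold for u, v in edges])
    correct = predict(pos_edges).sum() + (~predict(neg_edges)).sum()
    return correct / (len(pos_edges) + len(neg_edges))
```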
What are the generator and discriminator embeddings that are then written to the separate .emb files? I don't understand why these are generated during training, or what their use is. Could someone please explain?
When I try to print the loss for D and G, I find that after the first epoch there is no input to training any more.
I think the problem is in the for loop.
If you want to generate new nodes every dis_interval epochs as the comment says, then you should delete the if statement: it only generates data on interval epochs, and in between the input is overwritten with empty lists.
Alternatively, you can move the initialization of the empty lists outside the for loop and reuse that input for every epoch.
This might be why none of us can reproduce the results presented in your paper.
Hope this gets fixed.
# D-steps
for d_epoch in range(config.n_epochs_dis):
    center_nodes = []
    neighbor_nodes = []
    labels = []
    # generate new nodes for the discriminator for every dis_interval iterations
    if d_epoch % config.dis_interval == 0:
        center_nodes, neighbor_nodes, labels = self.prepare_data_for_d()
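The fix proposed in the issue above can be sketched as follows: the lists are hoisted out of the loop so that non-interval epochs reuse the last generated batch instead of seeing empty input. `prepare_data_for_d` and the interval values here are stand-ins for the repo's own:

```python
def run_d_steps(n_epochs_dis, dis_interval, prepare_data_for_d):
    # Initialized once, outside the loop, so the batch survives across epochs.
    center_nodes, neighbor_nodes, labels = [], [], []
    history = []
    for d_epoch in range(n_epochs_dis):
        if d_epoch % dis_interval == 0:  # refresh only on interval epochs
            center_nodes, neighbor_nodes, labels = prepare_data_for_d()
        history.append(len(center_nodes))  # every epoch now sees non-empty data
    return history
```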