
graphgan's People

Contributors

hwwang55, suanrong, wangjialin114

graphgan's Issues

Why are the accuracies so low in your paper?

1. I found that the node classification accuracies are only about 20% for all algorithms. But how is "accuracy" defined for a multi-label dataset?
2. I remember that with a training:test split of 9:1, the macro-F1 of DeepWalk should be around 0.28 in its paper...
Did I misunderstand something?
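For reference, multi-label node classification is usually reported as micro/macro F1 rather than plain accuracy. A minimal sketch with scikit-learn, where y_true and y_pred are made-up binary indicator matrices (one row per node, one column per label):

from sklearn.metrics import f1_score
import numpy as np

# hypothetical label indicator matrices: rows = nodes, columns = labels
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])

print(f1_score(y_true, y_pred, average="macro"))  # macro-F1, the metric DeepWalk reports
print(f1_score(y_true, y_pred, average="micro"))  # micro-F1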

I have two questions

  1. What is the reward? It seems to be a reinforcement learning concept, and it is not mentioned in the paper. How is the reward defined?
  2. Why is no bias used in link_prediction?

Derivation from Eq. 7 to Eq. 8

Hi, I worked through your derivation from Eq. 7 to Eq. 8 and it seems problematic. For the subtree rooted at v, shouldn't v_i be farther from v_c, i.e. shouldn't the product in Eq. 8 run over j from 1 to (m+1)? Also, in the definition of G in Eq. 7 the last step multiplies by the transition probability from the leaf node back to its parent, p_c(v_{r_{m-1}} | v_{r_m}), but Eq. 8 does not seem to reflect this. Is there a problem with my derivation?
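For reference, a hedged reconstruction of the generative probability the question refers to (Eq. 7 of the paper), assuming the path notation $P_{v_c \to v} = (v_{r_0}, v_{r_1}, \ldots, v_{r_m})$ with $v_{r_0} = v_c$ and $v_{r_m} = v$:

$$G(v \mid v_c) = \Big( \prod_{j=1}^{m} p_c(v_{r_j} \mid v_{r_{j-1}}) \Big) \cdot p_c(v_{r_{m-1}} \mid v_{r_m})$$

The last factor is exactly the leaf-to-parent transition probability the question asks about.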

What are the major updates in the new code?

With the earlier version of your code I was getting better accuracies on the various datasets I tried.
Was there something wrong with the previous code?
Could you please describe the changes you made?

Evaluation of the link prediction for D and G

I see that you treat D and G equally in the evaluation of link prediction: you use the learned embedding matrices of both D and G to evaluate link prediction accuracy (Fig. 4 in the paper). As far as I know, the roles of D and G are opposed to each other, so why are they treated equally in this task? Thank you!
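As an illustration of why the two matrices can be evaluated the same way: either one can be fed to the same edge scorer. A minimal sketch, assuming edges are scored by the sigmoid of the inner product of the endpoint vectors (the repo's LinkPredictEval may differ in its details):

import numpy as np

def edge_score(emb, u, v):
    """Score a candidate edge (u, v) given an (n_node, n_emb) embedding matrix."""
    return 1.0 / (1.0 + np.exp(-np.dot(emb[u], emb[v])))  # sigmoid of inner product

# the same scorer accepts either the generator's or the discriminator's matrix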

final learned vertex representations are gi’s

In the paper, each vertex has two sets of representations, one for the generator and one for the discriminator:

  1. What is the motivation for this? Could the generator and discriminator share a single set of representations?
  2. The code uses both sets of representations to compute predictions; does the final result only use the generator's representations? Have you tried concatenating the two (as sketched below) or anything else?
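A minimal sketch of the concatenation idea mentioned in point 2, assuming emb_g and emb_d are the learned matrices read from the generator's and discriminator's .emb files; the placeholders and sizes below are only illustrative:

import numpy as np

n_node, n_emb = 5242, 50  # illustrative sizes; stand-ins for the matrices read from the .emb files
emb_g = np.random.normal(0.0, 0.1, size=(n_node, n_emb))  # generator embeddings (placeholder)
emb_d = np.random.normal(0.0, 0.1, size=(n_node, n_emb))  # discriminator embeddings (placeholder)

emb_concat = np.concatenate([emb_g, emb_d], axis=1)  # shape (n_node, 2 * n_emb)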

pretrained embeddings

Can you describe the strategy used to generate the pre-trained embeddings and its effect on the results?
If we instead use a random or Gaussian strategy to initialize the embeddings, can the current model achieve similar results?
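A minimal sketch of such a random/Gaussian initialization, with illustrative sizes; whether the model then reaches similar results is exactly the open question here:

import numpy as np

n_node, n_emb = 5242, 50  # illustrative values (CA-GrQc node count, embedding dimension)
rng = np.random.default_rng(0)

# Gaussian initialization in place of the pre-trained vectors
init_emb = rng.normal(loc=0.0, scale=0.1, size=(n_node, n_emb))

# uniform alternative, scaled by the embedding dimension
# init_emb = rng.uniform(-0.5 / n_emb, 0.5 / n_emb, size=(n_node, n_emb))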

About the embedding initialization

Dear author,
I have a small question for you. In the CA-GrQc dataset the total number of nodes is 5242, so why does CA-GrQc_pre_train.emb only provide vectors for 5119 nodes? I know the missing nodes are initialized randomly in the end, but I am still a bit puzzled why they are not all given from the start.
Could you please help clarify this? Thank you.

Number of Nodes in the pre-training embedding file

It seems that the number of nodes for which embeddings are provided is not equal to the number of nodes in the network. This causes the accuracy to be lower than what is reported in the paper. If anyone has used DeepWalk to generate their own embeddings, could you share them with me? @hwwang55

Error when reading the pre-trained embeddings?

File "/home/hxm/link_prediciton/graphutils.py", line 83, in read_embeddings
embedding_matrix[int(emd[0]), :] = str_list_to_float(emd[1:])
IndexError: index 34 is out of bounds for axis 0 with size 34

This error is always reported when I run my own network and read its pre-trained embeddings. Does anyone know the reason?
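The usual cause is that the n_node passed to read_embeddings is smaller than the largest node id in the .emb file (for example because the ids are not contiguous or not zero-based). A hedged, defensive sketch of a reader that tolerates this; the names mirror the snippet above, but this is not the repo's code:

import numpy as np

def read_embeddings(filename, n_node, n_embed):
    """Read a word2vec-style .emb file; ignore node ids outside [0, n_node)."""
    embedding_matrix = np.random.rand(n_node, n_embed)  # random fallback for missing nodes
    with open(filename, "r") as f:
        next(f)  # skip the header line ("n_node n_embed")
        for line in f:
            emd = line.split()
            idx = int(emd[0])
            if 0 <= idx < n_node:  # guard against out-of-range ids
                embedding_matrix[idx, :] = [float(x) for x in emd[1:]]
    return embedding_matrix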

sample of data

Hi,
Thanks for sharing the code!
Could you please upload a sample of the data used in your experiments, so we can see the entire workflow of the technique? Or, alternatively, describe the format of the data that the model requires? It's hard to run experiments without this information.
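For what it's worth, the shipped files and the snippets in the other issues suggest (this is inferred, not an official spec) that the training/test files are plain space-separated edge lists, one edge per line:

0 1
0 2
1 3

and that the *_pre_train.emb files follow the word2vec text format: a header line "n_node n_emb", then one line per node of the form "node_id value_1 ... value_n_emb".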

No module named src

Why do I get this error when I run the code?
Traceback (most recent call last):
File "graph_gan.py", line 11, in
from src import utils
ModuleNotFoundError: No module named 'src'
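This usually just means Python was started from a directory where the src package is not on the import path (for example from inside the package directory itself). A hedged workaround, assuming the repository root is the directory that contains src/:

import os
import sys

# add the repository root (the directory containing src/) to the import path;
# adjust the number of ".." to match where the script actually lives
repo_root = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".."))
sys.path.insert(0, repo_root)

from src import utils  # should now be resolvable

Running the script from the repository root (so that src/ is directly importable) avoids the path manipulation entirely.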

Is the reward wrong?

At line 222 in graph_gan.py, the reward is taken from running discriminator.score; why not use discriminator.reward?

Inconsistency of Accuracy results

Hello, I experimented with your code by running the example CA-GrQc dataset on the link prediction task, but I could only achieve

gen:0.5659075224292616
dis:0.7060041407867494

with the default configuration.
However, your paper reports much higher accuracy values; can you explain why?

code for recommendation

Hi,
Thanks for sharing the code!
Could you share the code for the recommendation task too?
Thank you!

About Generative Probability

In the paper, the G-step update follows formula 4, in which the generative probability is defined as the product of several probabilities along the path from node v_c to node v. However, the program only considers the nodes in the window when updating the generator. I am wondering whether this approximation is plausible.

If only the sampled node were used, without the path leading to it, the efficiency would be quite low.

The pre-trained model can't achieve the accuracy presented in the paper

Hi hwwang55, I cloned the repo and wrote the code below to evaluate the performance of the pre-trained model on the CA-GrQc test set, but I only got 75.98% using the generator embeddings, which is far from the 84.9% stated in the paper. Is there anything wrong?

# config and lp (the link prediction evaluation module) are assumed to be
# imported from the GraphGAN source tree (its config and link_prediction files)
n_node = 5242  # number of vertices in CA-GrQc

def evaluation():
    # link prediction on the test set using the pre-trained generator embeddings
    lpe = lp.LinkPredictEval(
        config.pretrain_emb_filename_g,
        config.test_filename,
        config.test_neg_filename,
        n_node,
        config.n_emb
    )
    result = lpe.eval_link_prediction()
    print(result)

if __name__ == '__main__':
    evaluation()

Generator and Discriminator embeddings

What are the generator and discriminator embeddings that are written to the separate .emb files? I don't understand why they are generated during training, or what they are used for. Could someone please explain?

A serious BUG?

When I try to print the loss for D and G, I find that after the first epoch there is no input to training any more.
I think the problem is in the for loop.
If, as the comments say, you want to generate new nodes every dis_interval epochs, then you should delete the if statement, because it only generates data on the first epoch; from the second epoch on, the input is overwritten with empty lists.
Alternatively, move the initialization of the empty lists outside the for loop and reuse that input for every epoch (a sketch follows the code below).
That might be why none of us can reproduce the results presented in your paper.
I hope this gets fixed.

# D-steps
for d_epoch in range(config.n_epochs_dis):
    center_nodes = []
    neighbor_nodes = []
    labels = []

    # generate new nodes for the discriminator for every dis_interval iterations
    if d_epoch % config.dis_interval == 0:
       center_nodes, neighbor_nodes, labels = self.prepare_data_for_d()
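A hedged sketch of the second fix suggested above (initialize the lists once, before the loop, so epochs that do not hit the dis_interval refresh keep the previously generated data); this mirrors the snippet above but is not a tested patch:

# D-steps (sketch of the fix: initialize once, refresh every dis_interval epochs)
center_nodes, neighbor_nodes, labels = [], [], []
for d_epoch in range(config.n_epochs_dis):
    # regenerate the discriminator's training data every dis_interval epochs;
    # on other epochs, keep the most recently generated data instead of empty lists
    if d_epoch % config.dis_interval == 0:
        center_nodes, neighbor_nodes, labels = self.prepare_data_for_d()
    # the discriminator update on (center_nodes, neighbor_nodes, labels) follows here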
