Comments (18)
Note that this repository uses different dataset splits and a slightly different model architecture than in our original paper. For an exact replication of the experiments in our paper, please have a look at this repository: https://github.com/tkipf/gcn
Hello. I also ran the PyTorch version of GCN on the citeseer dataset, and the accuracy was 69.65%. Furthermore, the accuracy differs on every run on the cora data provided by this repository, whereas it is stable on the datasets provided by the original GCN repository https://github.com/tkipf/gcn.
I can't understand why, and I would really like to know whether anyone else got the same results.
Thank you.
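For anyone hitting the same run-to-run variation: it usually comes from unseeded (or incompletely seeded) randomness in weight initialization and dropout. Below is a minimal sketch of pinning every common seed source before training; the helper name `set_seed` is my own, not part of pygcn, and note that some CUDA kernels can remain nondeterministic even with all seeds fixed.

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Pin the Python, NumPy, and PyTorch RNGs so repeated runs match."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        # Seed every GPU; some CUDA ops are still nondeterministic regardless.
        torch.cuda.manual_seed_all(seed)
```

Calling `set_seed(...)` once at the top of the training script should make CPU runs repeatable.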
Dear professor,
Hello!
I am very interested in your recent GCN work.
Thanks for sharing the code. I used the GCN network on the citeseer dataset, but the accuracy could not reach 70.3%. How did you set the parameters to score that high? Many thanks for your help.
Dear Thomas, if you have time, could you elaborate a bit on the "implementation changes" in the PyTorch version you mentioned above? I'm not necessarily interested in the cora data or those results, but rather in training on other graphs/datasets, and maybe even a parallel version at some point down the road using Horovod or Keras.
Thanks
Thanks again!
One of the most important reasons, I think, is that there is no API in PyTorch with which dropout on sparse input can be implemented. In the TensorFlow version of GCN, Dr. Kipf implements sparse dropout via tf.sparse_retain, but this API has no counterpart in torch. Because dropout is (empirically) an important hyperparameter for GCN, we may not be able to recover the accuracy without solving this problem.
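For what it's worth, the effect of tf.sparse_retain can be approximated in PyTorch by masking the stored values of a coalesced sparse tensor by hand. This is a rough sketch, not pygcn code; the function name `sparse_dropout` is mine.

```python
import torch


def sparse_dropout(x: torch.Tensor, p: float, training: bool = True) -> torch.Tensor:
    """Dropout on a sparse COO tensor: keep each stored value with
    probability 1 - p and rescale the survivors by 1 / (1 - p)."""
    if not training or p == 0.0:
        return x
    x = x.coalesce()
    values = x.values()
    keep = torch.rand(values.shape[0], device=values.device) >= p
    return torch.sparse_coo_tensor(
        x.indices()[:, keep],      # retained coordinates
        values[keep] / (1.0 - p),  # inverted-dropout rescaling
        x.shape,
    )
```

This mirrors the retain-and-rescale behaviour of the TensorFlow implementation, applied to the stored (nonzero) entries only.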
Hello, I would like to know where I can download the citeseer dataset in a form similar to the cora dataset in this implementation (citeseer.cites, citeseer.content).
Thank you very much!
Hello!
Have you found the reason why the accuracy differs on every run on the cora dataset provided by this repository? If you know the reason, please tell me. Thank you.
Dear Dr Kipf,
I am a big fan of your work and really interested in this code you shared.
Regarding the Citeseer dataset, I have downloaded it from https://github.com/kimiyoung/planetoid, which hopefully is the same data you have used.
My problem is reading these files and telling which file defines the graph and which one contains the edges.
Could you please elaborate on this?
Thank you in advance.
Cheers,
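In case it helps with the Planetoid files: there is no separate edge file. The graph itself lives in ind.<dataset>.graph, a pickled dict mapping each node index to its list of neighbor indices, while the other ind.* files hold features and labels. Here is a minimal sketch of turning that file into a dense adjacency matrix; the path is illustrative, and encoding="latin1" is needed because the pickles were written with Python 2.

```python
import pickle

import numpy as np


def load_planetoid_graph(path: str) -> np.ndarray:
    """Read an ind.<dataset>.graph pickle ({node: [neighbors]}) and
    build a dense adjacency matrix from it."""
    with open(path, "rb") as f:
        graph = pickle.load(f, encoding="latin1")
    n = max(graph) + 1
    adj = np.zeros((n, n), dtype=np.float32)
    for node, neighbors in graph.items():
        adj[node, neighbors] = 1.0  # mark each directed edge
    return adj
```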
Dear professor, hello! It makes sense that you load the cora dataset this way and construct the adjacency matrix:

```python
idx_features_labels = np.genfromtxt("{}{}.content".format(path, dataset), dtype=np.dtype(str))
print(idx_features_labels.shape)
features = sp.csr_matrix(idx_features_labels[:, 1:-1], dtype=np.float32)
labels = encode_onehot(idx_features_labels[:, -1])

# build graph
idx = np.array(idx_features_labels[:, 0], dtype=np.int32)
idx_map = {j: i for i, j in enumerate(idx)}
edges_unordered = np.genfromtxt("{}{}.cites".format(path, dataset), dtype=np.int32)
edges = np.array(list(map(idx_map.get, edges_unordered.flatten())), dtype=np.int32).reshape(edges_unordered.shape)
adj = sp.coo_matrix((np.ones(edges.shape[0]), (edges[:, 0], edges[:, 1])),
                    shape=(labels.shape[0], labels.shape[0]), dtype=np.float32)
```

How do you load the data for the Cornell split of the WebKB dataset? The dataset can be seen in the attachment. I hope to get your help, thank you very much!
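Without the attachment one can only guess at the exact files, but assuming the Cornell split of WebKB is stored in the same two-file layout as cora (hypothetically cornell.content and cornell.cites), the main change to the cora loader is that WebKB nodes are identified by page URLs rather than integers, so the id columns must be read as strings. A sketch under those assumptions (encode_onehot is left out and labels are returned as raw strings; edges whose endpoints are missing from the content file are skipped):

```python
import numpy as np
import scipy.sparse as sp


def load_webkb(path="data/cornell/", dataset="cornell"):
    """Load a cora-style .content/.cites pair whose node ids are strings."""
    content = np.genfromtxt(f"{path}{dataset}.content", dtype=str)
    features = sp.csr_matrix(content[:, 1:-1], dtype=np.float32)
    labels = content[:, -1]  # raw class names, e.g. "student", "faculty"
    # Map string node ids (page URLs) to row indices.
    idx_map = {node_id: i for i, node_id in enumerate(content[:, 0])}
    cites = np.genfromtxt(f"{path}{dataset}.cites", dtype=str)
    edges = np.array([(idx_map[a], idx_map[b]) for a, b in cites
                      if a in idx_map and b in idx_map], dtype=np.int32)
    adj = sp.coo_matrix((np.ones(edges.shape[0]), (edges[:, 0], edges[:, 1])),
                        shape=(labels.shape[0], labels.shape[0]), dtype=np.float32)
    return features, labels, adj
```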
Related Issues (20)
- specifying modes train, validation and test HOT 1
- Where is Filter parameters in the code? HOT 1
- Hi, does pandas make the data preprocessing more simple?
- Normalization of features, batch-wise training, feature extraction
- question about the adjacency matrix HOT 2
- citeseer dataset seems doesnot work HOT 2
- Difference between TF and Pytorch version code HOT 5
- In tensor flow code you used early stopping,isn't it needed in pytorch???
- In `utils.py` line 36, wouldnt `adj = adj + (adj.T > adj)` also work? HOT 2
- Invoice node classification / meta-data extraction / single prediction with trained model
- How to do a semi-supervised learning? HOT 6
- Predicting node degree
- Question About fastmode
- Error: 'pybind11' must be installed before running the build.
- transform to other scope dataset
- Why do row normalization instead of column normalization? HOT 2
- About the dataset split HOT 3
- citeseer dataset
- Cora dataset attributes
- accuracy in the experimental results HOT 1