Comments (5)
Hi Thomas,
Thanks for your excellent work. However, I am still confused about the difference between the TF and PyTorch versions of the code. As you have mentioned before, there are two major differences: first, the PyTorch version does not apply dropout before the first layer, and second, the PyTorch version normalizes the adjacency matrix in a different way.
I changed your PyTorch code to the following form:
r_inv = np.power(rowsum, -0.5).flatten()
mx = mx.dot(r_mat_inv).transpose().dot(r_mat_inv)
and added a dropout layer in the forward function:
x = F.dropout(x, self.dropout, training=self.training)
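Putting the two fragments above together, the symmetric normalization they aim at could look like the following minimal sketch (the function name `normalize_adj_sym` is hypothetical, and a symmetric scipy sparse input is assumed):

```python
import numpy as np
import scipy.sparse as sp

def normalize_adj_sym(mx):
    """Symmetrically normalize a sparse matrix: D^{-1/2} A D^{-1/2}."""
    rowsum = np.array(mx.sum(1))
    r_inv = np.power(rowsum, -0.5).flatten()
    r_inv[np.isinf(r_inv)] = 0.          # guard against zero-degree rows
    r_mat_inv = sp.diags(r_inv)
    # For a symmetric mx, this equals D^{-1/2} A D^{-1/2}
    return mx.dot(r_mat_inv).transpose().dot(r_mat_inv)
```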
However, the experimental results still look quite different. Did I miss something important?
Thanks for your time!
from pygcn.
Maybe you're using different dataset splits? Note that the default dataset loaders are different in both repositories (which, in hindsight, was an unfortunate choice).
Thanks for your reply! In fact, I noticed that your default split functions are different. I actually use the split reported in the FastGCN paper (https://arxiv.org/abs/1801.10247): for the Cora dataset, the first 1208 samples are used for training and the last 1000 for testing. With that split, however, the TF version reaches 0.86 while the PyTorch version only reaches 0.82.
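For concreteness, that split can be sketched as plain index ranges over Cora's 2708 nodes (the 500-node validation range in the middle is my assumption; only the train and test ranges are stated above):

```python
import numpy as np

# FastGCN-style split on Cora (2708 nodes):
# first 1208 nodes train, last 1000 test.
num_nodes = 2708
idx_train = np.arange(0, 1208)
idx_val = np.arange(1208, 1708)   # assumed validation range
idx_test = np.arange(1708, 2708)
```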
Hi! Thanks for your reply. Do you mean the ordering of the two data files is different? I tried printing out the feature matrices and found that they differ.
Hi Thomas! Thanks for your reply! In fact, the problem was simply that the ordering of the two data files is different. When I use the data file from your TF version, the final result comes out the same as the TF version. Besides, performance improves further if I use the adjacency matrix preprocessing described in your paper, which acts like a kind of hyperparameter tuning.
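The preprocessing referred to here is presumably the renormalization trick from the GCN paper, i.e. adding self-loops before symmetric normalization. A minimal sketch with scipy (function name hypothetical):

```python
import numpy as np
import scipy.sparse as sp

def renormalize(adj):
    """Renormalization trick: A_hat = D~^{-1/2} (A + I) D~^{-1/2},
    where D~ is the degree matrix of A + I."""
    adj = adj + sp.eye(adj.shape[0])          # add self-loops
    deg = np.array(adj.sum(1)).flatten()
    d_inv_sqrt = np.power(deg, -0.5)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.     # guard against isolated nodes
    d_mat = sp.diags(d_inv_sqrt)
    return d_mat.dot(adj).dot(d_mat)
```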