gmum / geo-gcn
The official implementation of the SGCN architecture.
License: MIT License
I tried to reproduce the accuracies from your paper, but running the vanilla model straight from the repo (without any tweaks), it doesn't look like it's learning:
Epoch: 001, Loss: 6.97393, Train Acc: 0.10218, Test Acc: 0.10100
Epoch: 002, Loss: 2.30191, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 003, Loss: 2.30144, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 004, Loss: 2.30132, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 005, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 006, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 007, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 008, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 009, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 010, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 011, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 012, Loss: 2.30125, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 013, Loss: 2.30821, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 014, Loss: 2.30259, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 015, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 016, Loss: 2.30351, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 017, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 018, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 019, Loss: 2.30125, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 020, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 021, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 022, Loss: 2.30125, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 023, Loss: 2.30119, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 024, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 025, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 026, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 027, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 028, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 029, Loss: 2.30232, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 030, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 031, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 032, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 033, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 034, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 035, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 036, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 037, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
I tried with and without data augmentation and varied the learning rate as well, but had no success in fixing it.
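For what it's worth, the plateau value itself is telling: a 10-class cross-entropy loss stuck near 2.3012 is essentially ln(10), which is what a model that always predicts a uniform distribution over the classes produces. A quick sanity check (plain Python, not from the repo):

```python
import math

# Cross-entropy of a uniform prediction over 10 classes: -ln(1/10) = ln(10).
uniform_loss = -math.log(1 / 10)
print(round(uniform_loss, 5))  # 2.30259
```

So the model is not slowly learning; it has collapsed to uniform predictions, which usually points at an optimization problem (e.g. learning rate or initialization) rather than the data.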
Hi, I am really interested in your method. I am attempting to apply it to some chemical data of my own.
In your paper, you mentioned that you used this method on regression problems for the ESOL and FreeSolv datasets. Do you have the code for that posted somewhere? There are multiple places in your repository that assume a classification problem.
Thank you!
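In case it helps others asking the same thing: adapting a classification pipeline like this one to regression usually means having the output head emit a single value per graph (no log_softmax) and swapping the NLL loss for MSE. A minimal sketch in plain PyTorch (the tensors are made-up values, not the authors' code):

```python
import torch
import torch.nn.functional as F

# For regression the model emits one value per graph, and the loss
# becomes mean-squared error instead of F.nll_loss.
pred = torch.tensor([0.5, 1.5])    # hypothetical model outputs, one per graph
target = torch.tensor([1.0, 1.0])  # hypothetical regression targets
loss = F.mse_loss(pred, target)
print(loss.item())  # 0.25
```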
To reproduce the work on chemical data:
Can you share an example of the data preparation?
What does your data-preparation dataloader look like?
BR,
Guillaume
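Also interested in this. As far as I can tell, each sample needs the standard PyTorch Geometric fields: node features `x`, spatial coordinates `pos`, connectivity `edge_index`, and a label `y`. A hand-made toy molecule as a sketch (tensor contents are made up; this is not the authors' preprocessing):

```python
import torch

x = torch.tensor([[6.0], [8.0], [1.0]])      # toy node features (e.g. atomic numbers)
pos = torch.tensor([[0.0, 0.0],
                    [1.2, 0.0],
                    [-1.0, 0.3]])            # toy 2D atom coordinates
edge_index = torch.tensor([[0, 1, 0, 2],
                           [1, 0, 2, 0]])    # bonds, stored in both directions
y = torch.tensor([0])                        # graph-level label

assert x.size(0) == pos.size(0)              # one feature row and one position per node
```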
Hi, thank you very much for the package.
Could you comment on the running times? I cloned the repo and ran train_models on MNIST; it loads the dataset and starts training, but (1) training takes forever (30 CPU cores, no GPU available) and (2) it doesn't seem to improve after a few epochs:
$ python train_models.py MNISTSuperpixels
True
Epoch: 001, Loss: 7.64400, Train Acc: 0.10218, Test Acc: 0.10100
Epoch: 002, Loss: 2.30219, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 003, Loss: 2.30138, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 004, Loss: 2.30126, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 005, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 006, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 007, Loss: 2.30123, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 008, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 009, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Epoch: 010, Loss: 2.30124, Train Acc: 0.11237, Test Acc: 0.11350
Training 10 epochs took about 2 days. Any help is much appreciated.
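On the running time: the loop runs wherever the model and data are placed, so on a CPU-only machine a slowdown of this scale is unfortunately expected for a dataset of this size. If a GPU becomes available, the standard PyTorch device-selection idiom is all that's needed (a generic sketch, not code from this repo):

```python
import torch

# Pick the GPU when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The model and every batch must then be moved to the same device, e.g.:
#   model = model.to(device)
#   data = data.to(device)
print(device.type)
```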
Hello!
I tried to run the code on chemical datasets, but there are always dimension problems in the SpatialGraphConv layer. For instance, when I run the 'freesolv' dataset, the problem occurs at the line `aggr_out = self.lin_out(aggr_out)`:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (597x1600 and 64x64)
Did you come across a similar problem, and do you have any idea how to solve it? Note that I used the default parameters only and changed train_models with the lines you suggested in the README.
By the way, I also wish to pass the edge_attr feature into the conv layer. Could you point out where I can do so?
Thanks a lot. I'm trying to apply this to my current work :)
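Regarding the shape error itself, in case it helps while waiting for an answer: `mat1 and mat2 shapes cannot be multiplied (597x1600 and 64x64)` means the aggregated message has width 1600 while `lin_out` was constructed with `in_features=64`. `nn.Linear` requires the input's last dimension to equal `in_features`, so the layer size has to match the aggregation output. A minimal illustration (shapes borrowed from the error message; the layers here are stand-ins, not the repo's):

```python
import torch
import torch.nn as nn

agg = torch.randn(597, 1600)   # aggregated node messages, as in the error
lin_bad = nn.Linear(64, 64)    # in_features=64 cannot consume width-1600 input
lin_ok = nn.Linear(1600, 64)   # in_features must equal the input width

out = lin_ok(agg)
print(tuple(out.shape))  # (597, 64)
```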
Is there any way that I can replace nn.Conv2d with the SGCN layer?
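Not the authors, but the usual prerequisite for swapping `nn.Conv2d` for a spatial graph convolution is representing the image as a grid graph: pixel coordinates become `pos` and 4-neighbour links become `edge_index`. A small sketch for a 3x3 image (my own construction, not from this repo):

```python
import torch

H, W = 3, 3
# Pixel coordinates serve as the spatial positions `pos`.
pos = torch.tensor([[i, j] for i in range(H) for j in range(W)], dtype=torch.float)

# 4-neighbour connectivity, stored in both directions.
edges = []
for i in range(H):
    for j in range(W):
        u = i * W + j
        if j + 1 < W:
            edges += [(u, u + 1), (u + 1, u)]   # right neighbour
        if i + 1 < H:
            edges += [(u, u + W), (u + W, u)]   # bottom neighbour
edge_index = torch.tensor(edges, dtype=torch.long).t()
```

The pixel intensities then become the node features `x`, after which the SGCN layer can consume `(x, pos, edge_index)` in place of a 2D convolution over the image.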
Hello!
I'm trying to run the MNIST example, but running into the following error. Any insights?
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in
     82
     83 for epoch in range(1, 20):
---> 84     loss = train(epoch)
     85     train_acc = test(train_loader)
     86     test_acc = test(test_loader)

in train(epoch)
     59     # data = rotation_2(data)
     60     print(data.edge_index)
---> 61     output = model(data)
     62     loss = F.nll_loss(output, data.y)
     63     loss.backward()

/opt/tljh/user/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530         result = self._slow_forward(*input, **kwargs)
    531     else:
--> 532         result = self.forward(*input, **kwargs)
    533     for hook in self._forward_hooks.values():
    534         hook_result = hook(self, input, result)

in forward(self, data)
     24 def forward(self, data):
     25     for i in range(self.layers_num):
---> 26         data.x = self.conv_layers[i](data.x, data.pos, data.edge_index)
     27
     28     if self.use_cluster_pooling:

/opt/tljh/user/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530         result = self._slow_forward(*input, **kwargs)
    531     else:
--> 532         result = self.forward(*input, **kwargs)
    533     for hook in self._forward_hooks.values():
    534         hook_result = hook(self, input, result)

in forward(self, x, pos, edge_index)
     22     edge_index = add_self_loops(edge_index, num_nodes=x.size(0))  # num_edges = num_edges + num_nodes
     23
---> 24     return self.propagate(edge_index=edge_index, x=x, pos=pos, aggr='add')  # [N, out_channels, label_dim]
     25
     26 def message(self, pos_i, pos_j, x_j):

~/.local/lib/python3.6/site-packages/torch_geometric/nn/conv/message_passing.py in propagate(self, edge_index, size, **kwargs)
    165     assert len(size) == 2
    166
--> 167     kwargs = self.collect(edge_index, size, kwargs)
    168
    169     msg_kwargs = self.distribute(self.msg_params, kwargs)

~/.local/lib/python3.6/site-packages/torch_geometric/nn/conv/message_passing.py in collect(self, edge_index, size, kwargs)
    113
    114     self.set_size(size, idx, data)
--> 115     out[arg] = data.index_select(self.node_dim, edge_index[idx])
    116
    117     size[0] = size[1] if size[0] is None else size[0]

TypeError: index_select() received an invalid combination of arguments - got (int, NoneType), but expected one of:
```
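The `got (int, NoneType)` failure looks like a torch_geometric version mismatch: in later versions `torch_geometric.utils.add_self_loops` returns a tuple `(edge_index, edge_attr)`, so assigning its result directly to `edge_index` (as the layer's forward does above) passes a tuple onward and a `None` eventually reaches `index_select` inside `propagate`. The likely fix is to unpack, e.g. `edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))`. A dependency-free illustration of the same mistake (the stub only mimics the library's return shape; it is not the real function):

```python
def add_self_loops_stub(edge_index, num_nodes=None):
    # Mimics newer torch_geometric: returns a (edge_index, edge_attr) tuple.
    return edge_index, None

edge_index = [[0, 1], [1, 0]]

wrong = add_self_loops_stub(edge_index)     # a tuple, not an edge list
fixed, _ = add_self_loops_stub(edge_index)  # correct unpacking
```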