Comments (5)
Hi, it is for graph classification, which is analogous to image classification. But in principle it can also be used for node classification, which would make it equivalent to the semantic segmentation problem.
from graph_attention_pool.
To get the mn and sd values I used this function to go over the entire dataset and collect global statistics: https://github.com/bknyaz/graph_attention_pool/blob/master/utils.py#L188. Note that I'm using sd[sd < 1e-2] = 1 to avoid dividing by small numbers and blowing up the node features.
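A minimal sketch of this idea (not the repo's actual code): concatenate the node features of every graph in the training set, then take per-feature mean and std, clamping tiny standard deviations as described. The function name and input layout are assumptions for illustration.

```python
import numpy as np

def global_stats(dataset, eps=1e-2):
    # `dataset` is assumed to be an iterable of (num_nodes, num_features)
    # arrays, one per graph; stack all nodes and compute per-feature stats.
    feats = np.concatenate([np.asarray(g) for g in dataset], axis=0)
    mn = feats.mean(axis=0)
    sd = feats.std(axis=0)
    sd[sd < eps] = 1  # avoid dividing by tiny std and blowing up features
    return mn, sd
```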
Then during training and testing I use this mn and sd as here: https://github.com/bknyaz/graph_attention_pool/blob/master/train_test.py#L113
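A sketch of that normalization step, using NumPy broadcasting in place of the repo's torch tensors (the values and shapes below are illustrative, not the saved statistics):

```python
import numpy as np

# mn/sd stand in for the saved global statistics, reshaped so they
# broadcast over a batch of node features of shape (B, N, F).
mn = np.array([0.11, 0.44, 0.44]).reshape(1, 1, -1)
sd = np.array([0.27, 0.30, 0.30]).reshape(1, 1, -1)

x = np.random.rand(2, 75, 3)   # e.g. 2 graphs, 75 superpixels, 3 features
x_norm = (x - mn) / sd         # broadcasts over batch and node dimensions
```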
So there is no need to calculate statistics for each image. The steps are:
- compute global statistics mn, sd (this is done once before training)
- during training/testing, for each node (superpixel) with features x (sp_coord, sp_intensity) compute (x - mn) / sd
But you can try computing the statistics mn, sd for each image instead of using global ones. It can result in better performance in some cases; I haven't tried this in this project.
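The per-image alternative can be sketched as follows (a hedged sketch, not code from the repo; the function name is mine): compute mn and sd from each graph's own nodes, with the same clamp on tiny standard deviations.

```python
import numpy as np

def normalize_per_graph(x, eps=1e-2):
    # x: (num_nodes, num_features) node features for one graph;
    # per-graph statistics instead of global training-set statistics.
    x = np.asarray(x, dtype=float)
    mn = x.mean(axis=0, keepdims=True)
    sd = x.std(axis=0, keepdims=True)
    sd[sd < eps] = 1  # same clamp as used for the global statistics
    return (x - mn) / sd
```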
Sorry I didn't notice this issue. Are you still having this problem?
To create mnist_75sp_train I use the script extract_superpixels.py: python extract_superpixels.py -s train -t 4.
Let me know if you need further help.
@bknyaz
Hi, I wonder whether the superpixel dataset preparation is intended for node classification or graph classification?
@bknyaz
Thanks. I am preparing my own data with the superpixel scheme, following the repo homepage instructions.
I notice these lines:
# mean and std computed for superpixel features in the training set
mn = torch.tensor([0.11225057, 0.11225057, 0.11225057, 0.44206527, 0.43950436]).view(1, 1, -1)
sd = torch.tensor([0.2721889, 0.2721889, 0.2721889, 0.2987583, 0.30080357]).view(1, 1, -1)
How do I calculate the mean and std for the node features? From which step should I start to work these out?
While sp_intensity and sp_coord have been calculated for each single image, for the whole dataset of shape (B, C, W, H) I calculated the mean and std via np.mean/np.std(data, axis=(0, 2, 3)), which gave:
[0.47861066 0.49431583 0.43597046] / [0.21912035 0.22233568 0.2661024]
Can I use these results for sp_coord and sp_intensity directly, or should I re-calculate the statistics for each image's superpixel features?
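The dataset-wide computation described above can be sketched as follows (the data shape is illustrative). Note these are per-channel pixel statistics; they are not the same as statistics over the superpixel features (sp_coord, sp_intensity) that are normalized in the repo:

```python
import numpy as np

# data: (B, C, W, H); averaging over the batch and spatial dimensions
# yields one mean and one std per channel, i.e. pixel-level statistics.
data = np.random.rand(8, 3, 28, 28)   # illustrative batch of RGB images
mean = data.mean(axis=(0, 2, 3))      # shape (3,), one value per channel
std = data.std(axis=(0, 2, 3))        # shape (3,)
```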