nshaud / deephyperx
Deep learning toolbox based on PyTorch for hyperspectral data classification.
License: Other
Add extra options for hyperparameters in addition to --lr
Sorry, I'm bothering you again.
I ran into a problem:
1. I trained some models and the results are bad.
2. In the final confusion matrix, a lot of elements appear in the Undefined column.
Confusion matrix :
[[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 568 0 0 0 0 0 0 0 0 1 2 0 0 0 0]
[ 66 0 1 265 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 5 0 0 0 90 0 0 0 0 0 0 0 0 0 0 0 0]
[ 27 0 0 0 0 166 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 292 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 11 0 0 0 0 0 0 0 0 0]
[ 11 0 0 0 0 0 0 0 180 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0]
[ 31 0 0 0 0 0 0 0 0 0 358 0 0 0 0 0 0]
[ 36 0 0 0 0 0 0 0 0 0 0 946 0 0 0 0 0]
[ 14 0 0 0 0 0 0 0 0 0 0 0 223 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 82 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 506 0 0]
[ 50 0 0 0 0 0 0 0 0 0 0 0 0 0 0 105 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 35]]
This does not seem reasonable to me, and I think it drags down the final result.
Is there any way to solve this problem?
Maybe out of scope for this toolbox.
The toolbox currently works under the assumption that the models are supervised. Supporting unsupervised models (e.g. autoencoders) could be helpful.
Add better logging using the Python built-in module: https://docs.python.org/3/howto/logging.html
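A minimal sketch of what this could look like with the standard logging module; the logger name and message format are illustrative, not the toolbox's actual conventions:

```python
import logging

def get_logger(name="deephyperx", level=logging.INFO):
    """Create a named logger with a console handler (illustrative helper)."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid attaching duplicate handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
        )
        logger.addHandler(handler)
    logger.setLevel(level)
    return logger

logger = get_logger()
logger.info("Training started")  # would replace bare print() calls
```

Modules would then call `logging.getLogger("deephyperx")` instead of printing, and verbosity could be controlled from one place.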
Hi,
Anyone know how to define the training and test set?
I have the GT defined in a .mat file and I put the path in, but it comes back with the following error:
python main.py --model nn --dataset Selene1TestX --train_set C:\Users\bbop1\hsi-toolbox-master\DeepHyperX\Datasets\Selene1TrainX\Sub1TargetMapPyTrainX.mat --cuda 0
Setting up a new session...
Image has dimensions 1250x1596 and 134 channels
Traceback (most recent call last):
File "main.py", line 275, in <module>
test_gt[(train_gt > 0)[:w,:h]] = 0
TypeError: '>' not supported between instances of 'dict' and 'int'
I have a feeling the train gt needs to be defined as a dictionary. Has anyone done this?
Cheers,
Bop
Hello,
I tried running the container on both Docker and Singularity; however, I am getting this error "/bin/sh: 0: Can't open start.sh". Is there a way to fix this?
This is the command I ran on Docker:
docker run -p 9999:8097 -ti --rm -v `pwd`:/workspace/DeepHyperX/ registry.gitlab.inria.fr/naudeber/deephyperx:rc2
and on Singularity:
singularity run ./deephyperxtest_latest.sif
Thank you!
Use sklearn.metrics everywhere needed (especially in validation)
Refactor the build_dataset function
Refactor the main function
Refactor the val and test functions
Use the Sequential API for simple models
Use torchvision.transforms (see #33)
Describe the bug
When I run this demo, the resulting confusion matrix is not correct: cm[0,0] always equals zero. So I checked the weights parameter:
'weights': tensor([0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,])
weights[0] is always 0.
The get_model function instantiates the weights:
weights = torch.ones(n_classes)
weights[torch.LongTensor(kwargs["ignored_labels"])] = 0.0
I think this may be caused by ignored_labels=[0], so I replaced it with ignored_labels=[], but that didn't work.
When I delete or comment out weights[torch.LongTensor(kwargs["ignored_labels"])] = 0.0, the initial weights are:
'weights': tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,])
When I replace ignored_labels=[0] with ignored_labels=[2], the weights become:
'weights': tensor([0., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,])
Either way, weights[0] always equals zero.
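For reference, the quoted lines from get_model can be reproduced standalone. With the default ignored_labels=[0] (label 0 marks unlabeled pixels in most of the bundled datasets), weights[0] is zeroed by design, so class 0 never contributes to the loss; the n_classes value below is just an example:

```python
import torch

# Standalone reproduction of the snippet quoted from get_model.
n_classes = 17          # e.g. Indian Pines: 16 classes plus the unlabeled class 0
ignored_labels = [0]    # label 0 is the undefined/unlabeled class

weights = torch.ones(n_classes)
weights[torch.LongTensor(ignored_labels)] = 0.0
# weights is now tensor([0., 1., 1., ..., 1.]): class 0 is masked out of the loss
```

Whether the confusion matrix should still display an Undefined row/column for that class is a separate question from the loss weighting.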
In models.py, there is already a training and testing process. What is the purpose of inference.py?
@nshaud
e.g. similar to https://github.com/rvalavi/blockCV
How do I apply data augmentation?
Current approach to class balancing is to use inverse median frequency loss reweighting.
Other options could be:
Resample the dataset (e.g. upsample minority classes or downsample majority classes)
torchvision defines Transforms objects to apply data augmentation and other transformations to data.
We could and should define our own custom Transforms.
Pros:
Cons:
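A custom Transform for hyperspectral patches could look like the sketch below. The (bands, height, width) layout, the flip axis, and the calling convention are assumptions for illustration, not the toolbox's actual API:

```python
import numpy as np

class RandomHorizontalFlip:
    """Flip a (bands, height, width) hyperspectral patch left-right with probability p."""

    def __init__(self, p=0.5):
        self.p = p

    def __call__(self, patch):
        if np.random.rand() < self.p:
            patch = patch[:, :, ::-1].copy()  # flip along the width axis
        return patch

# Usage: behaves like a torchvision transform and could be chained in a Compose.
transform = RandomHorizontalFlip(p=1.0)
patch = np.arange(2 * 3 * 3).reshape(2, 3, 3)
flipped = transform(patch)
```

Because such transforms are plain callables, they compose with torchvision.transforms.Compose without depending on PIL.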
I'm trying to train a model with multiple images at once.
It seems I need to change the DATASETS_CONFIG.
If anyone has any insight into how to do this, your thoughts would be much appreciated.
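Adding an entry per image to DATASETS_CONFIG might look like the sketch below; the key names ('img', 'gt', 'folder') are an assumption based on the bundled datasets and should be checked against the actual dictionary in datasets.py:

```python
# Hypothetical extra entry for a custom scene; verify the key names against
# the real DATASETS_CONFIG in datasets.py before relying on them.
CUSTOM_DATASETS_CONFIG = {
    "MyScene": {
        "img": "my_scene.mat",      # hyperspectral cube (.mat file)
        "gt": "my_scene_gt.mat",    # ground-truth label map
        "folder": "MyScene/",       # subfolder under the Datasets/ directory
    }
}
# DATASETS_CONFIG.update(CUSTOM_DATASETS_CONFIG) would then register it,
# with one such entry per image you want to train on.
```

Training on several images at once would still require merging their samples at the Dataset level, since each config entry describes a single cube.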
Traceback (most recent call last):
File "main.py", line 312, in <module>
display=viz)
File "C:\Users\yang6\PycharmProjects\DeepHyperX\models.py", line 1059, in train
save_model(net, camel_to_snake(str(net.__class__.__name__)), data_loader.dataset.name, epoch=e, metric=abs(metric))
File "C:\Users\yang6\PycharmProjects\DeepHyperX\models.py", line 1068, in save_model
torch.save(model.state_dict(), model_dir + filename + '.pth')
File "C:\Users\yang6\Anaconda3\envs\DeepHyperX\lib\site-packages\torch\serialization.py", line 260, in save
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "C:\Users\yang6\Anaconda3\envs\DeepHyperX\lib\site-packages\torch\serialization.py", line 183, in _with_file_like
f = open(f, mode)
OSError: [Errno 22] Invalid argument: './checkpoints/hamida_et_al/PaviaU/2020-07-03 10:52:46.094626_epoch2_0.78.pth'
I am very confused about this error. How to solve it?
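The OSError comes from the checkpoint filename: '2020-07-03 10:52:46.094626_epoch2_0.78.pth' contains colons, which Windows does not allow in filenames. One plausible workaround, sketched below rather than taken from the toolbox's actual save_model code, is to format the timestamp without colons before building the path:

```python
import datetime

def checkpoint_name(epoch, metric):
    """Build a Windows-safe checkpoint filename (no ':' characters)."""
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return "{}_epoch{}_{:.2f}.pth".format(stamp, epoch, metric)

# produces names like '2020-07-03_10-52-46_epoch2_0.78.pth'
```

On Linux the original name works, which is why the bug only shows up on Windows.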
It would be nice to make hypercube inference faster by deploying the model in parallel across 2+ GPUs. Any options?
Migrate visualization from Visdom to Tensorboard
Hi, thank you very much for such a great tool.
I was trying to grab a Docker image, but it seems that it doesn't exist. Would you please upload it once again so I could pull it?
(base) alexanch@alexanch:~/docker$ docker pull registry.gitlab.inria.fr/naudeber/deephyperx:preview
Error response from daemon: manifest for registry.gitlab.inria.fr/naudeber/deephyperx:preview not found
Describe the bug
I am encountering an issue with the prediction versus test_gt (border labels are eliminated) for the 'Sharma' and 'Chen' models in my project. The predictions seem to be inconsistent or incorrect compared to the ground-truth labels.
Expected behavior
I expect the model to generate a full prediction image that includes the border labels, accurately capturing the entire image.
I would like to know how you configured these files. Thank you very much
Can we set the number of training samples (e.g. 500 samples per class) in this code?
I am interested in HSI classification using GANs.
Since some models have BN and dropout layers, why not use model.eval() in the val function?
I am really confused and hope someone can help me. Thanks in advance.
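For reference, the standard PyTorch pattern switches modes explicitly around validation; whether the toolbox's val function should do the same is exactly the question above. The tiny model here is just a stand-in:

```python
import torch
import torch.nn as nn

# Placeholder model with the layers the question mentions (BN and dropout).
model = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4), nn.Dropout(0.5))

model.eval()              # freezes BatchNorm statistics and disables Dropout
with torch.no_grad():     # skip gradient bookkeeping during validation
    out = model(torch.randn(8, 4))
model.train()             # restore training mode afterwards
```

Without model.eval(), BatchNorm keeps updating its running statistics on validation batches and Dropout keeps zeroing activations, which makes validation scores noisier.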
Hi!
I was using this framework (thank you), but I cannot understand if the inverse median frequency weights are called at all? For me it seems they are created after constructing the loss function and thus never passed anywhere. Is this assumption wrong on my part? Where do they come into play? I assumed the weights would have been passed to the get_model function through the hyperparameters, then used to construct the loss function and not be overwritten by the new initialization of the weight vector. Thank you!
The current spatially disjoint train/test split divides the image in 2 for each class. However there might be spatial correlations between the pixels for those regions and this approach is not well-suited to repeated runs or cross-validation anyway. A more robust way to perform a spatially disjoint split is to extract random blocks (i.e. windows larger than the model's patch size) with constraints on class percentages.
See e.g. BlockCV that does this kind of thing in R.
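A minimal version of such a block split could look like the sketch below; the block size and the random assignment rule are illustrative, and the class-percentage constraints mentioned above would be enforced on top (e.g. by resampling the assignment until each class reaches a minimum share in both splits):

```python
import numpy as np

def random_block_split(gt, block_size=32, train_ratio=0.5, seed=0):
    """Assign whole non-overlapping blocks of a GT map to train or test.

    Returns two boolean masks with gt's shape; they are spatially disjoint
    by construction, since each block goes entirely to one side.
    """
    rng = np.random.RandomState(seed)
    h, w = gt.shape
    train_mask = np.zeros_like(gt, dtype=bool)
    test_mask = np.zeros_like(gt, dtype=bool)
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = (slice(i, i + block_size), slice(j, j + block_size))
            if rng.rand() < train_ratio:
                train_mask[block] = True
            else:
                test_mask[block] = True
    return train_mask, test_mask
```

Choosing block_size larger than the model's patch size keeps training and test receptive fields from overlapping, which is the point of the split.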
Can I run my own data with the 3D-CNN?
Please teach me.
Hello,
Can anyone teach me how to use the disjoint mode correctly, please?
I use the command below, but the code cannot run.
!python main.py --model SVM --dataset IndianPines --sampling_mode disjoint --cuda 0
I also tried to find if there were training and testing split samples on the GRSS DASE Website to download. However, there are only testing samples to download.
Here is the environment I used:
An attribute of input raises an error when summary in main.py runs.
When I run 'python main.py --model conv3d --dataset IndianPines --training_sample 0.05 --cuda 0', I see the error 'AttributeError: 'builtin_function_or_method' object has no attribute 'size''. The failing code is: summary(model.to(hyperparams["device"]), input.size()[1:])
The conv3d model was added by myself.
Hi, many thanks for your great work. I have a question about the "sharma" model.
I can't understand the input "x" of the network:
why x = torch.zeros((1, 1, self.input_channels, self.patch_size, self.patch_size))?
And what is b, t, c, w, h = x.size()?
I am looking forward to your reply, thanks!
Hello, I recently ran into a question: I want to omit some label_values in my training set.
Please tell me how to set ignored_labels.
I suffered a loading failure before I checked the name of the data. The code now runs after changing:
#img = open_file(folder + 'Salinas.mat')['Salinas_corrected']
img = open_file(folder + 'Salinas_corrected.mat')['salinas_corrected']
#gt = open_file(folder + 'Salinas_gt.mat')['Salinas_gt']
gt = open_file(folder + 'Salinas_gt.mat')['salinas_gt']
Hi, I have tried to run your code with "hu" model and sampling_mode "disjoint". However, I got a worse result than the result in Table 3 of your paper. Is there any specific setting for the result of Table 3? Thank you very much.
Currently the dataset is normalized into (0, 1). This is mostly fine, but we should be able to use other normalizations, or even no normalization at all.
TODO:
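One way this could look is a small dispatch function; the method names and the dispatch itself are assumptions for illustration, not existing toolbox options:

```python
import numpy as np

def normalize(img, method="minmax"):
    """Normalize a hyperspectral cube; the method names here are illustrative."""
    img = np.asarray(img, dtype="float32")
    if method == "minmax":       # current behaviour: scale into [0, 1]
        return (img - img.min()) / (img.max() - img.min())
    elif method == "standard":   # zero mean, unit variance
        return (img - img.mean()) / img.std()
    elif method == "none":       # leave the raw radiance/reflectance untouched
        return img
    raise ValueError("unknown normalization: {}".format(method))
```

A --normalization command-line flag could then map straight onto the method argument.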
Thanks for your great work!
I ran the code with "--run 10", but the results of each run fluctuate wildly.
I adjusted the lr, batch size, etc., but it didn't help.
Could you give me some advice? thank you!
Hi,
Thanks for your great work. Recently I have been trying to reproduce the experiments from the paper, but I found there are no exact experimental settings in the experiments section. I wonder if you could give me some suggestions on the initialization of the parameters, such as the learning_rate, training_sample, etc.
Here is the result of the command python main.py --model hamida --dataset IndianPines --training_sample 0.1 --cuda 0.
I found its accuracy has a margin compared with the result in the paper.
Looking forward to your reply. Thanks for your consideration.
Hi, when I run 'python main.py --model hu --dataset IndianPines --training_sample 0.1 --cuda 0', the loss is over 2.
Changing the training_sample doesn't help; the loss is still over 1.
How can I solve it?
Would you please give me the best train parameters?
Thank you very much.
Currently, the torch DataLoader uses blocking data loading. Although loading is very fast (we store the NumPy arrays in-memory), transfer to the GPU and data augmentation (which is done on the CPU) can slow things down.
Using workers > 0 would make data loading asynchronous and workers > 1 could increase speed somewhat.
TODO:
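The change described above could be sketched as follows; the dataset and batch size are placeholders, and pin_memory is an additional DataLoader knob (beyond workers) that speeds up host-to-GPU copies:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder in-memory dataset standing in for the hyperspectral patches.
dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 5, (100,)))

# num_workers > 0 loads batches asynchronously in background processes;
# pin_memory=True allocates batches in page-locked memory for faster GPU transfer.
loader = DataLoader(dataset, batch_size=16, shuffle=True,
                    num_workers=2, pin_memory=True)
```

Since the arrays already live in memory, the main win here is overlapping CPU-side augmentation with GPU compute rather than faster disk reads.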
Network :
Traceback (most recent call last):
File "main.py", line 301, in <module>
summary(model.to(hyperparams['device']), input.size()[1:], device=hyperparams['device'])
File "/home/xj/anaconda3/lib/python3.6/site-packages/torchsummary/torchsummary.py", line 44, in summary
device = device.lower()
AttributeError: 'torch.device' object has no attribute 'lower'
Hi,
I am testing Li's 3D CNN network on the IndianPines dataset. In Table III, disjoint mode, you report an accuracy of 75%; I can't reach more than 65%.
Thank you :)
I had trouble doing pip install torch using the requirements.txt; it seems to be a Windows issue (pytorch/pytorch#29395).
You might want to make a note of this? Thank you!