Comments (5)
Hi,
This C++ code applies a CRF to a fully connected graph; the input is not required to be an image.
That said, the way the graph is defined (the function that computes the potentials between the nodes) is specific to images, in the sense that it uses both spatial distance and the channel values.
This can be changed by modifying the function that computes this affinity, compute_kernel, to make the rest of the code applicable to the new type of graph.
Best,
Miguel
from permutohedral_lattice.
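For context, the pairwise potential dense-CRF models use on images is typically a sum of two Gaussian kernels: a bilateral term that mixes spatial distance and channel values, and a purely spatial smoothness term. Below is a minimal sketch of such an affinity in NumPy; the function and parameter names are hypothetical placeholders and this is not the repository's actual compute_kernel:

```python
import numpy as np

def compute_affinity(pos_i, pos_j, feat_i, feat_j,
                     theta_alpha=8.0, theta_beta=0.5, theta_gamma=3.0,
                     w_bilateral=1.0, w_spatial=1.0):
    """Affinity between two graph nodes: a bilateral Gaussian over
    spatial distance and channel (feature) distance, plus a purely
    spatial Gaussian. For a non-image graph, replace these distances
    with whatever similarity makes sense for the new node features."""
    d_pos = np.sum((pos_i - pos_j) ** 2)    # squared spatial distance
    d_feat = np.sum((feat_i - feat_j) ** 2)  # squared channel distance
    bilateral = w_bilateral * np.exp(-d_pos / (2 * theta_alpha ** 2)
                                     - d_feat / (2 * theta_beta ** 2))
    spatial = w_spatial * np.exp(-d_pos / (2 * theta_gamma ** 2))
    return bilateral + spatial

# Two identical nodes get the maximal affinity w_bilateral + w_spatial:
k_same = compute_affinity(np.zeros(2), np.zeros(2), np.zeros(3), np.zeros(3))
# Affinity decays as nodes move apart in space or feature value.
k_far = compute_affinity(np.zeros(2), np.full(2, 10.0),
                         np.zeros(3), np.full(3, 2.0))
```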
Hi Miguel,
Thanks for your reply. I have changed compute_kernel and made it work following your suggestions. However, when I trained my model with CRFasRNN as the top layer, the loss was sometimes computed as "nan", which never happened in the model without the CRFasRNN layer. In addition, the "nan" loss seems to be sensitive to theta_alpha, theta_beta and theta_gamma in my experiments: the larger these hyperparameters, the more likely the "nan" loss. I don't know why.
Do you have any idea to solve this problem?
The training records are as follows:
loss: 0.233059, ACC: 0.946615, mIoU:0.660923
loss: 0.316464, ACC: 0.921094, mIoU:0.596161
loss: nan, ACC: 0.935156, mIoU:0.640983
loss: 0.254759, ACC: 0.936888, mIoU:0.649814
loss: nan, ACC: 0.937070, mIoU:0.648452
loss: 0.278317, ACC: 0.938737, mIoU:0.636103
loss: 0.237098, ACC: 0.947292, mIoU:0.688795
loss: 0.330383, ACC: 0.898424, mIoU:0.620089
loss: 0.229100, ACC: 0.942578, mIoU:0.679973
Best,
Haifeng
It's very sensitive to those hyperparameters. You need to look at how they relate to the new way you are computing the kernel (the affinity between nodes) to see what makes sense. On one side you will get a nan loss; on the other you might find that the layer isn't doing anything.
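One way to see why both failure modes exist: each theta is a Gaussian bandwidth. If it is too small relative to typical distances in your graph, every affinity collapses to zero and the layer does nothing; if it is too large, every pair of nodes looks equally similar and the filtering degenerates. A tiny sketch of the appearance term alone, assuming a typical squared feature distance of 1.0 between nodes (an illustrative value, not taken from the repository):

```python
import numpy as np

def appearance_term(d_feat, theta_beta):
    """Appearance (channel) part of the affinity for a squared
    feature distance d_feat, as a function of the bandwidth."""
    return np.exp(-d_feat / (2 * theta_beta ** 2))

d = 1.0  # assumed typical squared feature distance between nodes
for theta_beta in (0.01, 0.1, 1.0, 10.0):
    print(f"theta_beta={theta_beta:>5}: affinity={appearance_term(d, theta_beta):.6f}")
```

With theta_beta = 0.01 the affinity underflows to essentially zero, while with theta_beta = 10.0 it is nearly 1 for every pair, so a reasonable setting has to sit in between and depends on the scale of your features.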
Hi Miguel,
I have gone over the workflow, and I may have found the reasons for the nan loss.
The first is a gradient explosion in the RNN module (in crf_rnn_layer.py). Because there is no activation function on the convolution operation or the compatibility transform to restrain the gradients, a gradient explosion happens easily with a large number of iterations. So, could the RNN module be changed into an LSTM to overcome the gradient explosion? Or do you have any other idea?
The second is about module.lattice_filter. Could you let me know whether this function can output nan values in some cases, e.g. division by zero?
Kind Regards,
Haifeng
It is not possible to replace the RNN with an LSTM, as that misses the point: the RNN is an unrolling of the mean-field approximation algorithm for CRF inference, so it doesn't make sense to change it. I think module.lattice_filter will keep working regardless of whether the input is nan or infinity, but you can check yourself by feeding tensors directly into this module. The way to address the issue of exploding gradients is to keep the theta hyperparameters at reasonable values. Unfortunately this is hard and possibly application-specific.
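To follow the suggestion of probing the module directly, one can feed it finite, nan and inf tensors and check the outputs with np.isfinite. Since the real module.lattice_filter op is not available here, the sketch below uses a simple normalised box filter as a stand-in; the probing pattern is the same once you swap in the actual op:

```python
import numpy as np

def check_filter(filter_fn, shape=(4, 5)):
    """Feed ordinary, all-nan and all-inf tensors through a filtering
    op and report whether each output is entirely finite."""
    probes = {
        "finite": np.random.rand(*shape).astype(np.float32),
        "with_nan": np.full(shape, np.nan, dtype=np.float32),
        "with_inf": np.full(shape, np.inf, dtype=np.float32),
    }
    return {name: bool(np.isfinite(filter_fn(x)).all())
            for name, x in probes.items()}

# Stand-in for the real op: average each row with its two neighbours.
def box_filter(x):
    return (x + np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0)) / 3.0

print(check_filter(box_filter))
# A finite input stays finite; nan/inf inputs propagate through,
# which is the behaviour to watch for in the real lattice filter.
```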