
ffcc's Introduction

Fast Fourier Color Constancy Matlab Toolbox

The Fast Fourier Color Constancy (FFCC) Matlab Toolbox includes the following functionalities:

  • Tune() - Cross-validation and parameter tuning.
  • Train() - Training.
  • Visualize() - Visualizing cross-validation or training/test performance.

This code depends on the "minFunc" library from Mark Schmidt: https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html. Either add it to the Matlab path manually or place it inside the root /ffcc/ directory.
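For example, a minimal setup sketch, assuming minFunc has been unpacked into the ffcc root and Matlab's working directory is /ffcc/:

% Add minFunc (and its subdirectories) to the Matlab path.
addpath(genpath(fullfile(pwd, 'minFunc')));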

Training & Cross Validation

The following section discusses training and cross validation.

Preparation and Folder Structure

The training folder structure should look similar to the following:

- train_folder/
            + folder_1/
            - folder_2/
                      | 000001.png
                      | 000001.txt
                      ...

The script will recurse through the subfolders and look for *.png and *.txt files, which correspond to the linear thumbnails and the colors of the global illuminants. This data has been provided for the Gehler-Shi dataset, alongside a script for generating this data "from scratch" (see /ffcc/data/).
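As an illustration, here is a hypothetical sketch of how such paired files might be enumerated (the toolbox's own data-loading code is authoritative; the path 'train_folder' is a placeholder):

% Recursively find thumbnails and load each paired illuminant file.
dirents = dir(fullfile('train_folder', '**', '*.png'));  % '**' requires R2016b+
for i = 1:numel(dirents)
  png_path = fullfile(dirents(i).folder, dirents(i).name);
  txt_path = regexprep(png_path, '\.png$', '.txt');
  illuminant = load(txt_path);  % 3x1 RGB color of the global illuminant
end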

Linear Thumbnail

The PNG file is typically a low-resolution thumbnail containing linear data with the black level removed. Linearity is very important, since it is this assumption that enables shift invariance. The PNG file must match the format of your imaging pipeline's spec.

Global Illuminant

The text file describes the color of the global illuminant. It looks like this:

0.500249
0.819743
0.278876

The conversion between white-balance gains and the color of the illuminant is as follows:

$$ L = \frac{z}{\lVert z \rVert}, \qquad z = \left[ \frac{1}{R_{gain}},\ \frac{1}{G_{gain}},\ \frac{1}{B_{gain}} \right] $$
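For example, a minimal sketch of this conversion, using hypothetical gains chosen to roughly reproduce the sample illuminant above:

% Convert white-balance gains to a unit-norm illuminant color.
gains = [1.999, 1.220, 3.586];  % hypothetical [R_gain, G_gain, B_gain]
z = 1 ./ gains;
L = z / norm(z)  % ~[0.5003, 0.8197, 0.2789], matching the sample above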

Project Folder

To allow the FFCC Toolbox to support a wide range of projects, we separate the core algorithms from the project-specific implementations, which are placed under the projects/ folder.

The following scripts all take as input the string of some project name, which must correspond exactly to a folder in projects/, and which must prefix all filenames in that project's subfolder.
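For example, assuming a project folder projects/GehlerShi/ whose files are prefixed with GehlerShi (e.g. GehlerShiHyperparams.m), the calls would look like:

Tune('GehlerShi');       % cross-validation and hyperparameter search
Train('GehlerShi');      % train a final model and write it to disk
Visualize('GehlerShi');  % render per-image visualizations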

Tuning and cross-validation

Tune() performs coordinate descent over hyperparameters, and can also be used to sanity-check the current hyperparameter set as a diagnostic. A side effect of tuning/cross-validation is that it will produce a set of error metrics, of the following form:

angular_err.mean = 2.0749
angular_err.median = 1.1279
angular_err.tri = 1.3221
angular_err.b25 = 0.2924
angular_err.w25 = 5.4809
angular_err.q95 = 7.3257
vonmises_nll.mean = -2.3466
vonmises_nll.median = -3.2563
vonmises_nll.tri = -3.2106
vonmises_nll.b25 = -4.6225
vonmises_nll.w25 = 1.6908
vonmises_nll.q95 = 1.8778

The first element of each printed label is a kind of error metric, and the second element is a summary statistic of that error metric. Please see CrossValidate.m and ErrorMetrics.m for detailed descriptions. An angular_err below 2 degrees is generally not perceptible, and vonmises_nll is measured in negative nats for an individual training point. If the error is more than 2 degrees, you might consider:

  • Allow Tune() to keep brute-force searching for better parameter settings. Better parameters need to be manually copy-pasted into the *Hyperparams.m file to be used in later training and tuning, as described in Tune(). Tuning may be slow, so consider running it overnight or for several days.
  • Augment and diversify your dataset. Make sure your data equally represents different lighting conditions, rather than heavily favoring one particular group.

Further details on tuning can be found in Tune.m and CrossValidate.m.
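For reference, the angular error reported above is the angle between the estimated and ground-truth illuminants treated as 3-vectors. A minimal sketch (the toolbox's own implementation is in ErrorMetrics.m):

% Angular error in degrees between two illuminant colors (3-vectors).
% The min/max clamp guards against round-off pushing the cosine outside [-1, 1].
angular_err = @(L_est, L_true) acosd(min(1, max(-1, ...
    dot(L_est(:), L_true(:)) / (norm(L_est) * norm(L_true)))));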

Training

Once Tune() is producing reasonable cross-validation results, use Train() to train a final model which will be written to disk. See Train() for details.

Visualization

Visualize() provides visualization and testing functionality. Given a project name, it will do all necessary training (or cross-validation) and produce a visualization for each image. If the project uses cross-validation, then each image is evaluated using the model that did not include that image in its training fold. If the project uses training/test splits, then all images are evaluated using the model trained on the training set. If params.TRAINING.DUMP_EXHAUSTIVE_VISUALIZATION is set to true, this script will dump an extensive set of images and files describing the output of the model, which can be useful for debugging. See CrossValidate() for details.
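For example, a hypothetical one-line snippet enabling that dump in a project's parameter setup (exactly where this is set depends on the project):

params.TRAINING.DUMP_EXHAUSTIVE_VISUALIZATION = true;  % dump debug images/files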

ffcc's People

Contributors

abadams, fbleibel-g, jonbarron, mahmoudnafifi, yuntatsai


ffcc's Issues

How should one understand the conv operations in the function "RenderHistogramGaussian"?

Hi @jonbarron, thanks for sharing the FFCC code. I can't understand the meaning of the conv operations in the function RenderHistogramGaussian. The code is as follows:

% Threshold the mahalanobis distance at 3 to make a binary mask, which is
% dilated to produce an ellipse with a dot in the center.
mask = mahal_dist <= 3;
prediction = (conv2(double(mask), ones(3,3), 'same') > 0) & ~mask;
prediction = prediction | (conv2(double(mahal_dist == min(mahal_dist(:))), [0, 1, 0; 1, 1, 1; 0, 1, 0], 'same') > 0);
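% (For intuition: conv2(double(mask), ones(3,3), 'same') > 0 is a binary
% dilation of the mask by a 3x3 square; AND-ing with ~mask keeps only the
% one-pixel ring just outside the ellipse, and the cross-shaped kernel
% [0, 1, 0; 1, 1, 1; 0, 1, 0] grows the minimum-distance pixel into a dot.)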

% Optionally create a mask with the ground-truth white point rendered as a dot.
if ~isempty(mu_true)
  D = (us - mu_true(1)).^2 + (vs - mu_true(2)).^2;
  truth = D == min(D(:));
  truth = (conv2(double(truth), [0, 1, 0; 1, 1, 1; 0, 1, 0], 'same') > 0);
  truth = (conv2(double(truth), [0, 1, 0; 1, 1, 1; 0, 1, 0], 'same') > 0);
end

I want to ask the following questions:

  1. What is the meaning of conv2(double(mask), ones(3,3), 'same') > 0 ?
  2. I understand that mahal_dist == min(mahal_dist(:)) finds the minimum-distance point (the center of the ellipse). However, I can't understand why the conv operation conv2(double(mahal_dist == min(mahal_dist(:))), [0, 1, 0; 1, 1, 1; 0, 1, 0], 'same') is applied, or why the matrix [0, 1, 0; 1, 1, 1; 0, 1, 0] is chosen.
  3. Why are there two identical conv operations with conv2(double(truth), [0, 1, 0; 1, 1, 1; 0, 1, 0], 'same') ?

Looking forward to your reply!
@jonbarron

A non-obvious but important bug in the FC hypernet (FFCC with metadata)

While reviewing internal/TrainModel.m, lines 283-288, I found the following code:

if params.DEEP.WHITEN_FEATURES
  % Unwhiten the first layer according to the whitening transformation, so
  % that it produces the correct output on the unwhitened feature vectors.
  model.W{1} = model.W{1} * whitening_transformation.A;
  model.b{1} = model.W{1} * whitening_transformation.b + model.b{1};
end

Since model.W{1} is overwritten before model.b{1} is computed, the bias update uses the already-unwhitened weights, which is a computational mistake that may weaken model performance.
So I simply swapped the order of the two lines:

model.b{1} = model.W{1} * whitening_transformation.b + model.b{1};
model.W{1} = model.W{1} * whitening_transformation.A;

After this change, I got a significant improvement in my experiments. ^.^
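To see why the order matters: if the whitening transformation maps a raw feature vector x to A*x + b, the unwhitened layer must satisfy W_new*x + b_new = W*(A*x + b) + b_old, which requires b_new = W*b + b_old to be computed with the original W. A small numerical sanity check of the fix, with hypothetical shapes:

% Verify that unwhitening (bias first, then weights) reproduces the
% whitened model's output on raw features x.
A = randn(4); b = randn(4, 1); W = randn(2, 4); b1 = randn(2, 1); x = randn(4, 1);
b1_new = W * b + b1;  % new bias, computed with the original W
W_new = W * A;        % then unwhiten the weights
assert(norm((W_new * x + b1_new) - (W * (A * x + b) + b1)) < 1e-10);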

Gehler-Shi dataset no longer online

Thank you for providing the Gehler-Shi data (your repo may be the last public source of it!). The Gehler-Shi website is no longer providing zip links.

Questions about the function ChannelizeImage() and preconditioner.F_fft * model_precond.F_fft

Hi @jonbarron, thanks for sharing the FFCC project.
I am studying FFCC and have run into two questions.
First, I cannot understand why MaskedLocalAbsoluteDeviation is used to compute im_channels{2}. I have debugged the computation of im_channels:

function im_channels = ChannelizeImage(im, mask)
% Generate feature images (color and gradient) and combine them into
% different cell channels.

assert(isa(mask, 'logical'));
im_channels = {};

im_channels{1} = cast(bsxfun(@times, double(im), mask), 'like', im);
im_channels{2} = cast(MaskedLocalAbsoluteDeviation(im, mask), 'like', im);
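% (im_channels{1} is the masked color image; im_channels{2}, the masked local
% absolute deviation, serves as the "gradient" feature named in the header.)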

im_channels = ChannelizeImage(im, mask);  % 256*384*3, 256*384*3
......                                    % histogram feature
X = Xc;                                   % 64*64*2
......                                    % get the Fourier transforms (F_fft and X_fft) of the model parameters and X
FX_fft = sum(bsxfun(@times, X_fft, F_fft), 3)
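% (For context: an element-wise product in the Fourier domain is a circular
% convolution in the spatial domain, so this line applies each channel's
% filter and sums the per-channel responses.)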

Second, how should I understand computing preconditioner.F_fft * model_precond.F_fft to get model.F_fft? Why not just use the variable model_precond as model.F_fft? I have discovered that the values of preconditioner.F_fft are approximately zero.

% the computation of preconditioner.F_fft
u_variation_fft = abs(fft2([-1; 1]/sqrt(8), X_sz(1), X_sz(2))).^2;
v_variation_fft = abs(fft2([-1, 1]/sqrt(8), X_sz(1), X_sz(2))).^2;
total_variation_fft = u_variation_fft + v_variation_fft;
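% (These are the squared magnitudes of the frequency responses of vertical
% and horizontal finite-difference filters, i.e. a total-variation-style
% smoothness penalty expressed in the Fourier domain.)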

% A helper function for applying a scale and shift to a stack of images.
apply_scale_shift = @(x, m, b)bsxfun(@plus, bsxfun(@times, x, permute(m(:), [2,3,1])), permute(b(:), [2,3,1]));

@jonbarron, I will be looking forward to your reply! Thanks!
