
menpofit's Introduction


BSD License. Python 2.7, 3.4 and 3.5 supported.

menpofit - A deformable modelling toolkit

The Menpo Project package for state-of-the-art 2D deformable modelling techniques. The techniques that have currently been implemented include (a minimal usage sketch follows the list below):

Affine Image Alignment

  • Lucas-Kanade Image Alignment
    • Optimization algorithms: Forward Additive, Forward/Inverse Compositional
    • Residuals: SSD, Fourier SSD, ECC, Gradient Correlation, Gradient Images

Deformable Image Alignment

  • Active Template Model
    • Model variants: Holistic, Patch-based, Masked, Linear, Linear Masked
    • Optimization algorithm: Lucas-Kanade Gradient Descent (Forward/Inverse Compositional)

Landmark Localization

  • Active Appearance Model
    • Model variants: Holistic, Patch-based, Masked, Linear, Linear Masked
    • Optimization algorithms: Lucas-Kanade Gradient Descent (Alternating, Modified Alternating, Project Out, Simultaneous, Wiberg), Cascaded Regression
  • Active Pictorial Structures
    • Model variant: Generative
    • Optimization algorithm: Weighted Gauss-Newton Optimisation with fixed Jacobian and Hessian
  • Constrained Local Model
    • Active Shape Models
    • Regularized Landmark Mean-Shift
  • Unified Active Appearance Model and Constrained Local Model
    • Alternating/Project Out Regularized Landmark Mean-Shift
  • Ensemble of Regression Trees
    • [provided by DLib]
  • Supervised Descent Method
    • Model variants: Non Parametric, Parametric Shape, Parametric Appearance, Fully Parametric
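
A minimal usage sketch of the AAM-based workflow (assuming a recent menpofit with the 0.5-era API used in the menpo.org documentation; paths and parameters are illustrative):

import menpo.io as mio
from menpo.feature import igo
from menpofit.aam import HolisticAAM, LucasKanadeAAMFitter

# Import a landmarked training set (e.g. the LFPW training set)
training_images = list(mio.import_images('/path/to/lfpw/trainset/', verbose=True))

# Build a holistic AAM and wrap it in a Lucas-Kanade fitter
aam = HolisticAAM(training_images, diagonal=150, scales=(0.5, 1.0),
                  holistic_features=igo, verbose=True)
fitter = LucasKanadeAAMFitter(aam, n_shape=[5, 15], n_appearance=0.5)

# Fit a new image starting from a bounding box (here the ground-truth box of a
# builtin menpo asset, purely for illustration; in older menpo versions the
# landmarks may need to be accessed via `.lms.bounding_box()`)
image = mio.import_builtin_asset.breakingbad_jpg()
result = fitter.fit_from_bb(image, image.landmarks['PTS'].bounding_box())
print(result)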

Installation

Here in the Menpo team, we are firm believers in making installation as simple as possible. Unfortunately, menpofit is a complex project that relies on satisfying a number of complex 3rd-party library dependencies, and the default Python packaging environment does not make this an easy task. Therefore, we evangelise the use of the conda ecosystem, provided by Anaconda. To make things as simple as possible, we suggest that you use conda too! To try and persuade you, go to the Menpo website to find installation instructions for all major platforms.

Documentation

See our documentation on ReadTheDocs

Pretrained Models

Any pretrained models are provided under the assumption that they are used only for academic purposes and may not be used for commercial applications. Please see the license of the 300W project, upon which our pretrained models were trained.

Specifically, the pretrained models in menpofit.aam.pretrained may only be used for academic purposes.

menpofit's People

Contributors

doc-e-brown, georgesterpu, grigorisg9gr, jabooth, mkutny, mlamarre, nontas, patricksnape, trigeorgis, yuxiang-zhou


menpofit's Issues

Save Fitter

Hello friends!

I was wondering if there's a means to save a given fitter after training it so on next load of the program, I don't have to retrain the fitter. Was looking through fitter.py but didn't find any leads. Was wondering if this is possible/if I could make this into a feature request.

Thanks a bunch & hope you have a great day!
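
A sketch of one possible approach (assuming the fitter is picklable; note the separate issue further down about older SDMs not being picklable): serialise the trained fitter with menpo's pickle helpers and reload it on the next run.

import menpo.io as mio

# After training, persist the fitter to disk...
mio.export_pickle(fitter, '/path/to/fitter.pkl')

# ...and on the next run, load it back instead of retraining.
fitter = mio.import_pickle('/path/to/fitter.pkl')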

Linear AAM/ATM

I'm not sure that the reference frame we are building is correct for a LinearAAM/ATM. I feel like the reference frame should be scaled (from fewer to more pixels) along with the images, but at the moment this is not what is happening.

Rank parameter of adaptive AAMs

Migrating issue from Menpo 312.

Papandreou defines a rank parameter in his paper for the adaptive AAMs, which affects both the pre-computation and fitting steps. We should think whether it needs extra work to be incorporated or if this is already covered by the n_appearance parameter.

d_dx on PWA is only valid when points == source

migrated from menpo/menpo#331.

There is an undocumented restriction on the domain of the d_dx function on DifferentiablePiecewiseAffine - namely that it only works when the input points are equal to the PWA's source points. This is a little tricky to account for correctly.

Naming of fitting methods

At the moment, we have a single fit method that takes an initial shape. I think we should have two methods, one from an initial shape and one from a bounding box.

How should we implement the bounding box method? Should it just be overridden per method? What if I train an SDM that is designed for tracking, and updates from a very close initial shape? Is the bounding box method still valid? Or do we just let the user look after this?

What should the methods be called?

fit_from_initial_shape
fit_is
fit_initial_shape

fit_from_bounding_box
fit_bb
fit_from_bb

OutOfMask Sampling

When training models with MaskedImages, OutOfMaskSampleErrors may be thrown. Therefore, we should either warn about training with MaskedImage, or implicitly convert to Image using as_unmasked(copy=False).
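
A minimal sketch of the implicit conversion mentioned above (assuming the training images are menpo images, some of which are MaskedImages):

from menpo.image import MaskedImage

# Convert any MaskedImages to plain Images before training, so that sampling
# outside the original mask no longer raises OutOfMaskSampleError.
training_images = [im.as_unmasked(copy=False) if isinstance(im, MaskedImage) else im
                   for im in training_images]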

missing warnings import

The file menpofit/clm/expert/base.py is missing an import warnings statement. Line 28 calls warnings.warn, but there is no import statement in the file, so an error is raised if some of x's axes have zero variance.


import warnings  # the missing import this issue refers to

import numpy as np
from menpo.feature import ndfeature


@ndfeature
def probability_map(x, axes=(-2, -1)):
    r"""
    Generates the probability MAP of the image.

    Parameters
    ----------
    x : `menpo.image.Image` or subclass or `ndarray`
        The input image.
    axes : ``None`` or `int` or `tuple` of `int`, optional
        Axes along which the normalization is performed.

    Returns
    -------
    probability_map : `menpo.image.Image` or ``(C, X, Y, ..., Z)`` `ndarray`
        The probability MAP of the image.
    """
    x -= np.min(x, axis=axes, keepdims=True)
    total = np.sum(x, axis=axes, keepdims=True)
    nonzero = total > 0
    if np.any(~nonzero):
        warnings.warn('some of x axes have 0 variance - uniform probability '
                      'maps are used for them.')
        x[nonzero] /= total[nonzero]
        x[~nonzero] = 1 / np.prod(axes)
    else:
        x /= total
    return x

I can fix this easily enough via a pull request if it helps.

Constraining landmarks

When you heavily downscale an image, you may end up with landmarks that fall off the edge of the image. This is particularly important during training, when trying to choose the correct amount of downscaling.

At the moment, we do not warn about this, but I believe we should.
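
A sketch of the kind of check that could emit such a warning (illustrative only, not the menpofit API; assumes landmarks are available as an (n, 2) points array in (row, column) order):

import warnings
import numpy as np

def warn_if_landmarks_outside(points, image_shape):
    # points: (n, 2) array of landmark coordinates; image_shape: (height, width)
    h, w = image_shape
    outside = ((points < 0).any(axis=1) |
               (points[:, 0] >= h) |
               (points[:, 1] >= w))
    if outside.any():
        warnings.warn('{} landmarks fall outside the image bounds after '
                      'downscaling'.format(int(outside.sum())))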

Make feature parameter naming consistent

We need to make sure that the parameters for our algorithms are consistent. At the moment, we have 3 different kinds of features:

  1. Holistic Features: Applied to the whole image. These are currently often called features in menpofit. These are useful for methods that require holistic features (such as AAMs), extract patches on a static image (MaskedAAM) or want pre-processing such as normalization.
  2. Patch Features: These act the same as holistic features, but are applied per patch. This is your standard SDM feature. These are applied after holistic features.
  3. Normalization Features: These are applied after the other two kinds of features, and operate on a set of patches as a whole. For example, these are very important for normalizing the variance of a set of patches for CLMs (see the sketch after this discussion).

We need to make these concepts clean, consistent and appropriately named across the methods. At the moment, I think we support:

  • HolisticAAM, MaskedAAM - Holistic Features
  • PatchAAM, SDM - Holistic Features, Patch Features
  • CLM - Holistic Features, Normalization Features

What else should we support? Do CLMs need Patch features? Do PatchAAMs need Normalization features?
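
As a concrete illustration of the third kind (a sketch with hypothetical names, not the current menpofit parameters), a normalization feature is just a callable over the full set of patches:

import numpy as np

def normalize_patch_set(patches):
    # patches: array of extracted patches; scale the whole set to zero mean
    # and unit variance, as is important for CLM experts.
    patches = np.asarray(patches, dtype=float)
    patches -= patches.mean()
    std = patches.std()
    return patches / std if std > 0 else patches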

Possible SDM training improvements

Having looked at other people's implementations of SDMs, we could try the following:

  • Customize the 'perturbation' generation allowing things other than bounding boxes. For example, dlib uses a linear combination of one or more shapes from the training set rather than just the ground truth.
  • Add more regressors. For example, CFSS uses an averaged linear regressor computed from a subset of the data.
  • Generate perturbations at each cascade. Rather than relying on the convergence properties of the algorithm, we would regenerate the perturbations from the current standard deviation of the errors at each cascade to ensure more varied learning for the regressor.

Error types should be improved

The names of the error types should be improved (what does me_norm mean?) - and we should probably have a generic error that allows you to normalize the points in the manner of your choosing, e.g. by the interocular distance.

Also - to explain the title change - the RMSE norm is correct, but it is perhaps a strange error metric, used by things like Kaggle, that takes the whole difference matrix into account instead of point-to-point distances.
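
For example, a generic normalized point-to-point error could look like this (a sketch, with the normalizer supplied by the caller, e.g. the interocular distance; names are illustrative, not the menpofit API):

import numpy as np

def normalized_point_to_point_error(fitted, ground_truth, normalizer):
    # fitted, ground_truth: (n, 2) arrays of corresponding landmark positions
    distances = np.linalg.norm(fitted - ground_truth, axis=1)
    return distances.mean() / normalizer

# e.g. for a face: normalizer = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
# where left_eye_idx / right_eye_idx are hypothetical landmark indices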

MenpoFitting human foot toe is acting strangely

Hi All,

I am fairly new to menpo and menpofit. I aim to detect and fit a human toe in an image. So far I have annotated 20 toe images by hand and trained an ANN using them. Now I have 10 test images and I want to fit the toe using the bounding-box approach, but it is giving me really strange results. I can share the notebook and a sample dataset if need be.

Any help will be greatly appreciated

Trying out this project, I am under the impression that it is primarily focused on fitting human faces. Is my assumption correct? Any help in this regard will be greatly appreciated.

noisy_shape_from_shape rotation parameter

Is the rotation parameter of noisy_shape_from_shape correct? At the moment it controls whether the initial AlignmentSimilarity accounts for rotation or not; it will then generate an initial shape that does perturb the rotation. I feel like the rotation parameter should control whether or not the noisy shape contains rotation. This seems to be what is commonly done in the face alignment literature - only generate perturbations that contain scale and translation variation.

@jalabort thoughts?
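
For reference, a perturbation restricted to scale and translation could be sketched like this (illustrative only, not the noisy_shape_from_shape implementation):

import numpy as np

def perturb_scale_translation(points, scale_std=0.05, translation_std=3.0):
    # points: (n, 2) array; jitter the scale about the shape centroid and add
    # a global offset, leaving rotation untouched.
    centre = points.mean(axis=0)
    scale = 1.0 + np.random.randn() * scale_std
    offset = np.random.randn(2) * translation_std
    return (points - centre) * scale + centre + offset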

Question: Fitting a 2D face image onto a 3D model

Is it possible to fit a 2D face image onto a custom 3D face model, without morphing the 3D model, using face keypoints?

The process would be: given a 2D face image and a custom 3D face mesh,
1 - detect face landmarks on the 2D image
2 - identify corresponding face landmarks on the 3D face mesh (landmarks can be pre-annotated)
3 - deform/transform the 2D face image to align the landmark locations
4 - use the transformed image as a texture for the 3D face
5 - project the 3D face to 2D

while retaining the realism and expression of the 2D face?

Resetting fitter states if references to their AAMs/CLMs are changed?

Shiyang ran into a bit of a weird problem today. He has an algorithm that, given an AAM, produces a new AAM whose shape/appearance means and components are a modified version of those of the original one. His idea was to create a Fitter around the first AAM and then simply replace the AAM with the output of his algorithm (fitter.aam = new_aam). He was expecting the fitter to work as if it had been created using the new AAM, which did not happen because the fitter has some inner state that is computed on construction. While this is not a massive problem (since he can always build a new fitter...), I can see why he thought that would work... Since fitter.aam is public, maybe we should put a setter on it so that fitter._set_up is automatically called if the aam is reset. Sensible?
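
A minimal sketch of the proposed setter (attribute and method names follow the issue text and are otherwise hypothetical):

class Fitter(object):
    def __init__(self, aam):
        self._aam = aam
        self._set_up()

    @property
    def aam(self):
        return self._aam

    @aam.setter
    def aam(self, new_aam):
        # Rebuild the cached inner state whenever the model is swapped out,
        # so `fitter.aam = new_aam` behaves like constructing a new fitter.
        self._aam = new_aam
        self._set_up()

    def _set_up(self):
        ...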

The AAM problem

When I follow the docs at http://www.menpo.org/menpofit/aam.html and run:

%matplotlib inline

from menpowidgets import visualize_images

visualize_images(training_images)

I get an error like this:

ValueError Traceback (most recent call last)
in ()
17 get_ipython().magic('matplotlib inline')
18 from menpowidgets import visualize_images
---> 19 visualize_images(training_images)

D:\ProgramFiles\Anaconda\envs\test\lib\site-packages\menpowidgets\base.py in visualize_images(images, figure_size, browser_style, custom_info_callback)
586 n_channels=images[0].n_channels,
587 image_is_masked=isinstance(images[0], MaskedImage),
--> 588 render_function=render_function)
589 landmark_options_wid = LandmarkOptionsWidget(
590 group_keys=groups_keys, labels_keys=labels_keys,

D:\ProgramFiles\Anaconda\envs\test\lib\site-packages\menpowidgets\options.py in init(self, n_channels, image_is_masked, render_function, style)
1574
1575 # Set values
-> 1576 self.set_widget_state(n_channels, image_is_masked, allow_callback=False)
1577
1578 # Set slider update

D:\ProgramFiles\Anaconda\envs\test\lib\site-packages\menpowidgets\options.py in set_widget_state(self, n_channels, image_is_masked, allow_callback)
1856 render_function = self._render_function
1857 self.remove_render_function()
-> 1858 self.remove_callbacks()
1859
1860 # Assign properties

D:\ProgramFiles\Anaconda\envs\test\lib\site-packages\menpowidgets\options.py in remove_callbacks(self)
1627 self._save_glyph, names='selected_values', type='change')
1628 self.cmap_select.unobserve(
-> 1629 self._save_options, names='value', type='change')
1630 self.alpha_slider.unobserve(
1631 self._save_options, names='value', type='change')

D:\ProgramFiles\Anaconda\envs\test\lib\site-packages\traitlets\traitlets.py in unobserve(self, handler, names, type)
1284 names = parse_notifier_name(names)
1285 for n in names:
-> 1286 self._remove_notifiers(handler, n, type)
1287
1288 def unobserve_all(self, name=All):

D:\ProgramFiles\Anaconda\envs\test\lib\site-packages\traitlets\traitlets.py in _remove_notifiers(self, handler, name, type)
1194 del self._trait_notifiers[name][type]
1195 else:
-> 1196 self._trait_notifiers[name][type].remove(handler)
1197 except KeyError:
1198 pass

ValueError: list.remove(x): x not in list

How can I fix this? Thanks very much.

The possibility of AAM fitting referring to a previous successful fit result

Hi, AAM fitting for facial landmarks works very well in most cases, but in some frames it fails miserably, even though the image is almost the same and the bounding box is proper. Is there any way to let the AAM fit refer to the successful fitting result of the previous frame, just like the dlib tracker for movie analysis?
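
A possible tracking-style loop (a sketch assuming a recent menpofit with fit_from_bb/fit_from_shape methods): seed each frame with the previous frame's final shape, and fall back to a detector bounding box if the fit diverges.

# First frame: initialise from a detected bounding box
result = fitter.fit_from_bb(frames[0], initial_bounding_box)

for frame in frames[1:]:
    # Subsequent frames: initialise from the previous final shape
    result = fitter.fit_from_shape(frame, result.final_shape)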

SDM Variations

Excluding regression methodologies, there are roughly four different kinds of SDM methods:

  1. Features -> Non-parametric shape (68-points)
  2. Features -> Parametric shape (PCAModel / OrthoPDM)
  3. Parametric features (PCAModel) -> Non-parametric shape (68-points)
  4. Parametric features (PCAModel) -> Parametric shape (PCAModel / OrthoPDM)

In short, either the shape or the features can be parametric. At the moment, we only natively support (1), and (4) via SDM and SupervisedAAMFitter respectively. I've managed to hack together something that looks like (2), but it involves a PCAModel per patch, which is probably inefficient.

We should support all of these combinations, via a combination of new SDM fitters, algorithms and possibly a few methods that build callables that represent PCA features.

For example, at the moment we can't really replicate Project-out Cascaded Regression because we need to be able to compute the SDM as in (4) but with a smaller vector feature (SIFT). We can't currently do this because PatchAAM cannot build a model with a vector feature - and nor should it be able to!

So we can keep the SD-AAMs because warping is interesting, but PO-CR should probably be implemented via passing an OrthoPDM and new callable type that marries feature computation with a model (returning weights).

AAM problem??

Hi, from the beginning of the training process I'm having some problems with the "aam" parameter. When I construct the "aam", the program crashes and is terminated. What should I do?

import menpo.io as mio
path_to_images = '/path/to/lfpw/trainset/'
training_images = mio.import_images(path_to_images, verbose=True)

from menpofit.aam import HolisticAAM
from menpo.feature import igo

# => this is exactly where the problem occurs
aam = HolisticAAM(training_images, reference_shape=None,
                  diagonal=150, scales=(0.5, 1.0),
                  holistic_features=igo, verbose=True)
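
If the crash is the process running out of memory (a common cause when building a HolisticAAM over a full training set), limiting the model sizes and training in batches may help; these parameters appear in the constructor signature shown in a traceback further down this page:

aam = HolisticAAM(training_images, reference_shape=None,
                  diagonal=150, scales=(0.5, 1.0),
                  holistic_features=igo, verbose=True,
                  max_shape_components=20,
                  max_appearance_components=150,
                  batch_size=16)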

Allow the normalisation scaling to be customized

As has become apparent on the mailing list, some users have images that contain objects that don't necessarily scale well according to the diagonal of the bounding box. For example, a spine is very thin but has a large bounding box, so the normalisation is not effective.

Therefore, we should allow the user to pass a method that controls how this normalisation is performed.
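
A sketch of the kind of hook being proposed (names are hypothetical): the user supplies a callable that rescales each training image by their own measure of object size, instead of the landmark bounding-box diagonal.

def rescale_by_landmark_height(image, points, target_height=180.0):
    # points: (n, 2) array of the image's landmarks in (row, column) order.
    # Rescale so the vertical extent of the landmarks matches target_height,
    # which suits thin, elongated objects such as a spine.
    height = points[:, 0].ptp()
    return image.rescale(target_height / height)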

AAMs problems in Refactor branch

On the refactor branch #67 all AAM fitting algorithms using the LucasKanadeStandardInterface are slightly wrong. At the moment they are performing additive, instead of compositional updates!

We have two options to amend this:

  1. Re-implement the solve_all_map and solve_all_ml methods on the previous interface so that they effectively use the "parameter Jacobian" Jp. This might make the functions _solve_all_map and _solve_all_ml quite complicated...
  2. Go back to using the VComposable interface. This would mean that PDMs need to support the previous interface and that ModelDrivenTransforms and PDMs need to handle priors internally.

Possible to fit with only shape model

I have been looking through the menpofit documentation. And, apparently, all the different fitters combine different models (e.g. for the AAM: shape model, motion model, appearance model). So, they are not only fitting based on the shape, but also on the color? Am I correct in assuming that?

So, my question is: Is there a fitter only working on the shape model, hence only taking the shape into account? Is it possible to work only with shapes, but not with textures?

Many thanks!

Fitting results

At the moment, fitting results work but are not very highly refined. I think we need to nail down a few things:

  1. The concept of a base result that is literally just ground truth and final shape. This will become the equivalent of the SerializableResult.
    • How do we attach these results back to images, yet keep the file size down? Just attach the image? No image? Path? Some sort of ID?
  2. Check the current hierarchy makes sense - do all the fitting results such as the AAM results have all the information we want like appearance parameters etc? Are they all available under sensible names?
  3. Visualization - Can we do more advanced visualization for things like AAM results? Can the widgets just handle this on the fly or do we need multiple visualization methods?

This is also likely important for making MenpoBench nice and consistent with MenpoFit.

Pre-trained Holistic no-op AAM

Hi,

In case you still have the (yet) unshared training scripts,
could you please make available a holistic AAM with pixel (no-op) features, trained with the same files as the existing patch-based fast_dsift one (but with a larger diagonal, e.g. 150)?

I know that the patch one performs much better at fitting, I'm just playing around with reconstructions.

Thanks,
George

Training failed with ValueError: could not broadcast input array from shape (14444) into shape (43332)

I followed the Menpofit basic tutorial but it threw an error.

  • Environment: Anaconda on pyenv with virtualenv
  • OS: UBUNTU 16.04
  • Python: Python 3.5.6 :: Anaconda, Inc.
  • conda list:
#
# Name                    Version                   Build  Channel
appdirs                   1.4.3            py35h28b3542_0  
apptools                  4.4.0                    py35_0    menpo
asn1crypto                0.24.0                   py35_0  
attrs                     18.2.0           py35h28b3542_0  
automat                   0.7.0                    py35_0  
backcall                  0.1.0                    py35_0  
blas                      1.0                         mkl  
bleach                    2.1.4                    py35_0  
boost                     1.59.0                   py35_0    menpo
bzip2                     1.0.6                h14c3975_5  
ca-certificates           2018.03.07                    0  
certifi                   2018.8.24                py35_1  
cffi                      1.11.5           py35he75722e_1  
configobj                 5.0.6                    py35_1  
constantly                15.1.0           py35h28b3542_0  
cryptography              2.3.1            py35hc365091_0  
cycler                    0.10.0           py35hc4d5149_0  
cyffld2                   0.2.4                    py35_0    menpo
cypico                    0.2.7                    py35_0    menpo
cyrasterize               0.3.2                    py35_0    menpo
Cython                    0.23.5                    <pip>
cyvlfeat                  0.4.6                    py35_0    menpo
decorator                 4.3.0                    py35_0  
dlib                      18.18                    py35_2    menpo
docopt                    0.6.2                    py35_0  
entrypoints               0.2.3                    py35_2  
envisage                  4.5.1                    py35_0    menpo
ffmpeg                    2.7.0                         0    menpo
fftw                      3.3.4                         0    menpo
fontconfig                2.11.1                        6  
freetype                  2.5.5                         2  
glew                      2.0.0                         0    menpo
glfw3                     3.2.1                         0    menpo
gmp                       6.1.2                h6c8ec71_1  
hdf5                      1.10.2               hba1933b_1  
html5lib                  1.0.1                    py35_0  
hyperlink                 18.0.0                   py35_0  
icu                       58.2                 h9c2bf20_1  
idna                      2.7                      py35_0  
incremental               17.5.0                   py35_0  
intel-openmp              2019.0                      117  
ipykernel                 4.9.0                    py35_0  
ipython                   6.5.0                    py35_0  
ipython_genutils          0.2.0            py35hc9e07d0_0  
ipywidgets                6.0.0                    py35_0  
jedi                      0.12.1                   py35_0  
jinja2                    2.10                     py35_0  
jpeg                      9b                   h024ee3a_2  
jsonschema                2.6.0            py35h4395190_0  
jupyter                   1.0.0                    py35_4  
jupyter_client            5.2.3                    py35_0  
jupyter_console           5.2.0            py35h4044a63_1  
jupyter_core              4.4.0                    py35_0  
kiwisolver                1.0.1                     <pip>
libedit                   3.1.20170329         h6b74fdf_2  
libffi                    3.2.1                hd88cf55_4  
libgcc-ng                 8.2.0                hdf63c60_1  
libgfortran-ng            7.3.0                hdf63c60_0  
libpng                    1.6.34               hb9fc6fc_0  
libsodium                 1.0.16               h1bed415_0  
libstdcxx-ng              8.2.0                hdf63c60_1  
libtiff                   4.0.9                he85c1e1_2  
libxml2                   2.9.8                h26e45fe_1  
markupsafe                1.0              py35h14c3975_1  
matplotlib                1.5.0                     <pip>
matplotlib                1.5.1               np111py35_0  
mayavi                    4.5.0                    py35_0    menpo
menpo                     0.8.1                    py35_0    menpo
menpo                     0.8.1                     <pip>
menpo3d                   0.6.0                    py35_0    menpo
menpocli                  0.1.0                    py35_0    menpo
menpodetect               0.5.0                    py35_0    menpo
menpofit                  0.5.0                    py35_0    menpo
menpofit                  0.5.0                     <pip>
menpoproject              2.0                        py_0    menpo
menpowidgets              0.3.0                    py35_0    menpo
metis                     5.1.0                         0    menpo
mistune                   0.8.3            py35h14c3975_1  
mkl                       2019.0                      117  
mock                      2.0.0            py35h70ca42c_0  
nbconvert                 5.3.1                    py35_0  
nbformat                  4.4.0            py35h12e6e07_0  
ncurses                   6.1                  hf484d3e_0  
nose                      1.3.7                    py35_2  
notebook                  5.6.0                    py35_0  
numpy                     1.11.3          py35h1d66e8a_10  
numpy                     1.10.4                    <pip>
numpy-base                1.11.3          py35h81de0dd_10  
olefile                   0.46                     py35_0  
opencv3                   3.1.0                    py35_0    menpo
openssl                   1.0.2p               h14c3975_0  
pandas                    0.23.4           py35h04863e7_0  
pandoc                    2.2.3.2                       0  
pandocfilters             1.4.2                    py35_1  
parso                     0.3.1                    py35_0  
pbr                       4.2.0                    py35_0  
pexpect                   4.6.0                    py35_0  
pickleshare               0.7.4            py35hd57304d_0  
Pillow                    4.3.0                     <pip>
pillow                    4.2.1                    py35_0  
pip                       10.0.1                   py35_0  
pip                       18.0                      <pip>
prometheus_client         0.3.1            py35h28b3542_0  
prompt_toolkit            1.0.15           py35hc09de7a_0  
ptyprocess                0.6.0                    py35_0  
pyasn1                    0.4.4            py35h28b3542_0  
pyasn1-modules            0.2.2                    py35_0  
pycparser                 2.18                     py35_1  
pyface                    5.1.0                    py35_0    menpo
pygments                  2.2.0            py35h0f41973_0  
pyopenssl                 18.0.0                   py35_0  
pyparsing                 2.2.0                    py35_1  
pyqt                      4.11.4                   py35_4  
python                    3.5.6                hc3d631a_0  
python-dateutil           2.7.3                    py35_0  
pytz                      2018.5                   py35_0  
pyzmq                     17.1.2           py35h14c3975_0  
qt                        4.8.7                         3  
qtconsole                 4.3.1            py35h4626a06_0  
readline                  7.0                  h7b6447c_5  
scikit-learn              0.19.1           py35hbf1f462_0  
scikit-learn              0.17.1                    <pip>
scikit-sparse             0.3.1                    py35_0    menpo
scipy                     0.16.1                    <pip>
scipy                     0.19.1           py35ha8f041b_3  
send2trash                1.5.0                    py35_0  
service_identity          17.0.0           py35h28b3542_0  
setuptools                40.2.0                   py35_0  
simplegeneric             0.8.1                    py35_2  
sip                       4.18                     py35_0  
six                       1.11.0                   py35_1  
sqlite                    3.24.0               h84994c4_0  
suitesparse               4.4.1                         0    menpo
system                    5.8                           2  
terminado                 0.8.1                    py35_1  
testpath                  0.3.1            py35had42eaf_0  
tk                        8.6.8                hbc83047_0  
tornado                   5.1              py35h14c3975_0  
traitlets                 4.3.2            py35ha522a97_0  
traits                    4.5.0                    py35_0    menpo
traitsui                  5.1.0                    py35_0    menpo
twisted                   18.7.0           py35h14c3975_1  
vlfeat                    0.9.20                        1    menpo
vtk                       7.0.0                    py35_0    menpo
wcwidth                   0.1.7            py35hcd08066_0  
webencodings              0.5.1                    py35_1  
wheel                     0.31.1                   py35_0  
widgetsnbextension        3.4.1                    py35_0  
xz                        5.2.4                h14c3975_4  
zeromq                    4.2.5                hf484d3e_1  
zlib                      1.2.11               ha838bed_2  
zope                      1.0                      py35_1  
zope.interface            4.5.0            py35h14c3975_0  

I also tried the following things, but they ended with the same result.

  • without pyenv and virtualenv on Ubuntu,
  • downgrade to menpofit==0.3.0
  • upgrade numpy, scipy and matplotlib to the latest by pip
  • change dataset to IBUG, XM2VTS found on IBUG website
  • tried PatchAAM

Executed Jupyter notebook is below:

Input 1

import menpo.io as mio
path_to_images = '/home/myname/work/data/LFPW/trainset/'
training_images = mio.import_images(path_to_images, verbose=True)

Output 1
Found 811 assets, index the returned LazyList to import.

Input 2

from menpofit.aam import HolisticAAM
from menpofit.aam import PatchAAM
from menpo.feature import igo

aam = HolisticAAM(training_images, reference_shape=None,
                  diagonal=150, scales=(0.9, 1.0),
                  holistic_features=igo, verbose=True,
                  batch_size=16
                 )
print(aam)

Output 2

- Computing reference shape                                                     Computing batch 0
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Computing feature space: [          ] 6% (1/16) - 00:00:00 remaining

/home/myname/.pyenv/versions/anaconda3-5.2.0/envs/menpo/lib/python3.5/site-packages/menpofit/aam/base.py:164: MenpoFitBuilderWarning: No reference shape was provided. The mean of the first batch will be the reference shape. If the batch mean is not representative of the true mean, this may cause issues.
  MenpoFitBuilderWarning)

  - Scale 0: Doneding appearance model                                          ing
  - Scale 1: Building shape model                                               

/home/myname/.pyenv/versions/anaconda3-5.2.0/envs/menpo/lib/python3.5/site-packages/menpofit/builder.py:338: MenpoFitModelBuilderWarning: The reference shape passed is not a TriMesh or subclass and therefore the reference frame (mask) will be calculated via a Delaunay triangulation. This may cause small triangles and thus suboptimal warps.
  MenpoFitModelBuilderWarning)
/home/myname/.pyenv/versions/anaconda3-5.2.0/envs/menpo/lib/python3.5/site-packages/menpo/image/boolean.py:711: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  copy.pixels[slices].flat = point_in_pointcloud(pointcloud, indices)

  - Scale 1: Doneding appearance model                                          
                                                              Computing batch 1
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Doneding appearance model                                          ning
  - Scale 1: Doneding appearance model                                          
                                                              Computing batch 2
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Doneding appearance model                                          ing
  - Scale 1: Doneding appearance model                                          
                                                              Computing batch 3
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Doneding appearance model                                          ing
  - Scale 1: Doneding appearance model                                          
                                                              Computing batch 4
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Doneding appearance model                                          ing
  - Scale 1: Doneding appearance model                                          
                                                              Computing batch 5
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Doneding appearance model                                          ning
  - Scale 1: Doneding appearance model                                          
                                                              Computing batch 6
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Doneding appearance model                                          ing
  - Scale 1: Doneding appearance model                                          
                                                              Computing batch 7
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Doneding appearance model                                          ing
  - Scale 1: Doneding appearance model                                          
                                                              Computing batch 8
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Doneding appearance model                                          ing
  - Scale 1: Doneding appearance model                                          
                                                              Computing batch 9
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Doneding appearance model                                          ing
  - Scale 1: Doneding appearance model                                          
                                                              Computing batch 10
- Building modelsges size: [==========] 100% (16/16) - done.                    
  - Scale 0: Building appearance model                                          ning

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-40-c5f48712ecae> in <module>()
      6                   diagonal=150, scales=(0.9, 1.0),
      7                   holistic_features=igo, verbose=True,
----> 8                   batch_size=16
      9                  )
     10 print(aam)

~/.pyenv/versions/anaconda3-5.2.0/envs/menpo/lib/python3.5/site-packages/menpofit/aam/base.py in __init__(self, images, group, holistic_features, reference_shape, diagonal, scales, transform, shape_model_cls, max_shape_components, max_appearance_components, verbose, batch_size)
    137         # Train AAM
    138         self._train(images, increment=False, group=group, verbose=verbose,
--> 139                     batch_size=batch_size)
    140 
    141     def _train(self, images, increment=False, group=None,

~/.pyenv/versions/anaconda3-5.2.0/envs/menpo/lib/python3.5/site-packages/menpofit/aam/base.py in _train(self, images, increment, group, shape_forgetting_factor, appearance_forgetting_factor, verbose, batch_size)
    181                 shape_forgetting_factor=shape_forgetting_factor,
    182                 appearance_forgetting_factor=appearance_forgetting_factor,
--> 183                 verbose=verbose)
    184 
    185     def _train_batch(self, image_batch, increment=False, group=None,

~/.pyenv/versions/anaconda3-5.2.0/envs/menpo/lib/python3.5/site-packages/menpofit/aam/base.py in _train_batch(self, image_batch, increment, group, verbose, shape_forgetting_factor, appearance_forgetting_factor)
    267                 self.appearance_models[j].increment(
    268                     warped_images,
--> 269                     forgetting_factor=appearance_forgetting_factor)
    270                 # trim appearance model if required
    271                 if self.max_appearance_components[j] is not None:

~/.pyenv/versions/anaconda3-5.2.0/envs/menpo/lib/python3.5/site-packages/menpo/model/pca.py in increment(self, samples, n_samples, forgetting_factor, verbose)
   1426         """
   1427         # build a data matrix from the new samples
-> 1428         data = as_matrix(samples, length=n_samples, verbose=verbose)
   1429         n_new_samples = data.shape[0]
   1430         PCAVectorModel.increment(self, data, n_samples=n_new_samples,

~/.pyenv/versions/anaconda3-5.2.0/envs/menpo/lib/python3.5/site-packages/menpo/math/linalg.py in as_matrix(vectorizables, length, return_template, verbose)
    151     i = 0
    152     for i, sample in enumerate(vectorizables, 1):
--> 153         data[i] = sample.as_vector()
    154 
    155     # we have exhausted the iterable, but did we get enough items?

ValueError: could not broadcast input array from shape (14444) into shape (43332)

PS.

visualize_images(training_images)

also failed with

AttributeError: module 'matplotlib.colors' has no attribute 'to_rgba'

which I couldn't resolve; this might also help in finding the cause of this issue.

Gaussian blurring

We need to decide whether to add the option of Gaussian blurring in our builders. I ran some indicative experiments with holistic AAMs (features=double_igo, diagonal=180, scales=(1., 0.5, 0.25), scale_shapes=True, scale_features=True, n_shape=[5, 10, 15], n_appearance=100, max_iters=60) and the blurring seems to consistently increase the performance. I used the LFPW training and testing sets. See the following tables for details:

1) ModifiedAlternatingInverseCompositional algorithm:

Blurring  noise_std   0.02   0.03   0.04   0.05    mean  median     std
False     0.0         31.2   78.6   92.4   98.7  0.0252  0.0231  0.0096
True      0.0         31.2   81.7   94.6   99.1  0.0246  0.0229  0.0083
False     0.04        30.4   76.3   87.5   92.0  0.0286  0.0230  0.0190
True      0.04        32.1   78.6   90.6   93.8  0.0269  0.0230  0.0150

2) WibergInverseCompositional algorithm:

Blurring  noise_std   0.02   0.03   0.04   0.05    mean  median     std
False     0.0         31.2   78.1   92.4   98.2  0.0253  0.0232  0.0096
True      0.0         31.7   81.2   94.2   98.7  0.0248  0.0231  0.0087
False     0.04        30.4   75.9   87.5   92.0  0.0286  0.0230  0.0188
True      0.04        30.4   79.5   91.5   95.5  0.0265  0.0231  0.0142

However, after mentioning this to @jalabort , it seems that it is not easy to add this option for all builders. The biggest problem seems to be how to determine the value of sigma that would be used in the gaussian_pyramid method.

SDMs are not pickleable

This is due to the features method on the RegressionTrainer class/subclasses. For the NonParameterRegressorTrainer, the features method is passed into the NonParametricRegressor here, which is a bound method. This can't be pickled. Therefore, we need to break that dependency and make that features method a separate function or a Callable.
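
A sketch of the proposed fix (hypothetical names): replace the bound method with a small callable object that holds only the data it needs, so it pickles cleanly without dragging the trainer instance along.

class PatchFeatureCallable(object):
    def __init__(self, feature_callable, patch_shape):
        self.feature_callable = feature_callable
        self.patch_shape = patch_shape

    def __call__(self, image, shape):
        # Extract patches around the current shape and compute the feature on
        # each; no reference back to a trainer instance is kept.
        patches = image.extract_patches(shape, patch_shape=self.patch_shape,
                                        as_single_array=False)
        return [self.feature_callable(p) for p in patches]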

dlib wrapper error

Hello,

I get this error when using the DlibWrapper with multiprocessing.Pool:

RuntimeError: Pickling of "dlib.shape_predictor_training_options" instances is not enabled (http://www.boost.org/libs/python/doc/v2/pickle.html)

My code is:

with Pool(self._nThreads) as p:
    p.starmap(_process_one_file,
              zip(files, repeat(processor)))

where processor is an instance of a class where one of the members could be a menpofit.dlib.DlibWrapper. Code runs fine when that member is instead a menpofit.aam.LucasKanadeAAMFitter, or when I don't use multiprocessing.Pool at all.

Could you help me find a solution to this problem, while still keeping the benefits of multiple processes ?

Thank you
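
One possible workaround (a sketch with hypothetical helper names such as make_processor): build the DlibWrapper inside each worker process via a Pool initializer, so the unpicklable dlib object never has to cross the process boundary.

from multiprocessing import Pool

_processor = None

def _init_worker(model_path):
    # Runs once per worker: construct the processor (and its DlibWrapper)
    # locally instead of pickling it from the parent process.
    global _processor
    _processor = make_processor(model_path)   # hypothetical factory

def _process_one_file_in_worker(f):
    return _process_one_file(f, _processor)

with Pool(nThreads, initializer=_init_worker, initargs=(model_path,)) as p:
    p.map(_process_one_file_in_worker, files)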
