pymvpa's Introduction

PyMVPA -- Multivariate Pattern Analysis in Python

For information on how to install PyMVPA, please see doc/source/installation.rst.

Further information and access to binary packages are available from the project website at http://www.pymvpa.org .

pymvpa's People

Contributors

adamatus, adswa, andycon, armaneshaghi, bpinsard, cameronphchen, cgohlke, dillonplunkett, dinga92, effigies, esc, feilong, geeragh, jpellman, justinshenk, mekman, mih, miykael, mvdoc, neurosbh, nno, otizonaizit, psederberg, rekadanielweiner, satra, soletmons, swaroopgj, timgates42, usmanayubsh, yarikoptic

pymvpa's Issues

installation problem

root@nitish-laptop:~/PyMVPA# python2 setup.py build_ext
running build_ext
running build_src
building extension "mvpa.clfs.libsmlrc.smlrc" sources
building extension "mvpa.clfs.libsvmc._svmc" sources
building data_files sources
customize UnixCCompiler
customize UnixCCompiler using build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
building 'mvpa.clfs.libsmlrc.smlrc' extension
compiling C sources
C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC

compile options: '-I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/include/python2.6 -c'
gcc: mvpa/clfs/libsmlrc/smlr.c
mvpa/clfs/libsmlrc/smlr.c:10:20: error: Python.h: No such file or directory
mvpa/clfs/libsmlrc/smlr.c:13: warning: return type defaults to ‘int’
mvpa/clfs/libsmlrc/smlr.c: In function ‘DL_EXPORT’:
mvpa/clfs/libsmlrc/smlr.c:13: error: expected declaration specifiers before ‘stepwise_regression’
mvpa/clfs/libsmlrc/smlr.c:241: error: expected declaration specifiers before ‘PyMODINIT_FUNC’
mvpa/clfs/libsmlrc/smlr.c:12: error: parameter name omitted
mvpa/clfs/libsmlrc/smlr.c:244: error: expected ‘{’ at end of input
error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/include/python2.6 -c mvpa/clfs/libsmlrc/smlr.c -o build/temp.linux-i686-2.6/mvpa/clfs/libsmlrc/smlr.o" failed with exit status 1

While installing, I got this error. Please help me.
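The key line in the log is `Python.h: No such file or directory`: the compiler cannot find the Python C headers, which on most Linux distributions ship in a separate development package. A possible fix, assuming a Debian/Ubuntu system of the Python 2.6 era (other distributions name the package differently, e.g. python-devel on Fedora):

```shell
# Install the Python development headers that provide Python.h
sudo apt-get install python-dev

# Then retry the extension build
python2 setup.py build_ext
```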

Undetermined problem with classifier reuse?

michael@meiner ~/hacking/pymvpa.dev (git)-[master] % MVPA_SEED=187138923 MVPA_TESTS_QUICK=no MVPA_TESTS_LOWMEM=yes make unittest-nonlabile
I: Running only non labile unittests. None of them should ever fail.
T: MVPA_SEED=187138923
T: Warning -- following test files were found but will not be tested: test_atlases.py
======================================================================
FAIL: test_null_dist_prob (mvpa.tests.test_transerror.ErrorsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/michael/hacking/pymvpa.dev/mvpa/tests/test_transerror.py", line 251, in test_null_dist_prob
    num_perm)
AssertionError: 
 Single scenario lead to failures of unittest test_null_dist_prob:
  on
    l_clf=<LinSVM on SMLR(lm=1) non-0> :
     9 != 10

BUT

michael@meiner ~/hacking/pymvpa.dev (git)-[master] % MVPA_SEED=187138923 MVPA_TESTS_QUICK=no MVPA_TESTS_LOWMEM=yes make ut-transerror
....
----------------------------------------------------------------------
Ran 10 tests in 17.069s

OK

This could be a problem with reusing this particular feature-selection classifier. Or the actual error may simply not be visible in the second run, because the effective seed for this test was different in the first run. In any case, there is a bug somewhere.

Add a wrapper for mappers that makes them swap forward and reverse

Rationale:

If a dataset gets reverse-mapped by a mapper, the mapper stored in that dataset becomes invalid. Instead of stripping that mapper, we could keep growing it with copies of the reverse-operating mapper that have their behavior permanently reversed.

Whether that will scale far beyond a few candidate mappers is yet to be determined.
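The idea can be sketched with a tiny wrapper that swaps a mapper's two directions. All names here are illustrative stand-ins, not the actual PyMVPA mapper API:

```python
class ScaleMapper:
    """Toy mapper: forward multiplies, reverse divides."""
    def __init__(self, factor):
        self.factor = factor

    def forward(self, data):
        return [x * self.factor for x in data]

    def reverse(self, data):
        return [x / self.factor for x in data]


class FlippedMapper:
    """Wrap a mapper so its mapping direction is permanently reversed."""
    def __init__(self, mapper):
        self._mapper = mapper

    def forward(self, data):
        # forward on the wrapper runs the wrapped mapper's reverse
        return self._mapper.reverse(data)

    def reverse(self, data):
        return self._mapper.forward(data)


flipped = FlippedMapper(ScaleMapper(2.0))
print(flipped.forward([2.0, 4.0]))  # → [1.0, 2.0]
```

A dataset's mapper chain could then be extended with such flipped copies instead of being stripped after a reverse mapping.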

fails to load precomputed tutorial datasets

 In [7]: ds = load_tutorial_results('ds_haxby2001')
 ....
 /home/yoh/proj/pymvpa/master/mvpa/base/hdf5.pyc in _hdf_dict_to_obj(hdf, memo, skip)
    283         items = _hdf_list_to_obj(hdf, memo)
    284         items = [i for i in items if not i[0] in skip]

--> 285         return dict(items)
    286     else:
    287         # legacy files had keys as group names
TypeError: unhashable type: 'numpy.ndarray'

Cause:

ipdb> print [i[0] for i in items]
[array('chunks', dtype='|S6'), array('time_indices', dtype='|S12'),
 array('runtype', dtype='|S7'), array('targets', dtype='|S7'),
 array('time_coords', dtype='|S11')]
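The failure can be reproduced in a few lines: the HDF5 layer hands back the keys as 0-d NumPy string arrays, which are unhashable, so `dict(items)` blows up. The unwrapping shown afterwards is only a sketch of a possible fix, not the actual hdf5.py patch:

```python
import numpy as np

# Keys come back from HDF5 as 0-d byte-string arrays; arrays are unhashable.
items = [(np.array('chunks', dtype='|S6'), [1, 2]),
         (np.array('targets', dtype='|S7'), ['a', 'b'])]

try:
    dict(items)
except TypeError as e:
    print(e)  # unhashable type: 'numpy.ndarray'

# Possible fix: unwrap scalar array keys into plain Python scalars first.
fixed = dict((k.item() if isinstance(k, np.ndarray) and k.ndim == 0 else k, v)
             for k, v in items)
print(sorted(fixed))  # keys are ordinary byte strings again
```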

Event-related dataset .reverse1() has hiccups

In some conditions it prepends a new dimension (one element), which causes FeatureSliceMapper to create arrays of shape
(nsamples x nfeatures_per_volume_prior_slicing x nfeatures_per_volume_after_slicing)

That is a side effect of having to switch from reverse1() to reverse() somewhere in the chain, and also of having to handle multi-dimensional features.

Be clever enough to not puke on datasets without chunks or targets (if not necessary)

Right now I have to explicitly set 'chunks_attr=None' in zscore if I call it with a dataset that doesn't have chunks. Something similar happens with Dataset.summary() for a dataset that doesn't have targets -- although there even setting the attribute to None doesn't help.

We should have a gentler way of dealing with such datasets. However, I'm not sure what that way would be, hence I'm not "fixing" it right away.
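One gentle behavior would be to fall back to whole-dataset normalization when no chunking attribute is available. A sketch in plain NumPy, not the actual PyMVPA zscore signature:

```python
import numpy as np

def zscore(data, chunks=None):
    """Z-score per chunk; fall back to the whole array when no chunks given.

    Sketch of the requested graceful fallback (illustrative signature).
    """
    data = np.asarray(data, dtype=float)
    if chunks is None:
        # no chunk structure: normalize across the entire dataset
        return (data - data.mean()) / data.std()
    out = np.empty_like(data)
    for c in np.unique(chunks):
        sel = np.asarray(chunks) == c
        out[sel] = (data[sel] - data[sel].mean()) / data[sel].std()
    return out

print(zscore([1.0, 2.0, 3.0]))                    # whole-array z-scoring
print(zscore([1, 2, 3, 4], chunks=[0, 0, 1, 1]))  # per-chunk z-scoring
```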

Let Splitter ignore the '0' values upon request.

Currently, problems arise if a Partitioner is used in conjunction with a Balancer and a Splitter. If some elements are marked as excluded (i.e. '0'), the Splitter will nevertheless return that portion. If that happens inside a TransferMeasure, it might yield unintended behavior.
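The requested option could look roughly like this; the function and flag names are illustrative, not the actual Splitter API:

```python
def split(partitions, values, ignore_zero=False):
    """Group values by partition id; optionally drop the '0' (excluded) group."""
    groups = {}
    for p, v in zip(partitions, values):
        if ignore_zero and p == 0:
            continue  # skip samples excluded by e.g. a Balancer
        groups.setdefault(p, []).append(v)
    return groups

parts = [1, 1, 2, 0, 2, 0]      # 0 marks samples excluded by balancing
data  = ['a', 'b', 'c', 'd', 'e', 'f']
print(split(parts, data, ignore_zero=True))  # no 0 group in the output
```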

Ability to read surface data with topographic information

Ability to read surface data with topographic information:

  • Need to read from some surface data format like NIML dataset files from AFNI/SUMA
    and understand the topographic info.
  • Maybe store the coordinates on the surface (and/or in 3D) as feature attributes

Implement parallelization support for RepeatedMeasure

The code basically screams for it. Right now it produces a list of results that gets collapsed into a Dataset. Each iteration should be independent of all others -- we'd only need to have copies of the measures, and figure out whether a predictable order of iterations is necessary.

With parallelization in RepeatedMeasure, it becomes even more similar to Searchlight...
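Since each iteration is independent, the parallelization could be as simple as mapping the measure over its iterations with a pool. A generic sketch, not RepeatedMeasure's actual internals (a thread pool is shown for brevity; a process pool with copies of the measure would be analogous):

```python
from concurrent.futures import ThreadPoolExecutor

def measure(seed):
    """Stand-in for one RepeatedMeasure iteration on its own data split."""
    return seed * seed

seeds = range(5)
with ThreadPoolExecutor(max_workers=4) as pool:
    # Executor.map preserves input order, giving a predictable
    # iteration order for collapsing the results into a Dataset.
    results = list(pool.map(measure, seeds))

print(results)  # → [0, 1, 4, 9, 16]
```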

Cached kernel when used with searchlight gives different results

I used a cached kernel with a searchlight like this:

kernel = LinearSGKernel(normalizer_cls=False)
kernel = CachedKernel(kernel)
clf = sg.SVM(svm_impl='libsvm', C=-1.0, kernel=kernel)
cvterr = CrossValidatedTransferError(TransferError(clf), splitter=NFoldSplitter())
sl = sphere_searchlight(cvterr, radius = 3, postproc = mean_sample())

When I ran it on a dataset, it gave results different from those obtained when I don't use the kernel argument in the clf creation.

Create step-by-step tutorial

The documentation should be restructured to provide an easier entrypoint into PyMVPA. The current examples should be transformed into an incremental tutorial and the remaining pieces into a reference manual describing the basic design.

Joint classification and sensitivity analysis without resource wasting

We should have a way to perform a cross-validated classification analysis with an embedded sensitivity analysis that works without training a classifier more than once per fold, while still allowing for assessment of null-distribution probabilities. Right now that is possible, but ugly and close to incomprehensible.

Once implemented, this tool should be used to address issue #19 by showing how to do that from within NiPyPE workflows.

convenient interface to provide per-class weighting in SVMs

At the moment, weights and weights_labels seem to have no effect for libsvm; in shogun's interface they are missing entirely.

We should add weights='auto' to balance classes automatically depending on the number of samples per class, avoiding manual specification.
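One common convention for such a weights='auto' mode (an assumption here, borrowed from the usual "balanced" heuristic, not an existing PyMVPA feature) weights each class inversely to its sample count:

```python
import numpy as np

def balanced_weights(labels):
    """Per-class weights: n_samples / (n_classes * count_per_class).

    Minority classes receive larger weights, so no manual tuning is needed.
    """
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts.astype(float))
    return {str(c): float(w) for c, w in zip(classes, weights)}

print(balanced_weights(['a', 'a', 'a', 'b']))  # 'b' gets 3x the weight of 'a'
```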
