For information on how to install PyMVPA please see doc/source/installation.rst .
Further information and access to binary packages is available from the project website at http://www.pymvpa.org .
MultiVariate Pattern Analysis in Python
Home Page: http://www.pymvpa.org
License: Other
Nevermind
root@nitish-laptop:~/PyMVPA# python2 setup.py build_ext
running build_ext
running build_src
building extension "mvpa.clfs.libsmlrc.smlrc" sources
building extension "mvpa.clfs.libsvmc._svmc" sources
building data_files sources
customize UnixCCompiler
customize UnixCCompiler using build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
building 'mvpa.clfs.libsmlrc.smlrc' extension
compiling C sources
C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC
compile options: '-I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/include/python2.6 -c'
gcc: mvpa/clfs/libsmlrc/smlr.c
mvpa/clfs/libsmlrc/smlr.c:10:20: error: Python.h: No such file or directory
mvpa/clfs/libsmlrc/smlr.c:13: warning: return type defaults to ‘int’
mvpa/clfs/libsmlrc/smlr.c: In function ‘DL_EXPORT’:
mvpa/clfs/libsmlrc/smlr.c:13: error: expected declaration specifiers before ‘stepwise_regression’
mvpa/clfs/libsmlrc/smlr.c:241: error: expected declaration specifiers before ‘PyMODINIT_FUNC’
mvpa/clfs/libsmlrc/smlr.c:12: error: parameter name omitted
mvpa/clfs/libsmlrc/smlr.c:244: error: expected ‘{’ at end of input
error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/include/python2.6 -c mvpa/clfs/libsmlrc/smlr.c -o build/temp.linux-i686-2.6/mvpa/clfs/libsmlrc/smlr.o" failed with exit status 1
While installing I got this error -- please help me.
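The root cause of the failure above is the missing Python.h header. A likely fix (an assumption -- this sketch presumes a Debian/Ubuntu system with Python 2.6, as the paths in the log suggest) is to install the Python development package and retry:

```shell
# Python.h ships with the interpreter's development package; without it
# any C extension (like mvpa.clfs.libsmlrc.smlrc) fails to compile.
sudo apt-get install python2.6-dev
# then retry the extension build
python2 setup.py build_ext
```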
Setting transformer=None, combiner=None resolved the issue in
http://lists.alioth.debian.org/pipermail/pkg-exppsy-pymvpa/2009q4/000806.html
per the discussion raised by Jo on the list
michael@meiner ~/hacking/pymvpa.dev (git)-[master] % MVPA_SEED=187138923 MVPA_TESTS_QUICK=no MVPA_TESTS_LOWMEM=yes make unittest-nonlabile
I: Running only non labile unittests. None of them should ever fail.
T: MVPA_SEED=187138923
T: Warning -- following test files were found but will not be tested: test_atlases.py
======================================================================
FAIL: test_null_dist_prob (mvpa.tests.test_transerror.ErrorsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/michael/hacking/pymvpa.dev/mvpa/tests/test_transerror.py", line 251, in test_null_dist_prob
num_perm)
AssertionError:
Single scenario lead to failures of unittest test_null_dist_prob:
on
l_clf=<LinSVM on SMLR(lm=1) non-0> :
9 != 10
BUT
michael@meiner ~/hacking/pymvpa.dev (git)-[master] % MVPA_SEED=187138923 MVPA_TESTS_QUICK=no MVPA_TESTS_LOWMEM=yes make ut-transerror
....
----------------------------------------------------------------------
Ran 10 tests in 17.069s
OK
This could be a problem with reusing this particular feature selection classifier. Or just that the actual error is not visible in the second run, because the effective seed for this test in the first run was different, of course. In any case, there is a bug somewhere.
Jörg Stadler reports that he found differences in the overlay image rendering between plot_lightbox() and FSLView. Tests were done with pymvpa 0.4.4.
Rationale:
If a dataset gets reverse-mapped by a mapper, the mapper stored in that dataset becomes invalid. Instead of stripping that mapper, we could keep it growing with copies of the reverse-operating mapper that have their behavior permanently reversed.
Whether that will scale far beyond a few candidate mappers is yet to be determined.
This implementation should not depend on other parts of PyMVPA -- ideally only on NumPy. This would be useful to allow software to support data communication with all of PyMVPA, while not having to depend on it.
deprecate perchunk and use chunks
probably even fixup/extend ZscoreMapper to follow PolyDetrendMapper, and then craft a factory function zscore
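The per-chunk z-scoring that such a factory function would wrap can be sketched independently of PyMVPA (zscore_per_chunk is a hypothetical name, not the proposed API):

```python
import numpy as np

def zscore_per_chunk(data, chunks):
    # Standardize each chunk independently: subtract the chunk mean and
    # divide by the chunk standard deviation, feature-wise.
    out = np.array(data, dtype=float)
    chunks = np.asarray(chunks)
    for c in np.unique(chunks):
        m = chunks == c
        out[m] = (out[m] - out[m].mean(axis=0)) / out[m].std(axis=0)
    return out
```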
In [7]: ds = load_tutorial_results('ds_haxby2001')
....
/home/yoh/proj/pymvpa/master/mvpa/base/hdf5.pyc in _hdf_dict_to_obj(hdf, memo, skip)
283 items = _hdf_list_to_obj(hdf, memo)
284 items = [i for i in items if not i[0] in skip]
--> 285 return dict(items)
286 else:
287 # legacy files had keys as group names
TypeError: unhashable type: 'numpy.ndarray'
Cause:
ipdb> print [i[0] for i in items]
[array('chunks',
dtype='|S6'), array('time_indices',
dtype='|S12'), array('runtype',
dtype='|S7'), array('targets',
dtype='|S7'), array('time_coords',
dtype='|S11')]
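The failure and a possible normalization can be sketched as follows (as_hashable is a hypothetical helper, not part of mvpa.base.hdf5; the sample keys mimic the ipdb output above):

```python
import numpy as np

# Keys recovered from the HDF5 file arrive as 0-d NumPy string arrays,
# which are unhashable -- so dict(items) raises TypeError.
raw_keys = [np.array(b'chunks'), np.array(b'targets')]
items = [(k, i) for i, k in enumerate(raw_keys)]

def as_hashable(key):
    # Unwrap a 0-d array into the native Python scalar it holds.
    if isinstance(key, np.ndarray):
        key = key.item()
    if isinstance(key, bytes):
        key = key.decode('ascii')
    return key

fixed = dict((as_hashable(k), v) for k, v in items)
print(fixed)  # {'chunks': 0, 'targets': 1}
```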
In some conditions it prepends a new (single-element) dimension, which causes FeatureSliceMapper to create arrays with shape
(nsamples x nfeatures_per_volume_prior_slicing x nfeatures_per_volume_after_slicing)
That is a side effect of having to switch from reverse1() to reverse() somewhere in the chain, and of also having to handle multi-dimensional features.
see title.
Right now I have to explicitly set 'chunks_attr=None' in zscore if I call it with a dataset that doesn't have chunks. Something similar happens with Dataset.summary() for a dataset that doesn't have targets -- although here even setting the attr to None doesn't help.
We should have a more gentle way of dealing with such datasets. However, I'm not sure what that way would be, hence not "fixing" it right away.
to keep the "power" equivalent across voxels especially on boundaries
Otherwise we always have **kwargs -- look into base class.
Haven't looked into it yet. Just to not forget...
was suggested by Andy a while ago
Reason: who knows what a mapper could be used for; postproc feels more informative.
That is more appropriate for regression analyses and supervised algorithms in general.
so we should use a tuple when order is important, e.g. if leaf-classifiers need per-label specification - e.g. weights
became apparent while using the zscore functionality with
param_est=('targets',['*'])
where there were no '*' targets
Wishlist from RP
Use case: For all ROIs do X
Expected change: a switch for vstack vs hstack should do it.
That will make RepeatedMeasure more similar to a Searchlight -- maybe it could become its baseclass.
do not forget to add support for gradients
Feature Suggestion:
Simple.
Store the value in voxels of applied mask as feature attribute.
might be a poor man's solution for determining geodesic distance
Currently, it will cause problems if a Partitioner is used in conjunction with a Balancer and a Splitter. If some elements are marked as excluded (i.e. '0'), the splitter will nevertheless return this portion. If that happens in TransferMeasure, it might yield unintended behavior.
Ability to read surface data with topographic information:
Docstring:
...
It takes a float in, so it's easy to assume it's a radius in mm or something
Parameters
----------
radius : float
All features within this radius around the center will be part
of a sphere.
atm requires 0,1 -- use remapping available in 0.5 branch
Requested by Tara Gilliam
just enable testing of GPR in test_sensitivities to trigger an example
The code basically screams for it. Right now it produces a list of results that gets collapsed into a Dataset. Each iteration should be independent of all others -- we'd only need to have copies of the measures, and figure out whether a predictable order of iterations is necessary.
With parallelization in RepeatedMeasure, it becomes even more similar to Searchlight...
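A minimal sketch of that idea, assuming measures are plain callables and that a thread pool is acceptable (repeated_measure_parallel is a hypothetical name, not PyMVPA API):

```python
import copy
from concurrent.futures import ThreadPoolExecutor

def repeated_measure_parallel(measure, datasets, nproc=2):
    # Give every iteration its own copy of the measure so no state is
    # shared between them, satisfying the independence requirement.
    measures = [copy.deepcopy(measure) for _ in datasets]
    with ThreadPoolExecutor(max_workers=nproc) as pool:
        # executor.map preserves the input order, giving a predictable
        # order of results even though execution is concurrent.
        results = list(pool.map(lambda job: job[0](job[1]),
                                zip(measures, datasets)))
    return results
```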
It would be really awesome if you awesome dudes could create a way to easily generate a matrix displaying the accuracies (or errors) of all possible two-way classifications when one has a multi-class dataset.
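A sketch of such a matrix builder, assuming the caller supplies a train_and_score callback (a hypothetical hook standing in for an actual PyMVPA cross-validation) that returns the accuracy for one class pair:

```python
import itertools
import numpy as np

def pairwise_accuracy_matrix(train_and_score, classes):
    # Fill an n_classes x n_classes matrix with two-way classification
    # accuracies; the diagonal stays NaN since a one-class problem is
    # undefined, and the matrix is symmetric by construction.
    n = len(classes)
    acc = np.full((n, n), np.nan)
    for i, j in itertools.combinations(range(n), 2):
        acc[i, j] = acc[j, i] = train_and_score(classes[i], classes[j])
    return acc
```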
I used cached kernel with searchlight like this:
kernel = LinearSGKernel(normalizer_cls=False)
kernel = CachedKernel(kernel)
clf = sg.SVM(svm_impl='libsvm', C=-1.0, kernel=kernel)
cvterr = CrossValidatedTransferError(TransferError(clf), splitter=NFoldSplitter())
sl = sphere_searchlight(cvterr, radius = 3, postproc = mean_sample())
When I ran it on a dataset, it gave results that differ from those I get when I don't use the kernel argument in clf creation.
The documentation should be restructured to provide an easier entrypoint into PyMVPA. The current examples should be transformed into an incremental tutorial and the remaining pieces into a reference manual describing the basic design.
Do we need slice-timing?
When to detrend and when not?
We should have a way to perform a cross-validated classification analysis with an embedded sensitivity analysis that works without training a classifier more than once per fold, while still allowing for assessment of Null distribution probabilities. Right now that is possible, but ugly and close to incomprehensible.
Once implemented this tool should be used to address issue #19, by showing how to do that from within NiPyPE workflows.
atm, weights and weights_labels seem to have no effect for libsvm; for shogun's interface they are missing entirely
we should get weights='auto' to balance automatically depending on the number of samples per class, to avoid manual specification
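What weights='auto' would compute can be sketched as inverse-frequency weighting (balanced_class_weights is a hypothetical helper, not an existing PyMVPA or libsvm function):

```python
import numpy as np

def balanced_class_weights(targets):
    # Weight each class inversely to its frequency, so under- and
    # over-represented classes contribute equally to the loss; a class
    # at exactly average frequency gets weight 1.0.
    classes, counts = np.unique(targets, return_counts=True)
    weights = len(targets) / (len(classes) * counts.astype(float))
    return dict(zip(classes, weights))
```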