fvisin / dataset_loaders
A collection of dataset loaders
License: GNU General Public License v3.0
I got the following error in parallel_loader.py:

site-packages/dataset_loaders-1.0.0-py2.7.egg/dataset_loaders/images/cityscapes.py", line 188, in __init__
    super(CityscapesDataset, self).__init__(*args, **kwargs)
/python2.7/site-packages/dataset_loaders-1.0.0-py2.7.egg/dataset_loaders/parallel_loader.py", line 356, in __init__
    raise RuntimeError('The name list cannot be empty')
RuntimeError: The name list cannot be empty
It would be great if adaptors for popular frameworks were provided, so that all the datasets here could be accessed through them. Instead of wrapping this data loader myself, I could just call a function that generates a data loader for a given dataset in a given framework. For example, these data loaders:
MXNet: http://mxnet.io/api/python/io.html
PyTorch: http://pytorch.org/docs/torchvision/datasets.html
TensorFlow: https://www.tensorflow.org/extend/new_data_formats
Fuel: https://fuel.readthedocs.io/en/latest/h5py_dataset.html
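As a starting point for the PyTorch case, an adaptor only needs to expose the `__len__`/`__getitem__` protocol that `torch.utils.data.Dataset` expects. A minimal sketch, assuming the wrapped loader is anything indexable that yields `(image, mask)` pairs; all names here are hypothetical and nothing below is part of dataset_loaders:

```python
class TorchStyleDataset(object):
    """Adaptor exposing the __len__/__getitem__ protocol that
    torch.utils.data.Dataset expects (hypothetical names)."""

    def __init__(self, loader):
        self.loader = loader

    def __len__(self):
        return len(self.loader)

    def __getitem__(self, index):
        # A real adaptor would also convert arrays to torch tensors here.
        return self.loader[index]


# Stand-in for a dataset_loaders dataset: two (image, mask) pairs.
pairs = [("img0", "mask0"), ("img1", "mask1")]
ds = TorchStyleDataset(pairs)
```

Any object shaped like this can then be handed to a framework's batching machinery directly.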
get_n_classes() and get_void_labels() cannot be found. Kindly suggest a fix.
The data augmentation pipeline could be improved by relying on an external tool such as https://github.com/aleju/imgaug, which provides much more comprehensive support for transformations and noise injection. Unfortunately I don't have time to work on this myself at the moment, but if someone wants to volunteer I can provide some support on how to do it.
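The composability imgaug offers (e.g. its `Sequential` and `Sometimes` augmenters) can be sketched with plain callables. This toy pipeline uses hypothetical names and stdlib only, not imgaug's actual API, and treats an "image" as a list of rows:

```python
import random


def hflip(image):
    # Horizontally flip a row-major "image" (a list of rows).
    return [row[::-1] for row in image]


def maybe(transform, p, rng):
    # Apply `transform` with probability p, in the spirit of imgaug's Sometimes.
    def apply(image):
        return transform(image) if rng.random() < p else image
    return apply


def sequential(transforms):
    # Chain transforms left to right, in the spirit of imgaug's Sequential.
    def apply(image):
        for t in transforms:
            image = t(image)
        return image
    return apply


rng = random.Random(0)
pipeline = sequential([maybe(hflip, 0.5, rng)])
out = pipeline([[1, 2, 3]])  # flipped or not, depending on the draw
```

The point of delegating to imgaug would be getting dozens of such transforms, plus noise injection, for free.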
Hi, I am new to computer vision and deep learning. I want to build a project that consists of "segmenting the moving object from the background in video with deep learning methods".
For that I used the DAVIS and CDnet datasets. I downloaded the data, but I have no idea how to organize it into (x_train, y_train, x_test, y_test) for training and testing my own system, mainly based on Python (Keras, CNNs, ...).
I did not understand the steps.
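One way to get from raw frame/mask filename lists to the (x_train, y_train, x_test, y_test) split asked about is a shuffled split that keeps each frame paired with its mask. A minimal sketch, with made-up filenames standing in for DAVIS/CDnet frames and ground truth:

```python
import random


def split_pairs(frames, masks, test_fraction=0.2, seed=0):
    """Shuffle matched frame/mask filename lists together, then split."""
    pairs = list(zip(frames, masks))
    random.Random(seed).shuffle(pairs)
    n_test = int(len(pairs) * test_fraction)
    test, train = pairs[:n_test], pairs[n_test:]
    x_train = [f for f, m in train]
    y_train = [m for f, m in train]
    x_test = [f for f, m in test]
    y_test = [m for f, m in test]
    return x_train, y_train, x_test, y_test


# Hypothetical filenames; in practice these come from listing the dataset dirs.
frames = ["frame%02d.jpg" % i for i in range(10)]
masks = ["mask%02d.png" % i for i in range(10)]
x_train, y_train, x_test, y_test = split_pairs(frames, masks)
```

Zipping before shuffling is the important part: it guarantees frame N and mask N stay matched after the split. The filename lists would then be loaded into arrays for Keras.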
I cloned this repository into my '/home/mountain/GitHub/' directory. Then I used the commands below to install dataset_loaders:
(biomediclasagne) mountain@Mountain:~$ cd GitHub/
(biomediclasagne) mountain@Mountain:~/GitHub$ ls
dataset_loaders
DSB3Tutorial
FC-DenseNet
ipywidgets
Medical-Image-Analysis-IPython-Tutorials
Medical-Image-Analysis-IPython-Tutorials-master.zip
models
SimpleITK-Notebooks
SimpleITKTutorialMICCAI2015
TensorFlow-Examples
tensorflow_notes
(biomediclasagne) mountain@Mountain:~/GitHub$ pip install --user -e dataset_loaders
I did not get any error, but when I try to import something from dataset_loaders, I get the following error:
In [1]: from dataset_loaders.images.camvid import CamvidDataset
fatal: Not a git repository (or any of the parent directories): .git
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
<ipython-input-1-d1deaab6f518> in <module>()
----> 1 from dataset_loaders.images.camvid import CamvidDataset
/home/mountain/GitHub/dataset_loaders/dataset_loaders/__init__.pyc in <module>()
15
16 __version__ = check_output('git rev-parse HEAD',
---> 17 shell=True).strip().decode('ascii')
/home/mountain/anaconda3/envs/biomediclasagne/lib/python2.7/subprocess.pyc in check_output(*popenargs, **kwargs)
217 if cmd is None:
218 cmd = popenargs[0]
--> 219 raise CalledProcessError(retcode, cmd, output=output)
220 return output
221
CalledProcessError: Command 'git rev-parse HEAD' returned non-zero exit status 128
It would be helpful to add a flag to preload the dataset in memory.
Follow implementation in #3
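One possible shape for such a flag, sketched with a toy store class; all names are hypothetical and this is not the dataset_loaders implementation:

```python
class ImageStore(object):
    """Toy stand-in for a dataset's file-loading layer."""

    def __init__(self, filenames, preload=False):
        self.filenames = list(filenames)
        self._cache = {}
        if preload:
            # Read everything once up front so later accesses skip the disk.
            for name in self.filenames:
                self._cache[name] = self._read(name)

    def _read(self, name):
        # Placeholder for real image decoding.
        return "pixels-of-" + name

    def load(self, name):
        if name in self._cache:
            return self._cache[name]
        return self._read(name)


store = ImageStore(["a.png", "b.png"], preload=True)
```

With `preload=False` the behaviour is unchanged, so the flag is backwards compatible.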
I am opening this issue as a reminder that it would be very useful to collect some statistic on the usage of the queues and of the threads, to verify if they can be optimized.
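A lightweight way to collect such statistics is to record the queue depth after every put/get; consistently full queues suggest too many producer threads, consistently empty ones too few. A stdlib sketch (not code from parallel_loader.py):

```python
import queue


class MonitoredQueue(queue.Queue):
    """Queue that records its depth after every put/get, so the usage
    pattern can be inspected afterwards."""

    def __init__(self, maxsize=0):
        super().__init__(maxsize)
        self.depth_samples = []

    def put(self, item, block=True, timeout=None):
        super().put(item, block, timeout)
        self.depth_samples.append(self.qsize())

    def get(self, block=True, timeout=None):
        item = super().get(block, timeout)
        self.depth_samples.append(self.qsize())
        return item


q = MonitoredQueue(maxsize=8)
for i in range(5):
    q.put(i)
while not q.empty():
    q.get()
avg_depth = sum(q.depth_samples) / len(q.depth_samples)
```

Under real multi-threaded load `qsize()` is only approximate, but it is good enough for a usage histogram.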
Hi, @fvisin ,
Thanks for releasing this useful package. I followed the install instructions. However, when I import dataset_loaders, I get the following error in parallel_loader.py. (Currently, I only need to load the CamVid dataset.)
(tf_1.0) root@milton-All-Series:/data/code/dataset_loaders/dataset_loaders# python
Python 3.5.2 | packaged by conda-forge | (default, Jan 19 2017, 15:28:33)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dataset_loaders
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/data/code/dataset_loaders/dataset_loaders/__init__.py", line 3, in <module>
from images.camvid import CamvidDataset # noqa
File "/data/code/dataset_loaders/dataset_loaders/images/camvid.py", line 5, in <module>
from dataset_loaders.parallel_loader import ThreadedDataset
File "/data/code/dataset_loaders/dataset_loaders/parallel_loader.py", line 519
raise data_batch[0], data_batch[1], data_batch[2]
^
SyntaxError: invalid syntax
Here is the setting in my config.ini:
[camvid]
shared_path = /data/code/SegNet-Tutorial/
The CamVid dataset is located in the following folder structure (screenshot):
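For the record, the `SyntaxError` above comes from the Python 2 three-argument statement `raise type, value, traceback` at parallel_loader.py line 519, which Python 3 rejects at parse time (the reporter is running Python 3.5.2). The Python 3 spelling is `raise value.with_traceback(tb)`, or `six.reraise(*exc_info)` for code that must support both. A self-contained sketch of the translation, assuming `data_batch` holds a `sys.exc_info()`-style tuple:

```python
import sys


def reraise(exc_type, exc_value, exc_tb):
    # Python 3 spelling of the Python 2 statement
    # ``raise exc_type, exc_value, exc_tb``.
    if exc_value is None:
        exc_value = exc_type()
    raise exc_value.with_traceback(exc_tb)


try:
    try:
        raise ValueError("boom")
    except ValueError:
        reraise(*sys.exc_info())
except ValueError as err:
    caught = str(err)
```

Until the module is ported, the practical workaround is to run dataset_loaders under Python 2.7, which its egg metadata targets.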