Comments (6)
Can you check that the datasets folder isn't empty? If that's the case there are three options:
- You could download the datasets repository using: 'git submodule update --init'. Sometimes this doesn't work, because Bitbucket doesn't handle huge datasets well.
- Download the dataset repository as zip: https://bitbucket.org/zoqbits/benchmark-data/get/1f7770e39c61.zip
- Manually download the datasets you need: https://bitbucket.org/zoqbits/benchmark-data/src
from benchmarks.
Oh, yes, it was empty. I didn't even realize it. I downloaded the datasets, ran the benchmark again, and now I no longer get the "No conversion possible" error. However, I still get the following error on all datasets that are used for benchmarking LSH:
[FATAL] Could not execute command: ['mlpack_lsh', '-r', 'datasets/wine.csv', '-v', '-k', '3', '-s', '42']
Any ideas how to resolve it?
Ah, I guess since mlpack changed the names from lsh to mlpack_lsh, the auto-detection doesn't work anymore. I'll go and fix this in the next few days. However, you can manually specify the mlpack binary path using the MLPACK_BIN parameter:
make MLPACK_BIN=/usr/local/bin/ run
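If you are unsure which binary names your mlpack installation actually provides, a quick check with Python's standard library can tell you which of the two names from this thread are on your PATH. This is just a sketch; the candidate names are taken from the thread and may need adjusting for your setup:

```python
import shutil

# Candidate binary names mentioned in this thread: the old unprefixed
# name and the newer mlpack_-prefixed one.
for name in ("lsh", "mlpack_lsh"):
    path = shutil.which(name)
    print(name, "->", path if path else "not found")
```

Whichever name resolves tells you both the prefix the benchmark should use and the directory to pass via MLPACK_BIN.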
After running the command make MLPACK_BIN=/usr/local/bin/ run BLOCK=mlpack METHODBLOCK=LSH, I get a similar error:
[FATAL] Could not execute command: ['/usr/local/bin/mlpack_lsh', '-r', 'datasets/wine.csv', '-v', '-k', '3', '-s', '42']
Now, I've run mlpack_lsh manually with the same parameters as the benchmark script: mlpack_lsh -r datasets/wine.csv -d distances.csv -n neighbours.csv -v -k 5 -s 42
. It seems that I am failing to provide the query file:
[DEBUG] Compiled with debugging symbols.
[FATAL] Both --query_file and --k must be specified if search is to be done!
terminate called after throwing an instance of 'std::runtime_error'
what(): fatal error; see Log::Fatal output
Aborted (core dumped)
So if I create a query file with at least one query point, it then executes without trouble:
mlpack_lsh -r datasets/wine.csv -d distances.csv -n neighbours.csv -q datasets/wines_query_point.csv -v -k 5 -s 42
[DEBUG] Compiled with debugging symbols.
[INFO ] Using LSH with 10 projections (K) and 30 tables (L) with default hash width.
[INFO ] Loading 'datasets/wine.csv' as CSV data. Size is 13 x 178.
[INFO ] Loaded reference data from 'datasets/wine.csv' (13 x 178).
[INFO ] Hash width chosen as: 19.4285
[INFO ] Final hash table size: (5051 x 4)
[INFO ] Computing 5 distance approximate nearest neighbors.
[INFO ] Loading 'datasets/wines_query_point.csv' as CSV data. Size is 13 x 1.
[INFO ] Loaded query data from 'datasets/wines_query_point.csv' (13 x 1).
[INFO ] 1 distinct indices returned on average.
[INFO ] Neighbors computed.
[INFO ] Saving CSV data to 'distances.csv'.
[INFO ] Saving CSV data to 'neighbours.csv'.
[INFO ]
[INFO ] Execution parameters:
[INFO ] bucket_size: 500
[INFO ] distances_file: distances.csv
[INFO ] hash_width: 0
[INFO ] help: false
[INFO ] info: ""
[INFO ] input_model_file: ""
[INFO ] k: 5
[INFO ] neighbors_file: neighbours.csv
[INFO ] output_model_file: ""
[INFO ] projections: 10
[INFO ] query_file: datasets/wines_query_point.csv
[INFO ] reference_file: datasets/wine.csv
[INFO ] second_hash_size: 99901
[INFO ] seed: 42
[INFO ] tables: 30
[INFO ] verbose: true
[INFO ] version: false
[INFO ]
[INFO ] Program timers:
[INFO ] computing_neighbors: 0.000079s
[INFO ] hash_building: 0.143197s
[INFO ] loading_data: 0.001300s
[INFO ] saving_data: 0.000163s
[INFO ] total_time: 0.146325s
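For anyone reproducing this, a one-point query file like the wines_query_point.csv above can be made by copying a single row out of the reference CSV, since each row of the CSV is one data point. The following is a self-contained sketch with made-up file names and values (the actual query file in this thread was created by hand):

```python
import csv

# Write a tiny stand-in reference CSV, then copy its first row out as a
# one-point query file. With the real data, src would be
# datasets/wine.csv and dst the query file passed via -q.
rows = [[14.23, 1.71, 2.43], [13.20, 1.78, 2.14]]
with open("reference.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

with open("reference.csv") as src:
    first = src.readline()  # first data point, as a CSV row
with open("query_point.csv", "w") as dst:
    dst.write(first)
```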
So the question is: does the benchmark script provide a query point file for mlpack_lsh? It seems it doesn't; could this be the issue?
Sorry for the slow response. You are right: in the latest version you have to specify a query set, which shouldn't be the case:
"You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set."
I'll fix that in mlpack in the next few days. In the meantime, you can modify the mlpack lsh.py script.
Instead of using:
cmd = shlex.split(self.path + "mlpack_lsh -r " + self.dataset + " -v " + options)
we could write:
cmd = shlex.split(self.path + "mlpack_lsh -r " + self.dataset + " -q " + self.dataset + " -v " + options)
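For reference, shlex.split turns the assembled command string into the argument list that the [FATAL] messages above show. A minimal self-contained sketch of the fixed line, with path, dataset, and options as stand-ins for the corresponding instance attributes in lsh.py:

```python
import shlex

# Stand-ins for the attributes used in the benchmark script.
path = "/usr/local/bin/"
dataset = "datasets/wine.csv"
options = "-k 3 -s 42"

# With the -q fix, the reference set doubles as the query set.
cmd = shlex.split(path + "mlpack_lsh -r " + dataset
                  + " -q " + dataset + " -v " + options)
print(cmd)
# → ['/usr/local/bin/mlpack_lsh', '-r', 'datasets/wine.csv',
#    '-q', 'datasets/wine.csv', '-v', '-k', '3', '-s', '42']
```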
Fixed upstream in mlpack/mlpack@5bc514c#diff-1d8fe56c303ed01b74921339d84b2d3c