angleto / liblinear
This project is forked from cjlin1/liblinear.
License: BSD 3-Clause "New" or "Revised" License
From @sabu3 on March 4, 2016 9:27
When I use L1R_LR, the discrimination ratio changes depending on the normalization method.
I tried two normalization methods.
I read through the program, but I could not find the reason.
Copied from original issue: cjlin1#19
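The report does not say which two normalization schemes were compared, so purely as a hypothetical illustration (the data and scalers below are mine, not from the issue): feature-wise and instance-wise scaling feed quite different values to the solver, and an L1-regularized logistic regression (L1R_LR) is sensitive to feature scale, so it can select different features and reach a different accuracy under each scheme.
# Hypothetical illustration of two normalization schemes; not taken from the report.
import numpy as np
X = np.array([[1.0, 200.0],
              [2.0, 100.0],
              [3.0, 400.0]])
# (1) feature-wise min-max scaling: each column is mapped to [0, 1]
X_minmax = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
# (2) instance-wise L2 normalization: each row is scaled to unit length
X_l2 = X / np.linalg.norm(X, axis=1, keepdims=True)
print(X_minmax)
print(X_l2)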
From @enmengyi on July 30, 2016 13:50
In the Ubuntu terminal, I typed:
./train -s 2 -v 5 -e 0.001 -q train1.txt model1
Where "train1.txt" is my train sample file .
The result is:
Cross Validation Accuracy = 91.5398%
But I didn't find any file named "model1" in the current directory. What's the matter?
Copied from original issue: cjlin1#25
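For context, "-v 5" tells LIBLINEAR to run 5-fold cross-validation only: no model is trained on the full data, no model file is written, and the trailing "model1" argument is ignored. A minimal sketch with the bundled Python interface (assuming liblinearutil from the python/ directory is importable; the file names are the ones from the report):
from liblinearutil import svm_read_problem, train, save_model
y, x = svm_read_problem('train1.txt')
# '-v 5' performs cross-validation only and returns an accuracy, not a model
cv_accuracy = train(y, x, '-s 2 -v 5 -e 0.001 -q')
# train again without '-v' to obtain a model that can be saved to disk
model = train(y, x, '-s 2 -e 0.001 -q')
save_model('model1', model)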
From @johnny5550822 on May 16, 2017 18:11
Here is the issue; how can I solve it?
Invalid MEX-file
'/home/kcho/Dropbox/ML/mii-stroke-deeplearn/013017_stroke_tissue_fate/liblinear-multicore/matlab/train.mexa64':
dlopen: cannot load any more object with static TLS.
Copied from original issue: cjlin1#33
From @CooledCoffee on March 18, 2015 3:32
1. Train and predict with LIBSVM:
svm-train -s 3 -t 0 -c 1 -p 0.1 -e 0.001 -h 0 eunite2001 model.1 && svm-predict eunite2001.t model.1 prediction.1
2. Train and predict with LIBLINEAR (solvers -s 11, 12, 13):
liblinear-train -s 11 -c 1 -p 0.1 -e 0.001 eunite2001 model.11 && liblinear-predict eunite2001.t model.11 prediction.11
liblinear-train -s 12 -c 1 -p 0.1 -e 0.001 eunite2001 model.12 && liblinear-predict eunite2001.t model.12 prediction.12
liblinear-train -s 13 -c 1 -p 0.1 -e 0.001 eunite2001 model.13 && liblinear-predict eunite2001.t model.13 prediction.13
3. The results are here:
libsvm | liblinear -s 11 | liblinear -s 12 | liblinear -s 13 |
---|---|---|---|
754.219 | 711.818 | 714.293 | 655.209 |
735.951 | 695.675 | 703.196 | 651.262 |
745.716 | 606.048 | 601.496 | 628.192 |
756.885 | 721.134 | 721.481 | 652.914 |
758.048 | 704.657 | 705.966 | 644.363 |
758.296 | 703.099 | 703.878 | 644.147 |
756.88 | 680.706 | 688.226 | 629.164 |
753.174 | 681.003 | 682.531 | 631.114 |
733.147 | 666.063 | 668.37 | 617.042 |
743.909 | 606.234 | 599.665 | 605.601 |
... | ... | ... | ... |
Copied from original issue: cjlin1#10
From @seanv507 on March 26, 2017 18:34
Hi, I would like to work on merging the sample weights version. Can you provide some guidance in terms of your own requirements?
My final goal is to have a sample-weights version in R (based on https://cran.r-project.org/web/packages/LiblineaR/index.html).
I have done the majority of the work,
but there are a few sticking points:
a) adding sample weights 'breaks' the interface for, e.g., the MATLAB and Python versions
b) should one change the code to always use sample weights, or is the added computational cost too great?
c) currently I have merged the C++ code using conditional compilation because of (b); however, this does not work for Python and R (no conditional compilation), so it raises the worry of Python/R code calling the library built with the wrong compilation options.
eg I could
Copied from original issue: cjlin1#32
From @larsmans on August 1, 2014 9:23
This plugs a lot of potential memory leaks: if, in
double *p = new double[n];
double *q = new double[n];
the second allocation fails, the result of the first is not automatically cleaned up. With vectors, the cleanup is automatic.
Copied from original issue: cjlin1/pull/4
From @Mensen on January 10, 2016 14:13
I'm using Matlab 2015b. My compiler seems to be working properly, as I've compiled other .c files and the make
command returns without errors. I don't remember experiencing these problems on my last computer setup, which used Matlab 2015a.
Thanks for your help!
Invalid MEX-file
'/home/mensen/matlab_toolboxes/liblinear-multicore-2.1-2/matlab/train.mexa64':
dlopen: cannot load any more object with static TLS
Error in mvpa_train>classif (line 39)
model = train(Y, sparse(double(X)), ['-s '
type ' -q -c ', num2str(best_lambda)]);
Copied from original issue: cjlin1#17
From @jm-huang on April 19, 2016 9:40
I am a little confused about using this package for multi-class classification. Can anyone tell me how to do it? Thanks.
What I have tried:
train_labels=[[1,2], [2], [3]]
train_datas = [[1,1,0], [1,2,2], [1,1,1]]
prob = problem(train_labels, train_datas)
param = parameter('-s 0')
model = train(prob, param)
but it raises an error:
Traceback (most recent call last):
File "C:\Users\Jiaming\Dropbox\Internship in ADSC\DeepWalk\experiments\classifier.py", line 69, in process
prob = problem(train_labels, train_datas)
File "C:\Users\Jiaming\Anaconda2\lib\site-packages\liblinear-210-py2.7.egg\liblinear\liblinear.py", line 107, in init
for i, yi in enumerate(y): self.y[i] = y[i]
TypeError: a float is required
Copied from original issue: cjlin1#21
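The traceback appears because problem() expects a flat list with one numeric label per instance, while train_labels here is a list of lists. A minimal single-label, multi-class sketch with the bundled Python interface (assuming liblinearutil is importable; instances that truly carry several labels, such as [1, 2], would need a separate multi-label strategy, e.g. one binary problem per label, which LIBLINEAR does not provide out of the box):
from liblinearutil import problem, parameter, train, predict
# one numeric label per training instance (multi-class, not multi-label)
train_labels = [1, 2, 3]
train_datas = [[1, 1, 0], [1, 2, 2], [1, 1, 1]]
prob = problem(train_labels, train_datas)
param = parameter('-s 0')
model = train(prob, param)
# predict on the same data as a smoke test
p_labels, p_acc, p_vals = predict(train_labels, train_datas, model)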
From @MarvinT on September 26, 2016 19:5
When using scikit-learn, LogisticRegression never returns when fitting nearly degenerate data.
scikit-learn passed the blame on to liblinear.
import sklearn.linear_model
import numpy as np
model = sklearn.linear_model.LogisticRegression()
num_pts = 15
x = np.zeros((num_pts*2, 2))
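# nearly degenerate data: every row is zero except row 3, which holds an extremely small value (~1e-208)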
x[3] = 3.7491010398553741e-208
y = np.append(np.zeros(num_pts), np.ones(num_pts))
model.fit(x, y)
Expected result: return or throw an error.
Actual result: never returns.
Linux-2.6.32-573.18.1.el6.x86_64-x86_64-with-redhat-6.7-Carbon
('Python', '2.7.12 |Anaconda 2.0.1 (64-bit)| (default, Jul 2 2016, 17:42:40) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]')
('NumPy', '1.11.0')
('SciPy', '0.17.0')
('Scikit-Learn', '0.17.1')
Copied from original issue: cjlin1#27
From @cheeyeo on October 15, 2014 14:26
I'm running liblinear on text classification using -s 0 and -s 6 in order to get probability estimates for a multi-class classification task.
I read through the guides, sites, and documentation, tried feature scaling, etc., but the accuracy of each classifier is always low, e.g. 14%.
How do I further improve the accuracy of the classifier?
Any help would be most useful.
Copied from original issue: cjlin1#7
From @JordanCheney on May 25, 2016 18:59
Hello,
I am using liblinear to do an SVM classification and I am seeing output during training that I don't understand. Specifically,
...
iter 259 act 2.666e-02 pre 2.666e-02 delta 1.443e-03 f 8.453e+03 |g| 3.771e+01 CG 2
cg reaches trust region boundary
iter 260 act 3.524e-02 pre 3.522e-02 delta 1.451e-03 f 8.453e+03 |g| 7.068e+01 CG 3
cg reaches trust region boundary
iter 261 act 2.766e-02 pre 3.918e-02 delta 1.143e-03 f 8.453e+03 |g| 5.879e+01 CG 3
cg reaches trust region boundary
iter 262 act 3.855e-02 pre 3.855e-02 delta 1.299e-03 f 8.453e+03 |g| 1.061e+02 CG 3
cg reaches trust region boundary
iter 263 act 2.558e-02 pre 2.558e-02 delta 1.316e-03 f 8.453e+03 |g| 4.086e+01 CG 2
cg reaches trust region boundary
iter 264 act 3.885e-02 pre 3.885e-02 delta 1.442e-03 f 8.453e+03 |g| 1.712e+02 CG 3
...
I haven't been able to find documentation anywhere for what these values mean. Is it bad that the "trust boundary" is reached? Does it mean training isn't working? In the first few iterations the trust boundary is not always reached but later in training it seems to be. Is there a resource anywhere that can help me understand?
Thanks!
Jordan
Copied from original issue: cjlin1#23
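For orientation (a general reading of the output a trust-region Newton solver such as LIBLINEAR's TRON produces, not official documentation): f is the current objective value, |g| the gradient norm, act the actual decrease achieved by the step, pre the decrease predicted by the local quadratic model, delta the trust-region radius, and CG the number of conjugate-gradient iterations. "cg reaches trust region boundary" only means the CG step was truncated at the radius delta, which is normal, especially late in training when delta is small. A minimal sketch of the usual acceptance and radius-update logic (illustrative pseudocode with made-up constants, not a transcription of tron.cpp):
def trust_region_update(f_old, f_new, pre, delta,
                        eta0=1e-4, eta1=0.25, eta2=0.75,
                        shrink=0.25, grow=2.0):
    """Decide whether to accept a Newton/CG step and how to resize delta."""
    act = f_old - f_new      # 'act' in the log: actual reduction of the objective
    ratio = act / pre        # agreement between the model ('pre') and reality
    if ratio < eta1:
        delta *= shrink      # model was too optimistic: shrink the trust region
    elif ratio > eta2:
        delta *= grow        # model was accurate: allow a larger step next time
    accepted = ratio > eta0  # the step is kept only if it really reduced f
    return accepted, delta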
From @itgandhi on November 3, 2017 11:16
I am a computer science student from India.
I am used to playing with the SVM implementation of liblinear from the sklearn library in Python.
But recently I started converting my code from Python to C++ and used LIBSVM's C_SVC; it works perfectly for me, giving me above 97% accuracy.
But my data set is very large and training is very slow with LIBSVM, so I moved to LIBLINEAR to obtain multi-core performance for training. And it is creating an even bigger problem for me: I am getting an accuracy of only around 15%.
DATASET:
250,000 images of 7 different classes
dimension 128 x 128 px
HOG features computed for all images; the length of one feature vector is 1296
X* = 250000 x 1296
Y = 250000
The whole data set is normalized to the 0-1 range.
I am not using the command-line interface of LIBLINEAR because the training file gets very big (several GB).
I am including liblinear directly and have performed all the necessary steps in order to use its classes and functions.
Now I have to classify all images into 7 different classes.
I am using param.s = 2 and param.e = 0.0001; I don't need to set weights for the different classes,
and I perform 70-fold cross-validation on the 250,000 images to find the value of C.
It gives me a value of C of about 4.76837e-07 and a CV accuracy of 16.3265%.
What should I do?
If I made any mistake, please point me in the right direction. Thank you.
Copied from original issue: cjlin1#39
From @larsmans on August 1, 2014 9:21
Here's a patch from the scikit-learn fork of Liblinear. It doesn't do much, but it shuts up the compiler.
Copied from original issue: cjlin1/pull/3
From @trivedigaurav on July 16, 2014 22:53
Build fails on Macs.
Copied from original issue: cjlin1/pull/2
From @TheEdoardo93 on October 20, 2017 9:9
Hi, I'm a computer science student based in Milan.
I want to know if I can use this library (especially with the Python interface/wrapper) for a ranking task. I want to learn a ranking function in the learning-to-rank style.
Is it possible?
Thanks for the answer!
Copied from original issue: cjlin1#38
From @myeung2 on January 24, 2017 20:3
Hi, thanks for a great tool. With regard to this compilation warning:
linear.cpp: In function `void train_one(const problem*, const parameter*, double*, double, double)':
linear.cpp:1365: warning: 'loss_old' might be used uninitialized in this function
Can I just initialize "loss_old" to 0? Thanks.
Copied from original issue: cjlin1#31
From @CharlesMei on July 25, 2015 19:1
I want to use the Python interface with 64-bit Python on Windows, but the liblinear.dll in the /windows directory seems to be a 32-bit one. And I cannot generate a 64-bit DLL with "nmake -f Makefile.win clean all"; this only generates exe files in the /windows directory. So how can I do that?
Copied from original issue: cjlin1#12
From @simsong on June 3, 2014 17:48
This pull request adds support for OpenMP, as documented on the developer's website. It also adds initializations for several variables that are reported as possibly uninitialized by some compilers.
Copied from original issue: cjlin1/pull/1