kuleshov / cdhit
Automatically exported from code.google.com/p/cdhit
License: GNU General Public License v2.0
What steps will reproduce the problem?
1. cd-hit-est -i all-contigs.fa -o all-contigs.fa.cdhit.out -M 2000 -c 0.99
What is the expected output? What do you see instead?
It turns out fine on at least half of our datasets, but a couple of input datasets still produce this kind of error.
What version of the product are you using? On what operating system?
CD-HIT 4.5.8 on Linux 2.6.18-274.12.1.el5 #1 SMP x86_64 GNU/Linux
Please provide any additional information below.
cd-hit-est -i all-contigs.fa -o all-contigs.fa.cdhit.out -M 2000 -c 0.99
================================================================
Program: CD-HIT, V4.5.8 (+OpenMP), Apr 06 2012, 00:18:06
Command: cd-hit-est -i all-contigs.fa -o
all-contigs.fa.cdhit.out -M 2000 -c 0.99
Started: Tue Apr 24 12:08:35 2012
================================================================
Output
----------------------------------------------------------------
total seq: 57483
longest and shortest : 60345 and 200
Total letters: 52094767
Sequences have been sorted
Approximated minimal memory consumption:
Sequence : 58M
Buffer : 1 X 18446744071796M = 18446744071796M
Table : 1 X 17M = 17M
Miscellaneous : 4M
Total : 18446744071877M
Fatal Error:
not enough memory, please set -M option greater than 18446744071927
Program halted !!
Original issue reported on code.google.com by [email protected]
on 24 Apr 2012 at 4:21
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Please use labels and text to provide additional information.
Original issue reported on code.google.com by [email protected]
on 14 Sep 2012 at 11:54
What steps will reproduce the problem?
1. choose a FASTA file with long headers as infile.fasta
2. run 'cd-hit -i infile.fasta -o outfile.fasta'
3. run 'cd-hit -i infile.fasta -o outfile2.fasta -d 100'
4. compare outfile.fasta.clstr with outfile2.fasta.clstr (e.g. on Linux: 'diff outfile{'',2}.fasta.clstr')
What is the expected output? What do you see instead?
Because of the -d 100 parameter, outfile2.fasta.clstr should contain a larger part of each header instead of just the first characters. Instead, both outfile.fasta.clstr and outfile2.fasta.clstr contain only a very short part of each header, e.g.:
>Cluster 0
0 69aa, >1cc1_L... *
>Cluster 1
0 61aa, >2wpn_B... *
for these infile.fasta entries:
>1cc1_L Hydrogenase (large subunit); NI-Fe-Se hydrogenase, oxidoreduct
(111-179:497)
QSHILHFYHLAALDYVKGPDVSPFVPRYANADLLTDRIKDGAKADATNTYGLNQYLKALEIRRICHEMV
>2wpn_B Periplasmic [nifese] hydrogenase, large subunit, selenocystein
(116-176:494)
QSHILHFYHLSAQDFVQGPDTAPFVPRFPKSDLRLSKELNKAGVDQYIEALEVRRICHEMV
What version of the product are you using? On what operating system?
CD-HIT version 4.5.4 (built on May 31 2011)
Kubuntu 11.10 64bit
Original issue reported on code.google.com by [email protected]
on 1 Feb 2012 at 11:58
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Please use labels and text to provide additional information.
Original issue reported on code.google.com by [email protected]
on 14 Sep 2012 at 11:50
What steps will reproduce the problem?
1.unpack
2.make
3.
What is the expected output? What do you see instead?
I am able to compile v4.2 but v4.2.1 fails with the following error:
g++ -O2 -DNO_OPENMP cdhit-common.c++ -c
cdhit-common.c++: In member function ‘int WordTable::CountWords(int,
Vector<int>&, Vector<unsigned int>&, NVector<unsigned int>&, NVector<unsigned
int>&, NVector<unsigned int>&, NVector<unsigned char>&, bool)’:
cdhit-common.c++:1251: error: ‘CHAR_BIT’ was not declared in this scope
cdhit-common.c++: In member function ‘int WordTable::CountWords(int,
Vector<int>&, Vector<unsigned int>&, NVector<IndexCount>&, NVector<unsigned
int>&, NVector<unsigned int>&, NVector<unsigned int>&, NVector<unsigned int>&,
bool, int)’:
cdhit-common.c++:1320: error: ‘CHAR_BIT’ was not declared in this scope
cdhit-common.c++: In member function ‘int WordTable::CountWords(int,
Vector<int>&, Vector<unsigned int>&, NVector<IndexCount>&,
NVector<IndexCount*>&, NVector<unsigned int>&, NVector<unsigned int>&,
NVector<unsigned int>&, bool, int)’:
cdhit-common.c++:1376: error: ‘CHAR_BIT’ was not declared in this scope
make: *** [cdhit-common.o] Error 1
What version of the product are you using? On what operating system?
v4.2.1 on SUSE Linux Enterprise Server 10
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 28 Sep 2010 at 9:12
What steps will reproduce the problem?
1. Install cd-hit with the openmp=yes option for nucleotide sequence clustering. It seems to compile properly and run OK, but when I use the -T option for threads, it gives an error:
Program: CD-HIT, V4.6 (+OpenMP), Apr 25 2014, 16:47:37
Command: ./cd-hit-est -i GH48.fasta -o sandeep -c 0.9 -n 10 -M
2000 -T 2
Started: Fri Apr 25 16:50:02 2014
================================================================
Output
----------------------------------------------------------------
total seq: 464
longest and shortest : 2970010 and 396
Total letters: 6460438
Sequences have been sorted
Segmentation fault
What is the expected output? What do you see instead?
I expect the thread option to work and use the given number of CPUs.
What version of the product are you using? On what operating system?
CD-HIT, V4.6
Please provide any additional information below.
Please help me out.
Original issue reported on code.google.com by [email protected]
on 25 Apr 2014 at 2:58
What steps will reproduce the problem?
1. create a set of references
2. pick a few FASTA records from the references and save them into a separate FASTA file. This file will serve as db2 (= unclustered).
3. add a smaller FASTA record on top of db2
The expected output is that all sequences (except maybe the unrelated first record) are added to the existing clusters. Instead, none of them is added.
CD-HIT version 4.6 (built on Nov 28 2012) on linux.
Possible fix (see attached patch)
Original issue reported on code.google.com by [email protected]
on 15 May 2013 at 2:49
Attachments:
What steps will reproduce the problem?
1. When run from provean.sh, CD-HIT was called with options "-i /tmp/proveanEv9aLE/tmp.fasta -o /tmp/proveanEv9aLE/cdhit.cluster -c 0.75 -s 0.3 -n 5 -l 158 -bak 1".
2. CD-HIT outputs "comparing sequences from 0 to 0" forever.
3. The input FASTA file is attached.
What is the expected output? What do you see instead?
I expect normal clustering output, but with this input FASTA file I cannot get any result.
What version of the product are you using? On what operating system?
====== CD-HIT version 4.6 (built on Feb 3 2013) ======
Redhat Enterprise Linux 5
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 29 Aug 2013 at 4:47
Attachments:
What steps will reproduce the problem?
1. ./cd-hit -i sprot_archaea -o sprot_archaea.clust80 -c 0.8 -n 5
2. ./cd-hit -i sprot_archaea.clust80 -o sprot_archaea.clust60 -c 0.6 -n 4
3. perl psi-cd-hit.pl -i /pathto/archaea/sprot_archaea.clust60 -o /pathto/archaea/sprot_archaea.clust30 -c 0.3
What is the expected output? What do you see instead?
Undefined subroutine &PSI_CDHIT::open_LOG called at
/pathto/programs/cdhit/psi-cd
-hit.pl line 16.
What version of the product are you using? On what operating system?
cd-hit-v4.6.1-2012-08-27
unix server
Please provide any additional information below.
We provided the paths to BLAST, cd-hit, etc. in the .bashrc file.
Original issue reported on code.google.com by [email protected]
on 28 Oct 2014 at 3:43
What steps will reproduce the problem?
1. download the cd-hit-v4.5.4-2011-03-07.tgz file from website
2. tar xvf cd-hit-v4.5.4-2011-03-07.tgz --gunzip
3. make openmp=yes
What is the expected output? What do you see instead?
I expect the program executables to build and install properly. Instead I get error messages in the log, and no executables appear in the cd-hit folder: I do not see any of the 8 files with the .o extension (cdhit.o, cdhit-div.o, etc.), nor any of the 6 executable files (cd-hit, cd-hit-2d, cd-hit-div, etc.).
Here is the installation log:
cdhit-common.c++: In constructor ‘TempFile::TempFile(const char*)’:
cdhit-common.c++:68:34: warning: format ‘%x’ expects argument of type
‘unsigned int’, but argument 3 has type ‘TempFile*’ [-Wformat]
cdhit-common.c++: In member function ‘size_t SequenceDB::MinimalMemory(int,
int, int, const Options&)’:
cdhit-common.c++:2258:47: warning: format ‘%i’ expects argument of type
‘int’, but argument 3 has type ‘size_t {aka long unsigned int}’
[-Wformat]
cdhit-common.c++:2262:71: warning: format ‘%i’ expects argument of type
‘int’, but argument 4 has type ‘size_t {aka long unsigned int}’
[-Wformat]
cdhit-common.c++:2262:71: warning: format ‘%i’ expects argument of type
‘int’, but argument 5 has type ‘size_t {aka long unsigned int}’
[-Wformat]
cdhit-common.c++:2266:78: warning: format ‘%i’ expects argument of type
‘int’, but argument 4 has type ‘size_t {aka long unsigned int}’
[-Wformat]
cdhit-common.c++:2266:78: warning: format ‘%i’ expects argument of type
‘int’, but argument 5 has type ‘size_t {aka long unsigned int}’
[-Wformat]
cdhit-common.c++:2271:52: warning: format ‘%i’ expects argument of type
‘int’, but argument 3 has type ‘size_t {aka long unsigned int}’
[-Wformat]
cdhit-common.c++:2274:51: warning: format ‘%i’ expects argument of type
‘int’, but argument 3 has type ‘size_t {aka long unsigned int}’
[-Wformat]
cdhit-common.c++:2279:30: warning: format ‘%i’ expects argument of type
‘int’, but argument 3 has type ‘size_t {aka long unsigned int}’
[-Wformat]
cdhit-common.c++: In member function ‘void SequenceDB::DoClustering(int,
const Options&)’:
cdhit-common.c++:2476:90: warning: format ‘%i’ expects argument of type
‘int’, but argument 2 has type ‘std::vector<Sequence*,
std::allocator<Sequence*> >::size_type {aka long unsigned int}’ [-Wformat]
cdhit-common.c++:2487:65: warning: format ‘%i’ expects argument of type
‘int’, but argument 2 has type ‘size_t {aka long unsigned int}’
[-Wformat]
cdhit-common.c++: In member function ‘void SequenceDB::DoClustering(const
Options&)’:
cdhit-common.c++:2994:65: warning: format ‘%i’ expects argument of type
‘int’, but argument 2 has type ‘size_t {aka long unsigned int}’
[-Wformat]
In file included from cdhit-common.c++:28:0:
cdhit-common.h: In instantiation of ‘void Vector<TYPE>::Append(const TYPE&)
[with TYPE = NVector<long int>]’:
cdhit-common.c++:704:27: required from here
cdhit-common.h:95:4: error: ‘push_back’ was not declared in this scope, and
no declarations were found by argument-dependent lookup at the point of
instantiation [-fpermissive]
cdhit-common.h:95:4: note: declarations in dependent base
‘std::vector<NVector<long int>, std::allocator<NVector<long int> > >’ are
not found by unqualified lookup
cdhit-common.h:95:4: note: use ‘this->push_back’ instead
cdhit-common.h: In instantiation of ‘void Vector<TYPE>::Append(const TYPE&)
[with TYPE = NVector<int>]’:
cdhit-common.c++:705:25: required from here
cdhit-common.h:95:4: error: ‘push_back’ was not declared in this scope, and
no declarations were found by argument-dependent lookup at the point of
instantiation [-fpermissive]
cdhit-common.h:95:4: note: declarations in dependent base
‘std::vector<NVector<int>, std::allocator<NVector<int> > >’ are not found
by unqualified lookup
cdhit-common.h:95:4: note: use ‘this->push_back’ instead
cdhit-common.h: In instantiation of ‘void Vector<TYPE>::Append(const TYPE&)
[with TYPE = Sequence*]’:
cdhit-common.c++:1247:24: required from here
cdhit-common.h:95:4: error: ‘push_back’ was not declared in this scope, and
no declarations were found by argument-dependent lookup at the point of
instantiation [-fpermissive]
cdhit-common.h:95:4: note: declarations in dependent base
‘std::vector<Sequence*, std::allocator<Sequence*> >’ are not found by
unqualified lookup
cdhit-common.h:95:4: note: use ‘this->push_back’ instead
cdhit-common.h: In instantiation of ‘void Vector<TYPE>::Append(const TYPE&)
[with TYPE = int]’:
cdhit-common.c++:1895:26: required from here
cdhit-common.h:95:4: error: ‘push_back’ was not declared in this scope, and
no declarations were found by argument-dependent lookup at the point of
instantiation [-fpermissive]
cdhit-common.h:95:4: note: declarations in dependent base ‘std::vector<int,
std::allocator<int> >’ are not found by unqualified lookup
cdhit-common.h:95:4: note: use ‘this->push_back’ instead
make: *** [cdhit-common.o] Error 1
What version of the product are you using? On what operating system?
cd-hit-v4.5.4-2011-03-07.tgz downloaded 19 March 2013
OS = Linux Mint 14 (Nadia)
64 bit
Please provide any additional information below.
The same command on a Linux Mint 12 machine installs the program correctly. So is this an issue with the gcc upgrade (Mint 14 has gcc 4.7.2, Mint 12 has 4.6.1), or with some other component involved in the installation?
Original issue reported on code.google.com by [email protected]
on 19 Mar 2013 at 3:46
What steps will reproduce the problem?
1. Run cd-hit-2d or cd-hit-est-2d where a sequence in -i is a subsequence of a
sequence in -i2 and options -s2 1.0 and -S2 999999.
What is the expected output? What do you see instead?
Sequence in -i2 should be clustered into the subsequence in -i. Instead, it
behaves as if -s2 and -S2 are defaults, which requires that -i2 sequences be
equal or shorter than -i sequences.
What version of the product are you using? On what operating system?
Version: cd-hit-v4.6.1-2012-08-27
OS: Linux 64-bit 2.6.18-238.el5
Please provide any additional information below.
I've reduced it to a single sequence in -i and, in -i2, a single subsequence of that sequence. I have tried both cd-hit-2d and cd-hit-est-2d; both seem to ignore the -s2 and -S2 options.
Thanks.
Original issue reported on code.google.com by [email protected]
on 4 Sep 2012 at 9:17
What steps will reproduce the problem?
1. run the script
What is the expected output? What do you see instead?
Each position will have accurate average quality calculated based on coverage
at that position. Instead, positions that have low coverage will have much
lower average quality values. Also, the conversion of quality chars to phred
scores could be optimized.
What version of the product are you using? On what operating system?
cd-hit-otu-illumina-0.0.1, CentOS 5.6 x86_64
Please provide any additional information below.
I've rewritten and tested these portions of the code. See below:
Current:
for ($i=0; $i<$lena; $i++) {
    my $c1 = ord(substr($quaa,$i,1))-$offset;
    $score_array[$i] += $c1;
}
$score_count++;
New:
@qual  = split("", $quaa);
@phred = map(ord($_)-$offset, @qual);
$pos = 0;
foreach (@phred) {
    $score_array[$pos]  += $_;
    $score_counts[$pos] += 1;
    $pos += 1;
}
# pass array references: two plain arrays would be flattened into one list in @_
process_score("$output.err", \@score_counts, \@score_array);
As such, the process_score method also needs a little tweaking to:
sub process_score {
    my ($output, $counts, $scores) = @_;
    my $len = scalar(@$scores);
    open(SSS, "> $output") || die "can not write $output\n";
    for (my $i=0; $i<$len; $i++){
        my $ave = int($scores->[$i] / $counts->[$i]);
        my $e = 1 / (10 ** ($ave/10));
        print SSS "$i\t$ave\t$e\n";
    }
    close(SSS);
}
Original issue reported on code.google.com by [email protected]
on 5 Apr 2013 at 3:06
Attachments:
What steps will reproduce the problem?
1. file1.clstr = 7.0 Gb, clustered about 143 million DNA seq reads into 41 040
442 clusters
2. file2.clstr = 3.6 Gb, output from CD-HIT-EST-2D, adding 21 million "shared"
reads (the "new, unique" reads are in another file)
3. clstr_merge.pl file1.clstr file2.clstr > mergefile.clstr
What is the expected output? What do you see instead?
expected output should be a file with 164 million reads. Instead, I get a file
representing about 150 million reads, thus about 14 million out of 21 million
reads have gone missing. The mergefile.clstr is 7.3 Gb
What version of the product are you using? On what operating system?
CD-HIT4.5.4 on Linux Mint12, computer has two quadcore processors, 30Gb
available RAM
Please provide any additional information below.
I tried to trace the source of the error, but I am not sure I found it.
At some point there are about 1,100 clusters in file2.clstr to which no new sequence was added (i.e. a series of lines with >Cluster24966989 followed by a line with 0 188nt, >seqID_same_as_in_file1.clstr... * and no following line(s) with 1 188nt, >seqID_new_from_file2.clstr... +/100.00%). When a set of lines including some seqIDs from the new dataset finally appears, they are not added to the new file. At least from that point onwards, when I compare file1, file2 and the mergefile, I do not see any sequence from file2 that has been added.
I suspect that some sequences have been omitted in earlier parts too, but the
files are too big to systematically trace whether and where there have been
additional omissions in the earlier part of the file. Since only about 6 million
sequences have been added to the first 24 million clusters, I am rather sceptical
that the missing 14 million reads should be added to the last 17 million
clusters.
Some random checking of clusters comparing the file1, file2 and mergefile did
not show any abnormalities in the earlier part of the result.
Is there some limitation on the number of lines that clstr_merge.pl can handle?
Original issue reported on code.google.com by [email protected]
on 2 Oct 2012 at 10:53
What steps will reproduce the problem?
1. Download and uncompress http://genome.jgi-psf.org/Selmo1/download/Selmo1_GeneModels_FilteredModels3_aa.fasta.gz
2. Download and uncompress ftp://ftp.ensemblgenomes.org/pub/release-17/plants//fasta/selaginella_moellendorffii/pep/Selaginella_moellendorffii.v1.0.17.pep.all.fa.gz
3. cd-hit-2d -i Selaginella_moellendorffii.v1.0.17.pep.all.fa -i2 Selmo1_GeneModels_FilteredModels3_aa.fasta -c 0.99 -n 5 -o Models3unique.fa -M 2000
What is the expected output? What do you see instead?
I see Segmentation fault: 11
What version of the product are you using? On what operating system?
cd-hit-v4.6.1-2012-08-27 on OSX 10.8.2 Build 12C60
Please provide any additional information below.
cd-hit-v4.5.4-2011-03-07 from Bioinformatics.org works fine.
Original issue reported on code.google.com by [email protected]
on 20 Feb 2013 at 6:42
What steps will reproduce the problem?
1.cd-hit-454 command line
2.
3.
What is the expected output? What do you see instead?
The expected output is clustered sequences (I've been able to do this before); instead, the output is an explanation of the cd-hit-454 options.
What version of the product are you using? On what operating system?
cd-hit-v4.6.1-2012-08-27, 64 bit Ubuntu 12.04 LTS
Please provide any additional information below.
PLEASE help me ASAP with this! I can't process my data! And when I tried
installing cd-hit on another computer, the make command doesn't work (issue
16)! very frustrating...
Original issue reported on code.google.com by [email protected]
on 29 May 2013 at 2:07
command:
cd-hit-para.pl -i ~/data-folder/data-file.fasta -o ~/data-folder/data-file_NR
-r 1 -d 0 -c 1.0 -n 10 --L 10 --S 64 --R --P cd-hit-est
What is the expected output? What do you see instead?
Expect the program to start clustering on local computer (using 10 out of 12
available CPUs).
Instead I get an error message "no host at ./cd-hit-para.pl line 97"
What version of the product are you using? On what operating system?
CD-HITv4.6.1 on Linux Mint 14, 64 bit
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 5 Apr 2013 at 4:40
What steps will reproduce the problem?
1. make openmp=yes debug=yes
2.
3.
What is the expected output? What do you see instead?
The expected output is the building of the binary; instead I see:
g++ -fopenmp -ggdb cdhit-common.c++ -c
make: g++: Command not found
make: *** [cdhit-common.o] Error 127
What version of the product are you using? On what operating system?
cd-hit-v4.6.1-2012-08-27
Please provide any additional information below.
I was able to build it before on another computer, and I think there's a problem with the unpacking of the .tgz: when I compare the lists of unpacked files in the two cases, there are fewer files on the computer where I can't build cd-hit. The command I'm using to unpack is: tar xvzf
PLEASE help me with this... it's driving me crazy!
Original issue reported on code.google.com by [email protected]
on 29 May 2013 at 2:00
What steps will reproduce the problem?
1. Compiling the code on gcc 4.5.2 produces numerous formatting warnings.
2. Running cd-hit always shows a CPU time of 0 at the end of the output.
I've fixed all compiler warnings by altering printf formatting to the
appropriate characters (for instance, a struct pointer should be printed using
%p, not %x, even though it is hexadecimal).
I've also fixed the CPU time bug simply by changing the computed time into a double rather than an int. Naturally, when one divides two clock-tick values obtained from the tms struct by the defined tick-to-seconds conversion factor of 100 using integer arithmetic, one loses precision, so the time is truncated. As my calculations are all pretty quick, the CPU time should be 0.something instead of just 0.
It seems to work properly now. I've thoroughly tested it against a wide variety
of data sets and my code performs exactly like your original code without the
CPU time issue or the compiler warnings.
Please apply my attached patch to reproduce my fixes.
Hope this helps!
Regards,
Varun
The Division of Structural Biology
University of Oxford
Original issue reported on code.google.com by [email protected]
on 1 Jun 2011 at 2:42
Attachments:
Hi,
The program shows strange characters ('zu') for RAM and for 'Max number of representatives'; please see the log below.
What can cause this? Is it a bug?
================================================================
Program: CD-HIT, V4.6, Sep 01 2014, 17:43:15
Command: cdhitest.exe -l 10 -c
0.9 -n 7 -g 1 -r 0 -G 1 -B 0 -M 1024 -T 0 -d 20 -p 1
-b 20 -t 2 -i
c:\MyProjects\Biology\CDHit\output corect\0250K.fasta
-o out.fasta
================================================================
Output
----------------------------------------------------------------
Option -T is ignored: multi-threading with OpenMP is NOT enabled!
total seq: 1404
longest and shortest : 438 and 50
Total letters: 209001
Sequences have been sorted
Approximated minimal memory consumption:
Sequence : zuM
Buffer : 1 X zuM = zuM
Table : 1 X zuM = zuM
Miscellaneous : zuM
Total : zuM
Table limit with the given memory limit:
Max number of representatives: zu
Max number of word counting entries: zu
comparing sequences from 0 to 1404
.
1404 finished 1401 clusters
Apprixmated maximum memory consumption: 14M
writing new database
writing clustering information
program completed !
Total CPU time 0.12
Original issue reported on code.google.com by [email protected]
on 3 Sep 2014 at 11:44
Dear Weizhong Li's group and other users,
I have downloaded various versions of your program to my 32-bit Win7 operating system but failed in installation.
After downloading the program and attempting to compile, this happens:
C:\cd-hit-v4.5.7-2011-12-16>make
'make' is not recognized as an internal or external command, operable program or batch file
Do you have any suggestions as to what I am doing wrong?
Best regards, Rikki from University of Copenhagen, Denmark
Original issue reported on code.google.com by [email protected]
on 16 Oct 2014 at 9:38
What steps will reproduce the problem?
1. extract
2. make
What is the expected output? What do you see instead?
g++ -O2 -DNO_OPENMP cdhit-common.c++ -c
make: g++: Command not found
make: *** [cdhit-common.o] Error 127
What version of the product are you using? On what operating system?
cd-hit-v4.5.5-2011-03-31.tgz on a rhelv5 gnome 64bit
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 27 Jul 2011 at 9:48
I compiled it with "make openmp=yes", but cd-hit-est cannot accept "-T" with any n from 0 to 64; when it is added, the program prints the help page and then exits. I'm on Ubuntu 12.04 x86_64.
Original issue reported on code.google.com by [email protected]
on 7 Nov 2012 at 8:09