DETONATE: DE novo TranscriptOme rNa-seq Assembly with or without the Truth Evaluation

Home Page: http://deweylab.biostat.wisc.edu/detonate/


detonate's Introduction

        *** This file is autogenerated. Don't edit it directly. ***

DETONATE: DE novo TranscriptOme rNa-seq Assembly with or without the Truth
                                Evaluation

Overview

   DETONATE (DE novo TranscriptOme rNa-seq Assembly with or without the
   Truth Evaluation) consists of two component packages, RSEM-EVAL and
   REF-EVAL. Both packages are mainly intended to be used to evaluate de
   novo transcriptome assemblies, although REF-EVAL can be used to
   compare sets of any kinds of genomic sequences.

   RSEM-EVAL is a reference-free evaluation method based on a novel
   probabilistic model that depends only on an assembly and the RNA-Seq
   reads used for its construction. Unlike N50, RSEM-EVAL combines
   multiple factors, including the compactness of an assembly and the
   support of the assembly from the RNA-Seq data, into a single,
   statistically-principled evaluation score. This score can be used to
   select a best assembler, optimize an assembler's parameters, and
   guide new assembler design as an objective function. In addition, for
   each contig within an assembly, RSEM-EVAL provides a score that
   assesses how well that contig is supported by the RNA-Seq data and
   can be used to filter unnecessary contigs.
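
   As a rough sketch of how RSEM-EVAL is typically invoked (the file
   names, thread count, and the average fragment length of 300 below are
   placeholders; --transcript-length-parameters points at one of the
   length-distribution files shipped under
   rsem-eval/true_transcript_length_distribution/; see the RSEM-EVAL
   README for the full interface):

     rsem-eval-calculate-score -p 8 \
       --transcript-length-parameters rsem-eval/true_transcript_length_distribution/human.txt \
       --paired-end reads_1.fq reads_2.fq \
       assembly.fasta assembly_rsem_eval 300

   All output files, including the overall assembly score and the
   per-contig scores, are prefixed with the sample name given on the
   command line (assembly_rsem_eval above).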

   REF-EVAL is a toolkit of reference-based measures, including contig,
   nucleotide, and pair precision, recall, and F1 scores, a novel kmer
   compression score, and several scores that compare induced kmer
   distributions between the assembly and the reference. REF-EVAL also
   includes a program to estimate the "true" assembly of a set of
   RNA-Seq reads, relative to a collection of full-length reference
   transcripts. See [1]here, or ref-eval/README in the distribution, for
   detailed information.
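
   As an illustrative sketch only (the flag names below are recalled from
   the DETONATE vignette and may not match your version exactly; the file
   names, read count, and lengths are placeholders; consult
   ref-eval/README for the authoritative interface), computing the kmer
   compression (KC) score of an assembly against a set of reference
   transcripts might look like:

     ref-eval --scores=kc \
       --A-seqs assembly.fasta \
       --B-seqs reference_transcripts.fasta \
       --B-expr reference_expression.isoforms.results \
       --num-reads 50000000 --readlen 76 --kmerlen 76 \
       > kc_score.txt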

   DETONATE is motivated and described in detail in the following paper:

   Bo Li*, Nathanael Fillmore*, Yongsheng Bai, Mike Collins, James A.
   Thomson, Ron Stewart, and Colin N. Dewey. Evaluation of de novo
   transcriptome assemblies from RNA-Seq data. Genome Biology 2014,
   15:553. [2]link

   * = equal contributions

Downloading

   The current version of DETONATE (1.11) is available here:

     * [3]detonate-1.11.tar.gz

   As a convenience, we also provide a precompiled version for Linux.
   This precompiled version may or may not work on your system; if it
   doesn't, please compile the version above instead.

     * [4]detonate-1.11-precompiled.tar.gz

Building/installation

   To build RSEM-EVAL and REF-EVAL, simply type "make" in the top-level
   detonate-1.11 directory after you've unpacked the tarball (tar xvf
   detonate-1.11.tar.gz). If you have trouble with any of this, please
   [5]contact us.
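
   For example, a typical build session looks like the following sketch
   (the NEED_CMAKE=yes variant is only needed when no suitable systemwide
   CMake is available, per the changelog below):

     tar xvf detonate-1.11.tar.gz
     cd detonate-1.11
     make                    # builds both RSEM-EVAL and REF-EVAL
     # or, if your system lacks a suitable CMake:
     # NEED_CMAKE=yes make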

Vignette

   A vignette with an extended example of the basic usage of DETONATE is
   available [6]here, or in VIGNETTE in the distribution.

Usage

   For information about using RSEM-EVAL, see [7]here, or
   rsem-eval/README.md in the distribution.

   For information about using REF-EVAL, see [8]here, or ref-eval/README
   in the distribution.

   For information about using REF-EVAL-ESTIMATE-TRUE-ASSEMBLY, see
   [9]here, or ref-eval/README.REF-EVAL-ESTIMATE-TRUE-ASSEMBLY in the
   distribution.

Authors

   RSEM-EVAL was coded by Bo Li, and REF-EVAL was coded by Nathanael
   Fillmore. Bo, Nate, and Colin Dewey jointly developed the RSEM-EVAL
   and REF-EVAL methodology, with feedback on the methodology from the
   other coauthors.

Mailing list

   Subscribe to the DETONATE Users mailing list to ask questions and
   receive announcements: [12]Visit this group.

Development

   A repository for the DETONATE source code is available at Github,
   [13]here.

Axolotl assembly

   The RSEM-EVAL-guided Axolotl assembly described in the DETONATE paper
   (see above) is available [14]here.

Changelog

   In [15]DETONATE 1.11 (the current version), released on Jun 22, 2016,
   the following changes were made:

     * Reduced memory usage in RSEM-EVAL.
     * Changed build process to use the systemwide version of CMake,
       unless DETONATE is compiled with "NEED_CMAKE=yes make".
     * Restored missing "examples" directory in released version.

   In [16]DETONATE 1.10, released on Sep 1, 2015, the following changes
   were made:

     * Added an option to store single-precision numbers in the hash
       table used for computing KC and kmer scores, so as to use less
       memory.

   In [17]DETONATE 1.9, released on Mar 25, 2015, the following changes
   were made:

     * Canceled f function values from "Prior_score_on_contig_lengths"
       and "Correction_term".
     * Changed short to int to handle longer inferred insert sizes.

   In [18]DETONATE 1.8.1, released on Oct 2, 2014, the following changes
   were made:

     * Updated the toy example to match that in the paper.

   In [19]DETONATE 1.8, released on Sep 26, 2014, and comprised of
   RSEM-EVAL 1.8 and REF-EVAL 1.8, the following changes were made:

     * Added support for paired-end data.
     * Improved the interface.

   In [20]DETONATE 20140123, comprised of RSEM-EVAL 1.6 and REF-EVAL
   20140123, the following changes were made:

     * Fixed UI bug in REF-EVAL related to --alignment-policy.
     * Incorporated changes from RSEM 1.2.9 into RSEM-EVAL.

   [21]DETONATE 20140108, comprised of RSEM-EVAL 1.5 and REF-EVAL
   20140108, was the initial release.

References

   Visible links
   1. http://deweylab.biostat.wisc.edu/detonate/ref-eval.html
   2. http://genomebiology.com/2014/15/12/553
   3. http://deweylab.biostat.wisc.edu/detonate/detonate-1.11.tar.gz
   4. http://deweylab.biostat.wisc.edu/detonate/detonate-1.11-precompiled.tar.gz
   5. https://groups.google.com/d/forum/detonate-users
   6. http://deweylab.biostat.wisc.edu/detonate/vignette.html
   7. http://deweylab.biostat.wisc.edu/detonate/rsem-eval.html
   8. http://deweylab.biostat.wisc.edu/detonate/ref-eval.html
   9. http://deweylab.biostat.wisc.edu/detonate/ref-eval-estimate-true-assembly.html
  12. http://groups.google.com/group/detonate-users
  13. https://github.com/deweylab/detonate
  14. http://deweylab.biostat.wisc.edu/detonate/AxoJuveBlastema_RSEM-EVAL-1.4_Trinity.fasta.gz
  15. http://deweylab.biostat.wisc.edu/detonate/detonate-1.11.tar.gz
  16. http://deweylab.biostat.wisc.edu/detonate/detonate-1.10.tar.gz
  17. http://deweylab.biostat.wisc.edu/detonate/detonate-1.9.tar.gz
  18. http://deweylab.biostat.wisc.edu/detonate/detonate-1.8.1.tar.gz
  19. http://deweylab.biostat.wisc.edu/detonate/detonate-1.8.tar.gz
  20. http://deweylab.biostat.wisc.edu/detonate/detonate-20140123.tar.gz
  21. http://deweylab.biostat.wisc.edu/detonate/detonate-20140108.tar.gz

detonate's People

Contributors

bli25wisc, cndewey, nfillmore


detonate's Issues

Not able to install detonate? (make error)

I am not able to install DETONATE and need suggestions. The last few lines of the make output are:
make[1]: Leaving directory '/home/lip/soft4/evaluation/detonate-1.8.1/ref-eval'

============================================
= Building RSEM-EVAL and its dependencies. =

cd rsem-eval && make && touch finished
make[1]: Entering directory '/home/lip/soft4/evaluation/detonate-1.8.1/rsem-eval'
g++ -Wall -O3 extractRef.cpp -o rsem-extract-reference-transcripts
g++ -Wall -O3 synthesisRef.cpp -o rsem-synthesis-reference-transcripts
g++ -Wall -O3 -ffast-math -c -I. preRef.cpp
g++ preRef.o -o rsem-preref
g++ -Wall -O2 -c -I. parseIt.cpp
In file included from parseIt.cpp:22:0:
SingleHit.h: In member function ‘bool SingleHit::read(std::istream&)’:
SingleHit.h:46:22: error: cannot convert ‘std::basic_istream::__istream_type {aka std::basic_istream}’ to ‘bool’ in return
return (in>>sid>>pos);
^
In file included from parseIt.cpp:23:0:
PairedEndHit.h: In member function ‘bool PairedEndHit::read(std::istream&)’:
PairedEndHit.h:29:34: error: cannot convert ‘std::basic_istream::__istream_type {aka std::basic_istream}’ to ‘bool’ in return
return (in>>sid>>pos>>insertL);
^
Makefile:56: recipe for target 'parseIt.o' failed
make[1]: *** [parseIt.o] Error 1
make[1]: Leaving directory '/home/lip/soft4/evaluation/detonate-1.8.1/rsem-eval'
Makefile:13: recipe for target 'rsem-eval/finished' failed
make: *** [rsem-eval/finished] Error 2
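
A possible local workaround, offered as an assumption based on the compiler messages above rather than an official fix: the errors match the C++11 change that made std::istream's conversion to bool explicit, so wrapping the two return expressions in an explicit bool cast should let parseIt.cpp compile. For example, from the top-level detonate directory (adjust the patterns if the whitespace in your copy of the source differs):

# hypothetical patch; file paths are taken from the compiler output above
sed -i 's/return (in>>sid>>pos);/return bool(in>>sid>>pos);/' rsem-eval/SingleHit.h
sed -i 's/return (in>>sid>>pos>>insertL);/return bool(in>>sid>>pos>>insertL);/' rsem-eval/PairedEndHit.h
make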

Error: reads file does not look like a FASTQ file - failed! Plase check if you provide correct parameters/options for the pipeline!

Hello guys,

I have two de novo transcriptomes for which I'm trying to assess assembly quality. My data are paired-end and strand-specific.

This is the command I'm trying to run:
nohup /home/gabriel/sw/DETONATE_v1.8.1/detonate/rsem-eval/rsem-eval-calculate-score -p 16 \
  --transcript-length-parameters /home/gabriel/sw/DETONATE_v1.8.1/detonate/rsem-eval/true_transcript_length_distribution/human.txt \
  --forward-prob 0 \
  --paired-end B1_RMC13_Paired_1.fq.gz,B1_RMC14_Paired_1.fq.gz,B1_RMC15_Paired_1.fq.gz,B1_RMC16_Paired_1.fq.gz,B2_RMC21_Paired_1.fq.gz,B2_RMC22_Paired_1.fq.gz,B2_RMC23_Paired_1.fq.gz,B2_RMC24_Paired_1.fq.gz,B3_RMC31_Paired_1.fq.gz,B3_RMC32_Paired_1.fq.gz,B3_RMC33_Paired_1.fq.gz,B3_RMC34_Paired_1.fq.gz \
  B1_RMC13_Paired_2.fq.gz,B1_RMC14_Paired_2.fq.gz,B1_RMC15_Paired_2.fq.gz,B1_RMC16_Paired_2.fq.gz,B2_RMC21_Paired_2.fq.gz,B2_RMC22_Paired_2.fq.gz,B2_RMC23_Paired_2.fq.gz,B2_RMC24_Paired_2.fq.gz,B3_RMC31_Paired_2.fq.gz,B3_RMC32_Paired_2.fq.gz,B3_RMC33_Paired_2.fq.gz,B3_RMC34_Paired_2.fq.gz \
  /media/raid/raperez/transcriptomes151/data-rafaela/trinityOUT-RMC/Trinity.fasta RMCassembly_rsem_eval 150 > log_rsem_eval_RMC.txt 2>&1 &

And this is the error I'm getting:
bowtie -q --phred33-quals -n 2 -e 99999999 -l 25 -I 1 -X 1000 --nofw -p 16 -a -m 200 -S RMCassembly_rsem_eval.temp/RMCassembly_rsem_eval -1 B1_RMC13_Paired_1.fq.gz,B1_RMC14_Paired_1.fq.gz,B1_RMC15_Paired_1.fq.gz,B1_RMC16_Paired_1.fq.gz,B2_RMC21_Paired_1.fq.gz,B2_RMC22_Paired_1.fq.gz,B2_RMC23_Paired_1.fq.gz,B2_RMC24_Paired_1.fq.gz,B3_RMC31_Paired_1.fq.gz,B3_RMC32_Paired_1.fq.gz,B3_RMC33_Paired_1.fq.gz,B3_RMC34_Paired_1.fq.gz -2 B1_RMC13_Paired_2.fq.gz,B1_RMC14_Paired_2.fq.gz,B1_RMC15_Paired_2.fq.gz,B1_RMC16_Paired_2.fq.gz,B2_RMC21_Paired_2.fq.gz,B2_RMC22_Paired_2.fq.gz,B2_RMC23_Paired_2.fq.gz,B2_RMC24_Paired_2.fq.gz,B3_RMC31_Paired_2.fq.gz,B3_RMC32_Paired_2.fq.gz,B3_RMC33_Paired_2.fq.gz,B3_RMC34_Paired_2.fq.gz | samtools view -S -b -o RMCassembly_rsem_eval.temp/RMCassembly_rsem_eval.bam -
Error: reads file does not look like a FASTQ file
terminate called after throwing an instance of 'int'
Aborted (core dumped)
[samopen] SAM header is present: 3679486 sequences.
[sam_read1] reference 'ID:Bowtie VN:1.1.2 CL:"bowtie --wrapper basic-0 -q --phred33-quals -n 2 -e 99999999 -l 25 -I 1 -X 1000 --nofw -p 16 -a -m 200 -S RMCassembly_rsem_eval.temp/RMCassembly_rsem_eval -1 B1_RMC13_Paired_1.fq.gz,B1_RMC14_Paired_1.fq.gz,B1_RMC15_Paired_1.fq.gz,B1_RMC16_Paired_1.fq.gz,B2_RMC21_Paired_1.fq.gz,B2_RMC22_Paired_1.fq.gz,B2_RMC23_Paired_1.fq.gz,B2_RMC24_Paired_1.fq.gz,B3_RMC31_Paired_1.fq.gz,B3_RMC32_Paired_1.fq.gz,B3_RMC33_Paired_1.fq.gz,B3_RMC34_Paired_1.fq.gz -2 B1_RMC13_Paired_2.fq.gz,B1_RMC14_Paired_2.fq.gz,B1_RMC15_Paired_2.fq.gz,B1_RMC16_Paired_2.fq.gz,B2_RMC21_Paired_2.fq.gz,B2_RMC22_Paired_2.fq.gz,B2_RMC23_Paired_2.fq.gz,B2_RMC24_Paired_2.fq.gz,B3_RMC31_Paired_2.fq.gz,B3_RMC32_Paired_2.fq.gz,B3_RMC33_Paired_2.fq.gz,B3_RMC34_Paired_2.fq.gz"
DN0_c0_g1_i9 LN:515
@sq SN:TRINITY_DN0_c10_g1_i1 LN:235
@sq SN:TRINITY_DN0_c11_g1_i3 LN:543
@sq SN:TRINITY_DN0_c11_g1_i4 LN:1525
@sq SN:TRINITY_DN0_c11_g1_i5 LN:1390
@sq SN:TRINITY_DN0_c11_g1_i6 LN:542
@sq SN:TRINITY_DN0_c121_g1_i1 LN:213
@sq SN:TRINITY_DN0_c12_g1_i10 LN!' is recognized as '*'.
[main_samview] truncated file.
"bowtie -q --phred33-quals -n 2 -e 99999999 -l 25 -I 1 -X 1000 --nofw -p 16 -a -m 200 -S RMCassembly_rsem_eval.temp/RMCassembly_rsem_eval -1 B1_RMC13_Paired_1.fq.gz,B1_RMC14_Paired_1.fq.gz,B1_RMC15_Paired_1.fq.gz,B1_RMC16_Paired_1.fq.gz,B2_RMC21_Paired_1.fq.gz,B2_RMC22_Paired_1.fq.gz,B2_RMC23_Paired_1.fq.gz,B2_RMC24_Paired_1.fq.gz,B3_RMC31_Paired_1.fq.gz,B3_RMC32_Paired_1.fq.gz,B3_RMC33_Paired_1.fq.gz,B3_RMC34_Paired_1.fq.gz -2 B1_RMC13_Paired_2.fq.gz,B1_RMC14_Paired_2.fq.gz,B1_RMC15_Paired_2.fq.gz,B1_RMC16_Paired_2.fq.gz,B2_RMC21_Paired_2.fq.gz,B2_RMC22_Paired_2.fq.gz,B2_RMC23_Paired_2.fq.gz,B2_RMC24_Paired_2.fq.gz,B3_RMC31_Paired_2.fq.gz,B3_RMC32_Paired_2.fq.gz,B3_RMC33_Paired_2.fq.gz,B3_RMC34_Paired_2.fq.gz | samtools view -S -b -o RMCassembly_rsem_eval.temp/RMCassembly_rsem_eval.bam -" failed! Plase check if you provide correct parameters/options for the pipeline!

Based on all the digging I was able to do (especially here: https://groups.google.com/forum/#!forum/detonate-users), and with the very limited answers I could find, I think that either (a) there is a problem at this step caused by a faulty DETONATE installation (https://groups.google.com/forum/#!searchin/detonate-users/upstream$20read%7Csort:date/detonate-users/6dJWgs7VZ5k/c93Q5K2gFAAJ), or (b) DETONATE does not recognize .fq.gz files as FASTQ files. The latter seems odd: .fq.gz is pretty standard, and it would not make much sense to have to unzip all the read files just so DETONATE can read them. Someone actually asked about this and never got a reply (https://groups.google.com/forum/#!searchin/detonate-users/fq.gz%7Csort:date/detonate-users/JtorPELgoys/7y6333g9AgAJ).

So please, if any of you have had this problem and could give me a hand on solving it, I would be very grateful.
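
One possible workaround, assuming the bowtie version invoked by rsem-eval-calculate-score cannot read gzipped FASTQ directly: decompress the read files first and pass the uncompressed .fq names on the command line (the glob below matches the file names from the report above):

# keep the .gz originals and write uncompressed copies alongside them
for f in B*_RMC*_Paired_*.fq.gz; do gunzip -c "$f" > "${f%.gz}"; done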

more readable RSEM-EVAL scores?

Higher RSEM-EVAL scores are better than lower scores, even though the scores are always negative. For example, a score of -80000 is better than a score of -200000, since -80000 > -200000.

But this value is not easy to understand or use.

Could a percentage be provided instead, such as 100% for a perfect assembly and 90% for a good one?
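
As a minimal worked example of the "higher is better" rule, using the values above (this is plain integer comparison, nothing RSEM-EVAL-specific):

a=-80000    # RSEM-EVAL score of assembly A
b=-200000   # RSEM-EVAL score of assembly B
if [ "$a" -gt "$b" ]; then echo "assembly A scores better"; else echo "assembly B scores better"; fi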

Error when trying to 'make' DETONATE

Hi,

After cloning the git repository and attempting to 'make' DETONATE, I get the error shown in the attached picture.

Can you help me understand the error?

Thank you for your time.
[attached screenshot: detonate_Error]

Recommended use for PE reads with varying insert sizes?

Dear all,

I would like to use RSEM-EVAL to evaluate the quality of several denovo transcriptome assemblies from Illumina PE reads. The sequencing libraries were prepared with a varying insert size, which is why I have been wondering if you would recommend

  1. using RSEM-EVAL with the option --paired-reads indicating the average insert size of the library OR
  2. running RSEM-EVAL as though the library was made up of single reads indicating the average read length.

Could you tell me which of the two is more likely to produce an accurate evaluation of my transcriptomes?

Cheers, Dario
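
For reference, the two alternatives above would look roughly like the sketch below (file names and the length values are placeholders; note that the other commands in this document spell the option --paired-end rather than --paired-reads, and that the final positional argument is the average fragment length in the paired-end case and the average read length in the single-end case):

# option 1: paired-end mode, giving the mean fragment (insert) length
rsem-eval-calculate-score --paired-end reads_1.fq reads_2.fq assembly.fasta eval_pe 300

# option 2: treat all reads as single-end, giving the mean read length
rsem-eval-calculate-score reads_1.fq,reads_2.fq assembly.fasta eval_se 100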

What is the requirement for bam input of rsem-eval-calculate-score?

In the following example, the BAM was generated by Trinity's ${TRINITY_HOME}/util/align_and_estimate_abundance.pl with --est_method RSEM --aln_method bowtie2, and the BAM file is sorted.

+ rsem-eval-estimate-transcript-length-distribution /biowrk/juglans.hybrid.mRNA/trinity/Transcriptome.fa human.txt
+ '[' 0 '!=' 0 ']'
+ rsem-eval-calculate-score -p 48 --transcript-length-parameters human.txt --bam --paired-end /biowrk/juglans.hybrid.mRNA/trinity.est/A0/A0-1/sort.bam /biowrk/juglans.hybrid.mRNA/trinity/Transcriptome.fa A0-1 180
rsem-synthesis-reference-transcripts A0-1.temp/A0-1 0 0 0 /biowrk/juglans.hybrid.mRNA/trinity/Transcriptome.fa
Transcript Information File is generated!
Group File is generated!
Extracted Sequences File is generated!

rsem-preref A0-1.temp/A0-1.transcripts.fa 1 A0-1.temp/A0-1
Refs.makeRefs finished!
Refs.saveRefs finished!
A0-1.temp/A0-1.idx.fa is generated!
A0-1.temp/A0-1.n2g.idx.fa is generated!

rsem-parse-alignments A0-1.temp/A0-1 A0-1.temp/A0-1 A0-1.stat/A0-1 b /biowrk/juglans.hybrid.mRNA/trinity.est/A0/A0-1/sort.bam -t 3 -tag XM
The SAM/BAM file declares more reference sequences (1197667) than RSEM knows (134092)!
"rsem-parse-alignments A0-1.temp/A0-1 A0-1.temp/A0-1 A0-1.stat/A0-1 b /biowrk/juglans.hybrid.mRNA/trinity.est/A0/A0-1/sort.bam -t 3 -tag XM" failed! Plase check if you provide correct parameters/options for the pipeline!
+ '[' 255 '!=' 0 ']'
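
A generic consistency check, not a definitive diagnosis: the error above says the BAM header declares far more reference sequences than RSEM knows about, which usually means the reads were aligned against a different reference than the assembly FASTA passed to rsem-eval-calculate-score. The two counts can be compared directly (paths taken from the log above):

samtools view -H /biowrk/juglans.hybrid.mRNA/trinity.est/A0/A0-1/sort.bam | grep -c '^@SQ'   # sequences declared in the BAM header
grep -c '^>' /biowrk/juglans.hybrid.mRNA/trinity/Transcriptome.fa                            # sequences in the assembly FASTA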

Fail to create folder sample_name.temp

Hello all!

I'm trying to run rsem-eval-calculate-score from DETONATE, but I'm getting this error:

Fail to create folder RMCassembly_rsem_eval.temp

(RMCassembly_rsem_eval is the name I chose for the sample_name argument that the software requires, so every output file will carry this name.)

The command I'm using is:
nohup /home/gabriel/sw/DETONATE_v1.8.1/detonate/rsem-eval/rsem-eval-calculate-score -p 16 \
  --transcript-length-parameters /media/raid/raperez/transcriptomes151/data-rafaela/detonate_RMC/RMC_rsem_eval.txt \
  --strand-specific \
  --paired-end B1_RMC13_Paired_1.fq.gz,B1_RMC14_Paired_1.fq.gz,B1_RMC15_Paired_1.fq.gz,B1_RMC16_Paired_1.fq.gz,B2_RMC21_Paired_1.fq.gz,B2_RMC22_Paired_1.fq.gz,B2_RMC23_Paired_1.fq.gz,B2_RMC24_Paired_1.fq.gz,B3_RMC31_Paired_1.fq.gz,B3_RMC32_Paired_1.fq.gz,B3_RMC33_Paired_1.fq.gz,B3_RMC34_Paired_1.fq.gz \
  B1_RMC13_Paired_2.fq.gz,B1_RMC14_Paired_2.fq.gz,B1_RMC15_Paired_2.fq.gz,B1_RMC16_Paired_2.fq.gz,B2_RMC21_Paired_2.fq.gz,B2_RMC22_Paired_2.fq.gz,B2_RMC23_Paired_2.fq.gz,B2_RMC24_Paired_2.fq.gz,B3_RMC31_Paired_2.fq.gz,B3_RMC32_Paired_2.fq.gz,B3_RMC33_Paired_2.fq.gz,B3_RMC34_Paired_2.fq.gz \
  /media/raid/raperez/transcriptomes151/data-rafaela/trinityOUT-RMC/Trinity.fasta RMCassembly_rsem_eval 110 &

Has anyone had the same problem before? I really don't know why it gets stuck on this error.
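
A generic first check, assuming the usual causes are a working directory that is not writable or an output prefix pointing at a directory that does not exist (the prefix below is the one from the report):

# can the current directory be written to?
touch .detonate_write_test && rm .detonate_write_test
# can the temporary folder itself be created (and removed again)?
mkdir RMCassembly_rsem_eval.temp && rmdir RMCassembly_rsem_eval.temp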

Error with rsem-eval-calculate-score

Good afternoon,

We are trying to run this command:

rsem-eval-calculate-score --paired-end --strand-specific \
  --transcript-length-parameters /data/gent/vo/001/gvo00183/vsc46245/Transcriptoma/MQ/post-assembly-quality/detonate/Trinity.txt \
  -p 10 /data/gent/vo/001/gvo00183/vsc46245/Transcriptoma/MQ/Clean/uncseqs_1.fq.gz /data/gent/vo/001/gvo00183/vsc46245/Transcriptoma/MQ/Clean/uncseqs_2.fq.gz \
  $VSC_DATA_VO/vsc46245/Transcriptoma/MQ/Montaje/Trinity/Trinity.Trinity.fasta Trinity_rsemscore 85

But in the last step of the process we get this error message:

rsem-synthesis-reference-transcripts Trinity_rsemscore.temp/Trinity_rsemscore 0 0 0 /data/gent/vo/001/gvo00183/vsc46245/Transcriptoma/MQ/Montaje/Trinity/Trinity.Trinity.fasta
Transcript Information File is generated!
Group File is generated!
Extracted Sequences File is generated!

rsem-preref Trinity_rsemscore.temp/Trinity_rsemscore.transcripts.fa 1 Trinity_rsemscore.temp/Trinity_rsemscore
Refs.makeRefs finished!
Refs.saveRefs finished!
Trinity_rsemscore.temp/Trinity_rsemscore.idx.fa is generated!
Trinity_rsemscore.temp/Trinity_rsemscore.n2g.idx.fa is generated!

bowtie-build -f Trinity_rsemscore.temp/Trinity_rsemscore.n2g.idx.fa Trinity_rsemscore.temp/Trinity_rsemscore
Settings:
Output files: "Trinity_rsemscore.temp/Trinity_rsemscore..ebwt"
Line rate: 6 (line is 64 bytes)
Lines per side: 1 (side is 64 bytes)
Offset rate: 5 (one in 32)
FTable chars: 10
Strings: unpacked
Max bucket size: default
Max bucket size, sqrt multiplier: default
Max bucket size, len divisor: 4
Difference-cover sample period: 1024
Endianness: little
Actual local endianness: little
Sanity checking: disabled
Assertions: disabled
Random seed: 0
Sizeofs: void
:8, int:4, long:8, size_t:8
Input files DNA, FASTA:
Trinity_rsemscore.temp/Trinity_rsemscore.n2g.idx.fa
Reading reference sizes
Time reading reference sizes: 00:00:01
Calculating joined length
Writing header
Reserving space for joined string
Joining reference sequences
Time to join reference sequences: 00:00:00
bmax according to bmaxDivN setting: 26039960
Using parameters --bmax 19529970 --dcv 1024
Doing ahead-of-time memory usage test
Passed! Constructing with these parameters: --bmax 19529970 --dcv 1024
Constructing suffix-array element generator
Building DifferenceCoverSample
Building sPrime
Building sPrimeOrder
V-Sorting samples
V-Sorting samples time: 00:00:02
Allocating rank array
Ranking v-sort output
Ranking v-sort output time: 00:00:01
Invoking Larsson-Sadakane on ranks
Invoking Larsson-Sadakane on ranks time: 00:00:00
Sanity-checking and returning
Building samples
Reserving space for 12 sample suffixes
Generating random suffixes
QSorting 12 sample offsets, eliminating duplicates
Multikey QSorting 12 samples
(Using difference cover)
Multikey QSorting samples time: 00:00:00
QSorting sample offsets, eliminating duplicates time: 00:00:00
Calculating bucket sizes
Splitting and merging
Splitting and merging time: 00:00:00
Avg bucket size: 1.488e+07 (target: 19529969)
Converting suffix-array elements to index image
Allocating ftab, absorbFtab
Entering Ebwt loop
Getting block 1 of 7
Reserving size (19529970) for bucket 1
Calculating Z arrays for bucket 1
Entering block accumulator loop for bucket 1:
bucket 1: 10%
bucket 1: 20%
bucket 1: 30%
bucket 1: 40%
bucket 1: 50%
bucket 1: 60%
bucket 1: 70%
bucket 1: 80%
bucket 1: 90%
bucket 1: 100%
Sorting block of length 14690167 for bucket 1
(Using difference cover)
Sorting block time: 00:00:15
Returning block of 14690168 for bucket 1
Getting block 2 of 7
Reserving size (19529970) for bucket 2
Calculating Z arrays for bucket 2
Entering block accumulator loop for bucket 2:
bucket 2: 10%
bucket 2: 20%
bucket 2: 30%
bucket 2: 40%
bucket 2: 50%
bucket 2: 60%
bucket 2: 70%
bucket 2: 80%
bucket 2: 90%
bucket 2: 100%
Sorting block of length 16271737 for bucket 2
(Using difference cover)
Sorting block time: 00:00:17
Returning block of 16271738 for bucket 2
Getting block 3 of 7
Reserving size (19529970) for bucket 3
Calculating Z arrays for bucket 3
Entering block accumulator loop for bucket 3:
bucket 3: 10%
bucket 3: 20%
bucket 3: 30%
bucket 3: 40%
bucket 3: 50%
bucket 3: 60%
bucket 3: 70%
bucket 3: 80%
bucket 3: 90%
bucket 3: 100%
Sorting block of length 17922227 for bucket 3
(Using difference cover)
Sorting block time: 00:00:18
Returning block of 17922228 for bucket 3
Getting block 4 of 7
Reserving size (19529970) for bucket 4
Calculating Z arrays for bucket 4
Entering block accumulator loop for bucket 4:
bucket 4: 10%
bucket 4: 20%
bucket 4: 30%
bucket 4: 40%
bucket 4: 50%
bucket 4: 60%
bucket 4: 70%
bucket 4: 80%
bucket 4: 90%
bucket 4: 100%
Sorting block of length 10911253 for bucket 4
(Using difference cover)
Sorting block time: 00:00:11
Returning block of 10911254 for bucket 4
Getting block 5 of 7
Reserving size (19529970) for bucket 5
Calculating Z arrays for bucket 5
Entering block accumulator loop for bucket 5:
bucket 5: 10%
bucket 5: 20%
bucket 5: 30%
bucket 5: 40%
bucket 5: 50%
bucket 5: 60%
bucket 5: 70%
bucket 5: 80%
bucket 5: 90%
bucket 5: 100%
Sorting block of length 9754624 for bucket 5
(Using difference cover)
Sorting block time: 00:00:10
Returning block of 9754625 for bucket 5
Getting block 6 of 7
Reserving size (19529970) for bucket 6
Calculating Z arrays for bucket 6
Entering block accumulator loop for bucket 6:
bucket 6: 10%
bucket 6: 20%
bucket 6: 30%
bucket 6: 40%
bucket 6: 50%
bucket 6: 60%
bucket 6: 70%
bucket 6: 80%
bucket 6: 90%
bucket 6: 100%
Sorting block of length 15619445 for bucket 6
(Using difference cover)
Sorting block time: 00:00:17
Returning block of 15619446 for bucket 6
Getting block 7 of 7
Reserving size (19529970) for bucket 7
Calculating Z arrays for bucket 7
Entering block accumulator loop for bucket 7:
bucket 7: 10%
bucket 7: 20%
bucket 7: 30%
bucket 7: 40%
bucket 7: 50%
bucket 7: 60%
bucket 7: 70%
bucket 7: 80%
bucket 7: 90%
bucket 7: 100%
Sorting block of length 18990381 for bucket 7
(Using difference cover)
Sorting block time: 00:00:20
Returning block of 18990382 for bucket 7
Exited Ebwt loop
fchr[A]: 0
fchr[C]: 30168995
fchr[G]: 51317065
fchr[T]: 70430913
fchr[$]: 104159840
Exiting Ebwt::buildToDisk()
Returning from initFromVector
Wrote 37356843 bytes to primary EBWT file: Trinity_rsemscore.temp/Trinity_rsemscore.1.ebwt
Wrote 13019988 bytes to secondary EBWT file: Trinity_rsemscore.temp/Trinity_rsemscore.2.ebwt
Re-opening _in1 and _in2 as input streams
Returning from Ebwt constructor
Headers:
len: 104159840
bwtLen: 104159841
sz: 26039960
bwtSz: 26039961
lineRate: 6
linesPerSide: 1
offRate: 5
offMask: 0xffffffe0
isaRate: -1
isaMask: 0xffffffff
ftabChars: 10
eftabLen: 20
eftabSz: 80
ftabLen: 1048577
ftabSz: 4194308
offsLen: 3254996
offsSz: 13019984
isaLen: 0
isaSz: 0
lineSz: 64
sideSz: 64
sideBwtSz: 56
sideBwtLen: 224
numSidePairs: 232500
numSides: 465000
numLines: 465000
ebwtTotLen: 29760000
ebwtTotSz: 29760000
reverse: 0
Total time for call to driver() for forward index: 00:02:08
Reading reference sizes
Time reading reference sizes: 00:00:01
Calculating joined length
Writing header
Reserving space for joined string
Joining reference sequences
Time to join reference sequences: 00:00:00
bmax according to bmaxDivN setting: 26039960
Using parameters --bmax 19529970 --dcv 1024
Doing ahead-of-time memory usage test
Passed! Constructing with these parameters: --bmax 19529970 --dcv 1024
Constructing suffix-array element generator
Building DifferenceCoverSample
Building sPrime
Building sPrimeOrder
V-Sorting samples
V-Sorting samples time: 00:00:03
Allocating rank array
Ranking v-sort output
Ranking v-sort output time: 00:00:00
Invoking Larsson-Sadakane on ranks
Invoking Larsson-Sadakane on ranks time: 00:00:01
Sanity-checking and returning
Building samples
Reserving space for 12 sample suffixes
Generating random suffixes
QSorting 12 sample offsets, eliminating duplicates
Multikey QSorting 12 samples
(Using difference cover)
Multikey QSorting samples time: 00:00:00
QSorting sample offsets, eliminating duplicates time: 00:00:00
Calculating bucket sizes
Splitting and merging
Splitting and merging time: 00:00:00
Split 1, merged 7; iterating...
Splitting and merging
Splitting and merging time: 00:00:00
Split 1, merged 0; iterating...
Splitting and merging
Splitting and merging time: 00:00:00
Split 1, merged 1; iterating...
Splitting and merging
Splitting and merging time: 00:00:00
Split 1, merged 1; iterating...
Splitting and merging
Splitting and merging time: 00:00:00
Avg bucket size: 1.302e+07 (target: 19529969)
Converting suffix-array elements to index image
Allocating ftab, absorbFtab
Entering Ebwt loop
Getting block 1 of 8
Reserving size (19529970) for bucket 1
Calculating Z arrays for bucket 1
Entering block accumulator loop for bucket 1:
bucket 1: 10%
bucket 1: 20%
bucket 1: 30%
bucket 1: 40%
bucket 1: 50%
bucket 1: 60%
bucket 1: 70%
bucket 1: 80%
bucket 1: 90%
bucket 1: 100%
Sorting block of length 15391148 for bucket 1
(Using difference cover)
Sorting block time: 00:00:17
Returning block of 15391149 for bucket 1
Getting block 2 of 8
Reserving size (19529970) for bucket 2
Calculating Z arrays for bucket 2
Entering block accumulator loop for bucket 2:
bucket 2: 10%
bucket 2: 20%
bucket 2: 30%
bucket 2: 40%
bucket 2: 50%
bucket 2: 60%
bucket 2: 70%
bucket 2: 80%
bucket 2: 90%
bucket 2: 100%
Sorting block of length 12646103 for bucket 2
(Using difference cover)
Sorting block time: 00:00:13
Returning block of 12646104 for bucket 2
Getting block 3 of 8
Reserving size (19529970) for bucket 3
Calculating Z arrays for bucket 3
Entering block accumulator loop for bucket 3:
bucket 3: 10%
bucket 3: 20%
bucket 3: 30%
bucket 3: 40%
bucket 3: 50%
bucket 3: 60%
bucket 3: 70%
bucket 3: 80%
bucket 3: 90%
bucket 3: 100%
Sorting block of length 14518306 for bucket 3
(Using difference cover)
Sorting block time: 00:00:15
Returning block of 14518307 for bucket 3
Getting block 4 of 8
Reserving size (19529970) for bucket 4
Calculating Z arrays for bucket 4
Entering block accumulator loop for bucket 4:
bucket 4: 10%
bucket 4: 20%
bucket 4: 30%
bucket 4: 40%
bucket 4: 50%
bucket 4: 60%
bucket 4: 70%
bucket 4: 80%
bucket 4: 90%
bucket 4: 100%
Sorting block of length 8971375 for bucket 4
(Using difference cover)
Sorting block time: 00:00:09
Returning block of 8971376 for bucket 4
Getting block 5 of 8
Reserving size (19529970) for bucket 5
Calculating Z arrays for bucket 5
Entering block accumulator loop for bucket 5:
bucket 5: 10%
bucket 5: 20%
bucket 5: 30%
bucket 5: 40%
bucket 5: 50%
bucket 5: 60%
bucket 5: 70%
bucket 5: 80%
bucket 5: 90%
bucket 5: 100%
Sorting block of length 12997022 for bucket 5
(Using difference cover)
Sorting block time: 00:00:14
Returning block of 12997023 for bucket 5
Getting block 6 of 8
Reserving size (19529970) for bucket 6
Calculating Z arrays for bucket 6
Entering block accumulator loop for bucket 6:
bucket 6: 10%
bucket 6: 20%
bucket 6: 30%
bucket 6: 40%
bucket 6: 50%
bucket 6: 60%
bucket 6: 70%
bucket 6: 80%
bucket 6: 90%
bucket 6: 100%
Sorting block of length 7346814 for bucket 6
(Using difference cover)
Sorting block time: 00:00:08
Returning block of 7346815 for bucket 6
Getting block 7 of 8
Reserving size (19529970) for bucket 7
Calculating Z arrays for bucket 7
Entering block accumulator loop for bucket 7:
bucket 7: 10%
bucket 7: 20%
bucket 7: 30%
bucket 7: 40%
bucket 7: 50%
bucket 7: 60%
bucket 7: 70%
bucket 7: 80%
bucket 7: 90%
bucket 7: 100%
Sorting block of length 14711036 for bucket 7
(Using difference cover)
Sorting block time: 00:00:16
Returning block of 14711037 for bucket 7
Getting block 8 of 8
Reserving size (19529970) for bucket 8
Calculating Z arrays for bucket 8
Entering block accumulator loop for bucket 8:
bucket 8: 10%
bucket 8: 20%
bucket 8: 30%
bucket 8: 40%
bucket 8: 50%
bucket 8: 60%
bucket 8: 70%
bucket 8: 80%
bucket 8: 90%
bucket 8: 100%
Sorting block of length 17578029 for bucket 8
(Using difference cover)
Sorting block time: 00:00:19
Returning block of 17578030 for bucket 8
Exited Ebwt loop
fchr[A]: 0
fchr[C]: 30168995
fchr[G]: 51317065
fchr[T]: 70430913
fchr[$]: 104159840
Exiting Ebwt::buildToDisk()
Returning from initFromVector
Wrote 37356843 bytes to primary EBWT file: Trinity_rsemscore.temp/Trinity_rsemscore.rev.1.ebwt
Wrote 13019988 bytes to secondary EBWT file: Trinity_rsemscore.temp/Trinity_rsemscore.rev.2.ebwt
Re-opening _in1 and _in2 as input streams
Returning from Ebwt constructor
Headers:
len: 104159840
bwtLen: 104159841
sz: 26039960
bwtSz: 26039961
lineRate: 6
linesPerSide: 1
offRate: 5
offMask: 0xffffffe0
isaRate: -1
isaMask: 0xffffffff
ftabChars: 10
eftabLen: 20
eftabSz: 80
ftabLen: 1048577
ftabSz: 4194308
offsLen: 3254996
offsSz: 13019984
isaLen: 0
isaSz: 0
lineSz: 64
sideSz: 64
sideBwtSz: 56
sideBwtLen: 224
numSidePairs: 232500
numSides: 465000
numLines: 465000
ebwtTotLen: 29760000
ebwtTotSz: 29760000
reverse: 0
Total time for backward call to driver() for mirror index: 00:02:22

bowtie -q --phred33-quals -n 2 -e 99999999 -l 25 -I 1 -X 1000 --norc -p 10 -a -m 200 -S Trinity_rsemscore.temp/Trinity_rsemscore -1 /data/gent/vo/001/gvo00183/vsc46245/Transcriptoma/MQ/Clean/uncseqs_1.fq.gz -2 /data/gent/vo/001/gvo00183/vsc46245/Transcriptoma/MQ/Clean/uncseqs_2.fq.gz | samtools view -S -b -o Trinity_rsemscore.temp/Trinity_rsemscore.bam -

rsem-parse-alignments Trinity_rsemscore.temp/Trinity_rsemscore Trinity_rsemscore.temp/Trinity_rsemscore Trinity_rsemscore.stat/Trinity_rsemscore b Trinity_rsemscore.temp/Trinity_rsemscore.bam -t 3 -tag XM
Parsed 1000000 entries
Parsed 2000000 entries
Parsed 3000000 entries
Parsed 4000000 entries
Parsed 5000000 entries
Parsed 6000000 entries
Parsed 7000000 entries
Parsed 8000000 entries
Parsed 9000000 entries
Parsed 10000000 entries
Parsed 11000000 entries
Parsed 12000000 entries
Parsed 13000000 entries
Parsed 14000000 entries
Parsed 15000000 entries
Parsed 16000000 entries
Parsed 17000000 entries
Parsed 18000000 entries
Parsed 19000000 entries
Parsed 20000000 entries
Parsed 21000000 entries
Parsed 22000000 entries
Parsed 23000000 entries
Parsed 24000000 entries
Parsed 25000000 entries
Parsed 26000000 entries
Parsed 27000000 entries
Parsed 28000000 entries
Parsed 29000000 entries
Parsed 30000000 entries
Parsed 31000000 entries
Parsed 32000000 entries
Parsed 33000000 entries
Parsed 34000000 entries
Parsed 35000000 entries
Parsed 36000000 entries
Parsed 37000000 entries
Parsed 38000000 entries
Parsed 39000000 entries
Parsed 40000000 entries
Parsed 41000000 entries
Parsed 42000000 entries
Parsed 43000000 entries
Parsed 44000000 entries
Parsed 45000000 entries
Parsed 46000000 entries
Parsed 47000000 entries
Parsed 48000000 entries
Parsed 49000000 entries
Parsed 50000000 entries
Parsed 51000000 entries
Parsed 52000000 entries
Parsed 53000000 entries
Parsed 54000000 entries
Parsed 55000000 entries
Parsed 56000000 entries
Parsed 57000000 entries
Parsed 58000000 entries
Parsed 59000000 entries
Parsed 60000000 entries
Parsed 61000000 entries
Parsed 62000000 entries
Parsed 63000000 entries
Parsed 64000000 entries
Parsed 65000000 entries
Parsed 66000000 entries
Parsed 67000000 entries
Parsed 68000000 entries
Parsed 69000000 entries
Parsed 70000000 entries
Parsed 71000000 entries
Parsed 72000000 entries
Parsed 73000000 entries
Parsed 74000000 entries
Parsed 75000000 entries
Parsed 76000000 entries
Parsed 77000000 entries
Parsed 78000000 entries
Parsed 79000000 entries
Parsed 80000000 entries
Parsed 81000000 entries
Parsed 82000000 entries
Parsed 83000000 entries
Parsed 84000000 entries
Parsed 85000000 entries
Parsed 86000000 entries
Parsed 87000000 entries
Parsed 88000000 entries
Parsed 89000000 entries
Parsed 90000000 entries
Parsed 91000000 entries
Parsed 92000000 entries
Parsed 93000000 entries
Parsed 94000000 entries
Parsed 95000000 entries
Parsed 96000000 entries
Parsed 97000000 entries
Parsed 98000000 entries
Parsed 99000000 entries
Parsed 100000000 entries
Parsed 101000000 entries
Parsed 102000000 entries
Parsed 103000000 entries
Parsed 104000000 entries
Parsed 105000000 entries
Parsed 106000000 entries
Parsed 107000000 entries
Parsed 108000000 entries
Parsed 109000000 entries
Parsed 110000000 entries
Parsed 111000000 entries
Parsed 112000000 entries
Parsed 113000000 entries
Parsed 114000000 entries
Parsed 115000000 entries
Parsed 116000000 entries
Parsed 117000000 entries
Parsed 118000000 entries
Parsed 119000000 entries
Parsed 120000000 entries
Parsed 121000000 entries
Parsed 122000000 entries
Parsed 123000000 entries
Parsed 124000000 entries
Parsed 125000000 entries
Parsed 126000000 entries
Parsed 127000000 entries
Parsed 128000000 entries
Parsed 129000000 entries
Parsed 130000000 entries
Parsed 131000000 entries
Parsed 132000000 entries
Parsed 133000000 entries
Parsed 134000000 entries
Parsed 135000000 entries
Parsed 136000000 entries
Parsed 137000000 entries
Parsed 138000000 entries
Parsed 139000000 entries
Parsed 140000000 entries
Parsed 141000000 entries
Parsed 142000000 entries
Parsed 143000000 entries
Parsed 144000000 entries
Parsed 145000000 entries
Parsed 146000000 entries
Parsed 147000000 entries
Parsed 148000000 entries
Parsed 149000000 entries
Parsed 150000000 entries
Parsed 151000000 entries
Parsed 152000000 entries
Parsed 153000000 entries
Parsed 154000000 entries
Parsed 155000000 entries
Parsed 156000000 entries
Parsed 157000000 entries
Parsed 158000000 entries
Parsed 159000000 entries
Parsed 160000000 entries
Parsed 161000000 entries
Parsed 162000000 entries
Parsed 163000000 entries
Parsed 164000000 entries
Parsed 165000000 entries
Parsed 166000000 entries
Parsed 167000000 entries
Parsed 168000000 entries
Parsed 169000000 entries
Parsed 170000000 entries
Parsed 171000000 entries
Parsed 172000000 entries
Parsed 173000000 entries
Parsed 174000000 entries
Parsed 175000000 entries
Parsed 176000000 entries
Parsed 177000000 entries
Parsed 178000000 entries
Parsed 179000000 entries
Parsed 180000000 entries
Parsed 181000000 entries
Parsed 182000000 entries
Parsed 183000000 entries
Parsed 184000000 entries
Parsed 185000000 entries
Parsed 186000000 entries
Parsed 187000000 entries
Parsed 188000000 entries
Parsed 189000000 entries
Parsed 190000000 entries
Parsed 191000000 entries
Parsed 192000000 entries
Parsed 193000000 entries
Parsed 194000000 entries
Parsed 195000000 entries
Parsed 196000000 entries
Parsed 197000000 entries
Parsed 198000000 entries
Parsed 199000000 entries
Parsed 200000000 entries
Parsed 201000000 entries
Parsed 202000000 entries
Parsed 203000000 entries
Parsed 204000000 entries
Parsed 205000000 entries
Parsed 206000000 entries
Parsed 207000000 entries
Parsed 208000000 entries
Parsed 209000000 entries
Parsed 210000000 entries
Parsed 211000000 entries
Parsed 212000000 entries
Parsed 213000000 entries
Parsed 214000000 entries
Parsed 215000000 entries
Parsed 216000000 entries
Parsed 217000000 entries
Parsed 218000000 entries
Parsed 219000000 entries
Parsed 220000000 entries
Parsed 221000000 entries
Parsed 222000000 entries
Parsed 223000000 entries
Parsed 224000000 entries
Parsed 225000000 entries
Parsed 226000000 entries
Parsed 227000000 entries
Parsed 228000000 entries
Parsed 229000000 entries
Parsed 230000000 entries
Parsed 231000000 entries
Parsed 232000000 entries
Parsed 233000000 entries
Parsed 234000000 entries
Parsed 235000000 entries
Parsed 236000000 entries
Parsed 237000000 entries
Parsed 238000000 entries
Parsed 239000000 entries
Parsed 240000000 entries
Parsed 241000000 entries
Parsed 242000000 entries
Parsed 243000000 entries
Parsed 244000000 entries
Parsed 245000000 entries
Parsed 246000000 entries
Parsed 247000000 entries
Parsed 248000000 entries
Parsed 249000000 entries
Parsed 250000000 entries
Parsed 251000000 entries
Parsed 252000000 entries
Parsed 253000000 entries
Parsed 254000000 entries
Done!

rsem-build-read-index 32 1 0 Trinity_rsemscore.temp/Trinity_rsemscore_alignable_1.fq Trinity_rsemscore.temp/Trinity_rsemscore_alignable_2.fq
FIN 1000000
FIN 2000000
FIN 3000000
FIN 4000000
FIN 5000000
FIN 6000000
FIN 7000000
FIN 8000000
FIN 9000000
FIN 10000000
FIN 11000000
FIN 12000000
FIN 13000000
FIN 14000000
FIN 15000000
FIN 16000000
FIN 17000000
FIN 18000000
FIN 19000000
FIN 20000000
FIN 21000000
FIN 22000000
FIN 23000000
FIN 24000000
FIN 25000000
FIN 26000000
FIN 27000000
FIN 28000000
FIN 29000000
FIN 30000000
FIN 31000000
FIN 32000000
FIN 33000000
FIN 34000000
FIN 35000000
FIN 36000000
FIN 37000000
FIN 38000000
FIN 39000000
FIN 40000000
FIN 41000000
FIN 42000000
FIN 43000000
FIN 44000000
FIN 45000000
FIN 46000000
FIN 47000000
FIN 48000000
FIN 49000000
FIN 50000000
FIN 51000000
FIN 52000000
FIN 53000000
FIN 54000000
FIN 55000000
FIN 56000000
FIN 57000000
FIN 58000000
FIN 59000000
FIN 60000000
FIN 61000000
FIN 62000000
FIN 63000000
FIN 64000000
FIN 65000000
FIN 66000000
FIN 67000000
FIN 68000000
FIN 69000000
FIN 70000000
FIN 71000000
FIN 72000000
FIN 73000000
FIN 74000000
FIN 75000000
FIN 76000000
FIN 77000000
FIN 78000000
FIN 79000000
FIN 80000000
FIN 81000000
FIN 82000000
Build Index Trinity_rsemscore.temp/Trinity_rsemscore_alignable_1.fq is Done!
FIN 1000000
FIN 2000000
FIN 3000000
FIN 4000000
FIN 5000000
FIN 6000000
FIN 7000000
FIN 8000000
FIN 9000000
FIN 10000000
FIN 11000000
FIN 12000000
FIN 13000000
FIN 14000000
FIN 15000000
FIN 16000000
FIN 17000000
FIN 18000000
FIN 19000000
FIN 20000000
FIN 21000000
FIN 22000000
FIN 23000000
FIN 24000000
FIN 25000000
FIN 26000000
FIN 27000000
FIN 28000000
FIN 29000000
FIN 30000000
FIN 31000000
FIN 32000000
FIN 33000000
FIN 34000000
FIN 35000000
FIN 36000000
FIN 37000000
FIN 38000000
FIN 39000000
FIN 40000000
FIN 41000000
FIN 42000000
FIN 43000000
FIN 44000000
FIN 45000000
FIN 46000000
FIN 47000000
FIN 48000000
FIN 49000000
FIN 50000000
FIN 51000000
FIN 52000000
FIN 53000000
FIN 54000000
FIN 55000000
FIN 56000000
FIN 57000000
FIN 58000000
FIN 59000000
FIN 60000000
FIN 61000000
FIN 62000000
FIN 63000000
FIN 64000000
FIN 65000000
FIN 66000000
FIN 67000000
FIN 68000000
FIN 69000000
FIN 70000000
FIN 71000000
FIN 72000000
FIN 73000000
FIN 74000000
FIN 75000000
FIN 76000000
FIN 77000000
FIN 78000000
FIN 79000000
FIN 80000000
FIN 81000000
FIN 82000000
Build Index Trinity_rsemscore.temp/Trinity_rsemscore_alignable_2.fq is Done!

rsem-eval-run-em Trinity_rsemscore.temp/Trinity_rsemscore 3 Trinity_rsemscore Trinity_rsemscore.temp/Trinity_rsemscore Trinity_rsemscore.stat/Trinity_rsemscore 1.22356119436796 0.000986510309118638 85 0 -p 10
Refs.loadRefs finished!
Thread 0 : N = 1961, NHit = 24779910
Thread 1 : N = 1928, NHit = 24798784
"rsem-eval-run-em Trinity_rsemscore.temp/Trinity_rsemscore 3 Trinity_rsemscore Trinity_rsemscore.temp/Trinity_rsemscore Trinity_rsemscore.stat/Trinity_rsemscore 1.22356119436796 0.000986510309118638 85 0 -p 10" failed! Plase check if you provide correct parameters/options for the pipeline!

Thanks for the support.
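
A generic first check, offered as an assumption rather than a diagnosis specific to this run: rsem-eval-run-em exiting without a more specific message is sometimes an out-of-memory kill, which the kernel log and the node's free memory can help confirm:

dmesg | grep -iE 'out of memory|killed process'   # may require root on some systems
free -g                                           # total and available memory in GiB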
