
mbgc's People

Contributors

kowallus

Stargazers

(6 stargazers)

Watchers

(3 watchers)

Forkers

uarod

mbgc's Issues

OS X Bioconda package requires unnecessary llvm dependencies

One little issue I've noticed with the OS X Bioconda package: it also installs

+ libcxx        16.0.6  hd57cbcb_0  conda-forge     Cached
+ llvm-openmp   17.0.6  hb6ac08f_0  conda-forge     Cached

I believe these two should be build-time dependencies only, not runtime dependencies of the final Bioconda package.
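
If useful for verifying a fix, a dry-run install shows exactly which packages a fresh environment would pull in (a minimal check; the environment name is arbitrary):

$ conda create -n mbgc-test -c conda-forge -c bioconda mbgc --dry-run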

Support for simplitigs / spectrum-preserving string sets

Hello, is there currently any support in MBGC for simplitigs (spectrum-preserving string sets)?

The main conceptual difference in terms of file formats is that these are sets of strings, so sequence names can be omitted (there are typically many sequences, so storing headers is likely to be expensive).

I've tried to experiment with the MBGC capabilities in this regard, e.g. with the files from https://zenodo.org/records/5555253. But unlike with genome assemblies, MBGC on simplitigs seems to perform worse than xz -9 -T1.

Multiple patterns, patterns file [suggestion]

Hi,

Is it possible with the current version to provide multiple patterns during the decompression step? In some cases, I need to decompress hundreds of strains from the same large archive. I can't practically run the extractions individually in parallel with the current method, since this leads to loading the archive into memory for each process. If this is not an option, I would like to make another suggestion: allow a file containing a list of patterns to be passed on the decompression command line.
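
A purely hypothetical sketch of what such an interface could look like (the --patterns-file flag does not exist in mbgc; it only illustrates the suggestion):

$ cat patterns.txt
SAMEA1026141
SAMEA1026142
$ mbgc d --patterns-file patterns.txt archive.mbgc out_dir/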

Thanks!

MBGC on OS X

Hello, is it currently possible to run MBGC on OS X? I tested it some time ago when it clearly wasn't possible to compile the code, but now I see there's a new release.

While the method works really nicely on Linux, the lack of OS X support has prevented us from embedding it directly into pipelines for phylogenetic compression, especially downstream of https://github.com/karel-brinda/mof-compress.

Memory consumption info

It would be great if there could be even rough estimates of the memory consumption of the compressor and also the decompressor.

E.g., roughly as a function of the number of k-mers.

Currently, the only way to embed mbgc into any pipeline would be to first create some calibration files for estimating requirements and then use these, e.g., for Snakemake resources.
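
As a rough calibration sketch, peak memory can be measured externally with GNU time (on Linux; on OS X, /usr/bin/time -l reports the maximum resident set size instead). The output path for decompression is an assumption here:

$ /usr/bin/time -v mbgc c -i genomes.fa genomes.mbgc 2>&1 | grep 'Maximum resident'
$ /usr/bin/time -v mbgc d genomes.mbgc out 2>&1 | grep 'Maximum resident'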

See, e.g., the xz documentation for how memory consumption is documented there.

Also, even if the consumption were difficult to estimate based on something like the "number of 31-mers", it should still be easy to estimate the expected memory for decompression (the compressor should have all the data for that).

See, e.g., what type of info I can get from xz:

$ xz -l --verbose --verbose streptococcus_suis__01.tar.xz
streptococcus_suis__01.tar.xz (1/1)
  Streams:           1
  Blocks:            1
  Compressed size:   53.2 MiB (55,826,504 B)
  Uncompressed size: 3,393.4 MiB (3,558,195,200 B)
  Ratio:             0.016
  Check:             CRC64
  Stream Padding:    0 B
  Streams:
    Stream    Blocks      CompOffset    UncompOffset        CompSize      UncompSize  Ratio  Check      Padding
         1         1               0               0      55,826,504   3,558,195,200  0.016  CRC64            0
  Blocks:
    Stream     Block      CompOffset    UncompOffset       TotalSize      UncompSize  Ratio  Check      CheckVal          Header  Flags        CompSize    MemUsage  Filters
         1         1              12               0      55,826,464   3,558,195,200  0.016  CRC64      eafbd5f868f722ae      12  --         55,826,444      65 MiB  --lzma2=dict=64MiB
  Memory needed:     65 MiB
  Sizes in headers:  No
  Minimum XZ Utils version: 5.0.0

Gzip compression for decompressed files

Hi, I'd like to first say I'm really impressed by this tool, thanks. Do you think it would be easily possible to add a gzip compressor to the decompression step? That would help when a user decompresses only a subset of the archived genomes for a specific analysis.
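
As a workaround in the meantime, decompression to stdout can be piped through gzip; this sketch assumes a pattern-filter option (written here as -f; check mbgc --help for the exact flag):

$ mbgc d -f SAMEA1026141 archive.mbgc - | gzip > subset.fa.gz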

Thanks!

Big differences in performance on the same Apple M1 computer across different OS X builds

This is just a quick remark regarding what I've been observing.

If I compare, on an OS X MacBook M1 Pro, the decompression times of the build from Bioconda (installed now) and my own (as discussed in #3), there are huge differences in the required times, and the Bioconda x86 build is, contrary to all my expectations, much faster than the arm build I made.

The output files were identical.

It might be helpful for mbgc -v to print all the info about SIMD, the target platform, OpenMP, etc., to make such issues easier to debug.

Bioconda build

$ which mbgc
/Users/karel/miniconda/envs/mbgc/bin/mbgc

$ time mbgc d spn01.mbgc - > /dev/null 

real	0m14.912s
user	0m21.819s
sys	0m44.067s

My ARM build

$ which mbgc
/Users/karel/bin2/mbgc

$ time mbgc d spn01.mbgc - > /dev/null 

real	0m21.388s
user	2m25.816s
sys	0m1.580s

Both of them were, at each time, using roughly the same resources according to htop (screenshot omitted).

Feature request: repacking streamed `.tar` files (keeping the same file order)

Currently, it's challenging to repack files from other compressors. It would be extremely useful if MBGC were able to read, e.g., a TAR file with FASTA files and create its own archive in exactly the same order.

So, e.g., when I phylogenetically compress something with MiniPhy using a standard and widely tested compressor such as xz (with checksums and all features like that), it would be extremely helpful to have a simple way to create an MBGC archive out of it (a workaround sketch follows).
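
A manual workaround sketch, assuming mbgc's default compression mode accepts a text file listing FASTA paths (check mbgc --help); tar -tf lists files in the archive's own order:

$ mkdir unpacked && tar -xf genomes.tar -C unpacked
$ tar -tf genomes.tar | sed 's|^|unpacked/|' > list.txt
$ mbgc c list.txt genomes.mbgc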

Process substitutions for input data yield errors

Minimal example:

$ mbgc c -i <(printf '>seq\nGGGGGGGGGGGGGGGGGGAAAAAAAAGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG\n') a.mbgc
Switching to single file compression mode.
Problem reading from file: /dev/fd/63 (read_bytes: 0 < size: 18446744073709551615; ferror code: 1; feof code: 0)
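
Note that 18446744073709551615 is (size_t)-1, which suggests the file-size query fails on non-seekable inputs such as process substitutions. A temp-file workaround sketch:

$ printf '>seq\nGGGGGGGGGGGGGGGGGGAAAAAAAAGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG\n' > tmp.fa
$ mbgc c -i tmp.fa a.mbgc
$ rm tmp.fa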

Does the number of threads impact compression ratio?

Hello, would it be possible to add a quick note to the documentation about how the number of threads impacts the resulting compression ratio?

For instance, for bacterial data ordered phylogenetically, the number of threads has a major impact with xz (going from 1 thread to several leads to several-times-larger archives). I'd be extremely interested in whether the same happens with MBGC.

It would also be extremely helpful to know which parameters should be used to achieve the maximum possible compression. Is it just mbgc c -m3 -i file.fa file.mbgc, or should additional parameters be added?
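
In the meantime, a quick sweep like the following should answer the ratio question empirically (assuming -t sets the thread count; check mbgc --help):

$ for t in 1 2 4 8; do mbgc c -m3 -t "$t" -i file.fa "file.t$t.mbgc"; done
$ ls -l file.t*.mbgc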

Streaming issue: even if only 1 line is requested, mbgc seems to be decompressing everything

Example:

$ time mbgc d spn01.mbgc - | head -n1
>SAMEA1026141.contig00001 len=121641 cov=24.9 corr=3 origname=NODE_1_length_121641_cov_24.886229_pilon sw=shovill-spades/1.0.4 date=20181221

real	0m20.991s
user	2m25.819s
sys	0m1.609s

Probably because it buffers everything somewhere and then prints it all at once.

This pretty much complicates any quick testing.

Note that, e.g., xzcat works fine with head (see the comparison below).
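
For comparison (using the archive from the earlier xz example), a streaming decompressor exits almost immediately, because head's exit sends SIGPIPE after the first output block:

$ time xzcat streptococcus_suis__01.tar.xz | head -c 200 > /dev/null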
