

swarm

A robust and fast clustering method for amplicon-based studies. This is an experimental python version of swarm. The objective of this version is to identify the fastest way to perform the clustering when the number of differences d between amplicons is set to 1. That parameter defaults to 1 in the official swarm version, as it yields high-resolution clustering results, and as datasets grow in size, using d = 1 is arguably the best choice.

With current clustering algorithms (including swarm), computational complexity is quadratic: multiplying the size of the dataset by 10 multiplies the computation time by 100.

In the approach taken by swarm, fixing d = 1 allows a radical change in the algorithm design (while of course remaining exact). The new algorithm, implemented in the python script presented here, has a remarkable property: its computational complexity is linear. Making the dataset 10 times larger only increases the computation time by a factor of 10, a major change in scalability.
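To illustrate the core idea, here is a minimal sketch in python; this is not the actual code of swarm.py, and the function name microvariants and the toy amplicons table are made up for the example. Instead of comparing all pairs of amplicons, generate every sequence lying at exactly one difference from a given amplicon (substitutions, insertions, deletions) and look it up in a hash table of the dataset:

    def microvariants(sequence, alphabet="ACGT"):
        """Yield all sequences at exactly one difference from sequence."""
        length = len(sequence)
        for i in range(length + 1):
            # insertions: 4 * (length + 1) variants
            for nucleotide in alphabet:
                yield sequence[:i] + nucleotide + sequence[i:]
            if i < length:
                # deletions: length variants
                yield sequence[:i] + sequence[i + 1:]
                # substitutions: 3 * length variants
                for nucleotide in alphabet:
                    if nucleotide != sequence[i]:
                        yield sequence[:i] + nucleotide + sequence[i + 1:]

    # toy dataset: each unique amplicon mapped to its abundance
    amplicons = {"ACGT": 100, "ACGA": 5, "CGT": 2}
    neighbours = [v for v in microvariants("ACGT") if v in amplicons]
    print(neighbours)  # ['CGT', 'ACGA']

For a sequence of length L over a 4-letter alphabet there are exactly 3L substitutions, L deletions and 4(L + 1) insertions, i.e. 8L + 4 micro-variants. Since each hash lookup takes constant time, the work per amplicon is proportional to its length rather than to the number of amplicons, hence the linear overall behaviour.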

To give some perspective, the single-threaded python script is already as fast as the multi-threaded "vanilla" C++ implementation of swarm on mid-size amplicon datasets (approximately 1 million unique amplicons). The python script is more than 10 times faster on large amplicon datasets (32 million unique amplicons). On an extremely large dataset (154 million unique amplicons), the script takes only 6 days to run, where the vanilla C++ implementation of swarm would take several months on a 16-core computer.

Early tests with a C version of the key function of the new algorithm show a 10-fold speed-up. We can confidently expect that a careful C/C++ re-implementation of the algorithm will bring another very significant speed-up.

In conclusion, with this new swarm algorithm, fast, exact and accurate partitioning of extremely large datasets, intractable with current clustering methods, becomes an easy computational task.

Warning

Tested with python 2.7.3

Quick start

To get basic usage and help, use the following command:

python swarm.py -h
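The exact options may differ (check the help message above), but assuming the script reads a dereplicated fasta file and writes one cluster of amplicons per line, as the official swarm does, a typical run could look like this (amplicons.fasta and amplicons.swarms are placeholder file names):

python swarm.py amplicons.fasta > amplicons.swarms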


Issues

TMP_FASTQ, TMP_FASTQ2

Hey,
I've noticed an inconsistency in Fred's metabarcoding pipeline:

    # Discard sequences containing Ns, add expected error rates
    "${VSEARCH}" \
        --quiet \
        --fastq_filter "${TMP_FASTQ}" \
        --fastq_maxns 0 \
        --relabel_sha1 \
        --eeout \
        --fastqout "${TMP_FASTQ2}" 2>> "${LOG}"

    # Discard sequences containing Ns, convert to fasta
    "${VSEARCH}" \
        --quiet \
        --fastq_filter "${TMP_FASTQ}" \
        --fastq_maxns 0 \
        --fastaout "${TMP_FASTA}" 2>> "${LOG}"

In the second command, shouldn't we use --fastq_filter "${TMP_FASTQ2}" instead?

cheers
Damian

Take care with samples with no matching tags & primers

Dear Frederic (& others)

I just realized a potential pitfall with this pipeline (and similar approaches).
Like me, you may include empty/blank samples (or have real samples with no reads matching tags and primer sequences). When processing such a sample, Cutadapt will eventually have nothing to work on (so to speak), resulting in an empty file after removal of tags and primers. However vsearch will just work on the (fasta) tmp-file from the previous sample, resulting in the final dereplicated file (S00x.fas) for the current empty/blank/negative being identical to the previous "real" sample. This is of course easy to spot in a project with only a few samples, but may be less apparent with large datasets.

You will need to identify samples that were devoid of reads matching the actual tags for that sample, and remove them before downstream processing. They can be identified, for example, by searching for the phrase "Unable to read from file" in the log files:

grep -c "Unable to read from file" S[0-9][0-9][0-9]*log

Regards
Tobias
