nf-core / metatdenovo

Assembly and annotation of metatranscriptomic or metagenomic data from prokaryotes, eukaryotes and viruses.

Home Page: https://nf-co.re/metatdenovo

License: MIT License

Topics: nf-core, nextflow, pipeline, workflow, eukaryotes, metagenomics, metatranscriptomics, prokaryotes, viruses

metatdenovo's Introduction

nf-core/metatdenovo

[![GitHub Actions CI Status](https://github.com/nf-core/metatdenovo/workflows/nf-core%20CI/badge.svg)](https://github.com/nf-core/metatdenovo/actions?query=workflow%3A%22nf-core+CI%22) [![GitHub Actions Linting Status](https://github.com/nf-core/metatdenovo/workflows/nf-core%20linting/badge.svg)](https://github.com/nf-core/metatdenovo/actions?query=workflow%3A%22nf-core+linting%22)[![AWS CI](https://img.shields.io/badge/CI%20tests-full%20size-FF9900?labelColor=000000&logo=Amazon%20AWS)](https://nf-co.re/metatdenovo/results)[![Cite with Zenodo](http://img.shields.io/badge/DOI-10.5281/zenodo.XXXXXXX-1073c8?labelColor=000000)](https://doi.org/10.5281/zenodo.XXXXXXX)

Nextflow · run with conda · run with docker · run with singularity · Launch on Nextflow Tower

Get help on Slack · Follow on Twitter · Follow on Mastodon · Watch on YouTube

Introduction

nf-core/metatdenovo is a bioinformatics best-practice analysis pipeline for assembly and annotation of metatranscriptomic or metagenomic data, both prokaryotic and eukaryotic.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Pipeline summary

nf-core/metatdenovo metro map

  1. Read QC (FastQC)
  2. Present QC for raw reads (MultiQC)
  3. Quality trimming and adapter removal for raw reads (Trim Galore!)
  4. Optional: filter sequences with BBduk
  5. Optional: normalize sequencing depth with BBnorm
  6. Merge trimmed, paired-end reads (Seqtk)
  7. Choice of de novo assembly programs:
    1. rnaSPAdes, suggested for eukaryote de novo assembly
    2. Megahit, suggested for prokaryote de novo assembly
  8. Choice of ORF caller:
    1. TransDecoder, suggested for eukaryotes
    2. Prokka, suggested for prokaryotes
    3. Prodigal, suggested for prokaryotes
  9. Quantification of genes identified in assemblies:
    1. Generate an index of the assembly (BBmap index)
    2. Map cleaned reads to the assembly for quantification (BBmap)
    3. Get raw counts for each gene present in the assembly (featureCounts) -> TSV table with collected featureCounts output
  10. Functional annotation:
    1. EggNOG-mapper -> TSV output reformatted into an "eggnog table"
    2. KofamScan
    3. hmmsearch (HMMER) -> ORFs ranked per HMM profile with Hmmrank
  11. Taxonomic annotation:
    1. EUKulele -> TSV output reformatted with Reformat_tax.R
    2. CAT
  12. Summary statistics table (Collect_stats.R)
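The assembler and ORF caller choices in steps 7 and 8 map to command-line flags. A minimal sketch, assuming the `--assembler` and `--orf_caller` values that appear elsewhere on this page (paths and values illustrative):

    # Assemble with Megahit and call ORFs with Prodigal; swap the values
    # to follow the eukaryote/prokaryote suggestions above.
    nextflow run nf-core/metatdenovo \
       -profile docker \
       --input samplesheet.csv \
       --outdir results \
       --assembler megahit \
       --orf_caller prodigal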

Usage

Note

If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data; see the example below.
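For example, a quick test run (assuming Docker; any of the container profiles works):

    nextflow run nf-core/metatdenovo -profile test,docker --outdir test_results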

First, prepare a samplesheet with your input data that looks as follows:

samplesheet.csv:

| sample   | fastq_1                   | fastq_2                   |
| -------- | ------------------------- | ------------------------- |
| sample1  | ./data/S1_R1_001.fastq.gz | ./data/S1_R2_001.fastq.gz |
| sample2  | ./data/S2_fw.fastq.gz     | ./data/S2_rv.fastq.gz     |
| sample3  | ./S4x.fastq.gz            | ./S4y.fastq.gz            |
| sample4  | ./a.fastq.gz              | ./b.fastq.gz              |

Each row represents a fastq file (single-end) or a pair of fastq files (paired-end).
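As a plain CSV file, the same sheet looks like this (header names and paths exactly as in the table above):

    sample,fastq_1,fastq_2
    sample1,./data/S1_R1_001.fastq.gz,./data/S1_R2_001.fastq.gz
    sample2,./data/S2_fw.fastq.gz,./data/S2_rv.fastq.gz
    sample3,./S4x.fastq.gz,./S4y.fastq.gz
    sample4,./a.fastq.gz,./b.fastq.gz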

Now, you can run the pipeline using:

nextflow run nf-core/metatdenovo \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --outdir <OUTDIR>

Warning

Please provide pipeline parameters via the CLI or Nextflow -params-file option. Custom config files including those provided by the -c Nextflow option can be used to provide any configuration except for parameters; see docs.
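A minimal sketch of such a params file in YAML, using only parameters mentioned on this page (values illustrative):

    input: "samplesheet.csv"
    outdir: "results"
    orf_caller: "prokka"

It would then be passed as nextflow run nf-core/metatdenovo -profile docker -params-file params.yml.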

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

Pipeline output

To see the results of an example test run with a full size dataset refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Note

Tables in the summary_tables directory under the output directory are made especially for further analysis in tools like R or Python; see the example below.
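For a quick look from the shell (the path is taken from an issue below and assumes the default output layout):

    # Pretty-print the first rows of the collected statistics table
    column -t -s $'\t' results/summary_tables/collect/overall_stats.tsv | head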

Credits

nf-core/metatdenovo was originally written by Danilo Di Leo (@danilodileo), Emelie Nilsson (@emnilsson) & Daniel Lundin (@erikrikarddaniel).

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #metatdenovo channel (you can join with this invite).

Citations

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

metatdenovo's People

Contributors

@danilodileo, @emnilsson, @erikrikarddaniel, @nf-core-bot, @tfalkarkea


metatdenovo's Issues

update README.md

Description of feature

We need to update the Credits section and go through all the text.

"nf-core/metatdenovo was originally written by Danilo Di Leo, Emelie Nilsson & Daniel Lundin.

We thank the following people for their extensive assistance in the development of this pipeline: Emelie Nilsson (emnilsson) and Danilo Di Leo (Danilo2771)"

" nf-core/metatdenovo was originally written by Danilo Di Leo (username), Emelie Nilsson (username) & Daniel Lundin (username)."

Update documentation

Describe the general outline, report the programs used and update the authors: README.md, CITATIONS.md, docs/output.md, docs/readme.md etc.

update parameters section

Description of feature

We need to go through the parameters section and check that all options appear in the order in which the pipeline runs them. If needed, we should add extra help text.

update nf-core modules

Description of feature

We need to update the nf-core modules before the first release.

Handle allowed values for ORF caller better

Description of the bug

There's a way to specify allowed values for a parameter in nextflow_schema.json (see e.g. dada_ref_taxonomy in the Ampliseq file). We should do the same for the ORF caller parameter.
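A hedged sketch of what that might look like (the enum values assume the three ORF callers listed in the pipeline summary above):

    "orf_caller": {
        "type": "string",
        "enum": ["prodigal", "prokka", "transdecoder"],
        "description": "Which ORF caller to use."
    }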

Eukulele called in the wrong way with user assembly

I get the following error when running a project with a user assembly (--assembly):

Error executing process > 'NFCORE_METATDENOVO:METATDENOVO:UNPIGZ_EUKULELE (null)'

Caused by:
  Not a valid path value type: java.util.LinkedHashMap ([id:user_assembly])


Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`

I believe the reason is that I haven't updated Eukulele after fixing channels so they always contain the meta object.

Start pipeline from a finished assembly


Add option to keep removed contaminants

When removing contaminants (e.g. stable RNA) as in #4, we should give the user the option of keeping the reads that are filtered out. This is possible with e.g. BBduk by adding outm (outm1 and outm2 for paired-end reads), where matching reads will be stored; see "kmer filtering". A sketch follows below.
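A minimal sketch of that (file names illustrative; outm1/outm2 are real BBduk arguments):

    # Keep the reads that match the contaminant reference in separate files
    bbduk.sh in1=sample_R1.fastq.gz in2=sample_R2.fastq.gz \
        out1=clean_R1.fastq.gz out2=clean_R2.fastq.gz \
        outm1=matched_R1.fastq.gz outm2=matched_R2.fastq.gz \
        ref=contaminants.fasta k=27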

update metatdenovo diagram

Description of feature

We added new modules (KofamScan, BBduk and BBnorm), so we need to update the diagram too.

Create a full-scale test

Add a full-scale AWS test.

  • Use data from ENA or SRA (one of the example datasets from the manuscript)
  • Upload taxonomy and function databases to S3 (how does one do that?)

Fix file arguments in `nextflow_schema.json`

Our file parameters are declared as string, but I think they should also have a "format": "file-path". They also lack mime types and icons. There are icons for a few typical biological file types, I think; the rest can have far fa-file-code. In my phyloplace pipeline I've used declarations such as this:

                "hmmfile": {
                    "type": "string",
                    "format": "file-path",
                    "mimetype": "text/plain",
                    "description": "HMM file. If provided, will be used to align both the reference and query sequences.",
                    "fa_icon": "far fa-file-code"
                }

I'm not sure what the guidelines say...

`COLLECT_STATS` does not run when `--skip_trimming` is set

Description of the bug

When running the summary_table PR (#46) and adding --skip_trimming, no overall_stats.tsv file is created, since the COLLECT_STATS process isn't run.

Steps to reproduce

Steps to reproduce the behaviour:

  1. Command line: nextflow run main.nf -profile test -resume --skip_trimming
  2. See error: no overall_stats.tsv in results/summary_tables/collect/

Expected behaviour

That COLLECT_STATS would run anyway, but that overall_stats.tsv would lack some numbers or contain NA.

Log files

Have you provided the following extra information/files:

  • The command used to run the pipeline
  • The .nextflow.log file

System

  • Hardware: mac?
  • Executor: local
  • OS: macOS
  • Version: 10.14.6

Nextflow Installation

  • Version: 21.10.6

Container engine

  • Engine: Docker
  • version: 4.4.2

CAT/BAT

Discuss with the mag people how to do this: nf-core module now, later or never (copy code).

Perform khmer digital normalization

Reducing sequencing depth with e.g. khmer is sometimes the only way of performing an assembly from all samples at once.
Contribute a khmer module to nf-core and incorporate here as an option.
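A hedged sketch of the kind of invocation such a module would wrap (flags from khmer's normalize-by-median.py; file names illustrative):

    # Digital normalization to median k-mer coverage 20, paired/interleaved input
    normalize-by-median.py -p -k 20 -C 20 -M 8e9 \
        -o sample_normed.fastq.gz sample_interleaved.fastq.gz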

Cat-ting after `prokka` only happens for one file when `--assembly`?


Description of the bug

I'm rerunning the pipeline to test with GTDB in EUKulele, but I think something is broken when it comes to prokka. I'm providing my own assembly with --assembly (because it forced me to rerun instead of -resume), and the prokka part of that subworkflow runs for 48 files, but all the subsets are named user_assembly. The following *_CAT processes all just cat one user_genome, resulting in a lot of lost data.

Steps to reproduce

Steps to reproduce the behaviour:

  1. Command line: nextflow run lnuc-eemis/metatdenovo -r dev -profile uppmax -params-file nextflow.yml -resume -c use_custom.config
     • nextflow.yml contains: --input samples.csv --sequence_filter PATH --skip_trimming false --orf_caller 'prokka' --eggnog true --eggnog_dbpath PATH --skip_eukulele false --eukulele_dbpath PATH
  2. See error: No error provided, but the output of the prokka subworkflow does not contain all the genes produced by prokka.

Expected behaviour

That the output of the prokka-subworkflow should contain all the genes produced by prokka.

Log files

Have you provided the following extra information/files:

  • The command used to run the pipeline
  • The .nextflow.log file

System

  • Hardware: HPC
  • Executor: slurm
  • OS: Linux
  • Version dev/v2.5.1-g55fa65c

Nextflow Installation

  • Version: 22.10.1

Container engine

  • Engine: singularity/apptainer
  • version: ...

Prodigal output not coherent with ORF IDs

The ORF IDs in Prodigal's output are inconsistent: the GFF and the .fna/.faa outputs use different names.

As reported in the Prodigal repository in issue hyattpd/Prodigal#99, the ORFs get an extra "_1" or similar appended. This is problematic, as none of the GFF, .fna and featureCounts tables share the same names, so it will be impossible to merge the final tables. I guess it is an issue inside Prodigal, so what we could probably do is write external code (a new module for the subworkflow) that fixes the tables after featureCounts.
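To see the mismatch side by side (paths illustrative; Prodigal's FASTA headers carry the contig name plus a gene index, while its GFF uses ID=<seq>_<gene> attributes):

    # Compare ORF names in the protein FASTA with IDs in the GFF
    grep '^>' orfs.faa | head
    grep -v '^#' orfs.gff | cut -f9 | head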

Some files are not published in the correct results directories, some files are published even though not specified


Description of the bug

The contigs file from megahit is not published in results/megahit, nor is the log from the same process (but that is already reported in #27).
In results/bbmap a ref directory is copied, and I cannot find where this is specified.
In results/prokka a directory per sample is created with all the prokka output, which is not what I want, but I cannot figure out why.
There might be more oddities, but these are the ones I've spotted.

Steps to reproduce

Steps to reproduce the behaviour:

  1. Command line: nextflow run . -profile docker,test --orf_caller prokka
  2. See output in results

Expected behaviour

I thought that the files you specify in conf/modules.config with e.g. pattern are what get copied into the results directory.

Log files

Have you provided the following extra information/files:

  • The command used to run the pipeline
  • The .nextflow.log file

System

  • Hardware: mac?
  • Executor: local
  • OS: macOS
  • Version 10.14.6

Nextflow Installation

  • Version: 21.10.6

Container engine

  • Engine: Docker
  • version: v20.10.11

Additional context

Tidy up params in `nextflow.config` and `nextflow_schema.json`

I'm missing a lot of params (I think) in these two config files. I will try to remember to write them down here as I find them in the code.

We still have a couple of params generated from the template that I think are not used: igenomes_base, igenomes_ignore, genomes and fasta (fasta only in nextflow_schema.json).

Remember to also update conf/test.config -- genome is used there.

`UNPIGZ_MEGAHIT_CONTIGS` is run even when `prokka` is used instead of `prodigal`


Description of the bug

The process UNPIGZ_MEGAHIT_CONTIGS is used to decompress the contigs coming from megahit before prodigal is run (I think this is the only reason, at least), but it is also run when prokka is specified instead.

Steps to reproduce

Steps to reproduce the behaviour:

  1. Command line: nextflow run . -profile docker,test --orf_caller prokka
  2. See error: -

Expected behaviour

That the contig files aren't unzipped when it's not needed.

Log files

Have you provided the following extra information/files:

  • The command used to run the pipeline
  • The .nextflow.log file

System

  • Hardware: mac?
  • Executor: local
  • OS: macOS
  • Version 10.14.6

Nextflow Installation

  • Version: 21.10.6

Container engine

  • Engine: Docker
  • version: v20.10.11

Additional context

RNASpades called multiple times

As it is now, RNASpades will be called once for every pair of read files. There are a number of ways we could deal with this, but I think the best would be to use the interleaved files that Megahit gets and write a yaml file looking like this:

[
  {
    orientation: "fr",
    type: "paired-end",
    interlaced reads: [
      "/home/dl/dev/metatdenovo/work/91/febcd19737053caccd6012ee9be7c3/SAMPLE1_PE_T1.fastq.gz",
      "/home/dl/dev/metatdenovo/work/bc/49c788e297c9648b76ac98268b0ef6/SAMPLE2_PE_T1.fastq.gz"
    ]
  }
]

(Not tested.) See https://cab.spbu.ru/files/release3.12.0/manual.html#yaml.

Perhaps used in nf-core/mag?
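If we go that route, the file would presumably be passed via SPAdes' dataset option (a hedged sketch; --dataset is a real SPAdes option, the file name is illustrative):

    rnaspades.py --dataset datasets.yaml -o rnaspades_out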

Prokka fails for rnaspades


Description of the bug

When running nextflow run . -profile docker,test --assembler rnaspades --orf_caller prokka, prokka fails since the contig names are too long.

Steps to reproduce

Steps to reproduce the behaviour:

  1. Command line: nextflow run . -profile docker,test --assembler rnaspades --orf_caller prokka
  2. See error:
    [07:17:58] Contig ID must <= 37 chars long: NODE_1_length_638965_cov_4.730899_g0_i0
    [07:17:58] Please rename your contigs OR try '--centre X --compliant' to generate clean contig names.

Expected behaviour

Log files

Have you provided the following extra information/files:

  • The command used to run the pipeline
  • The .nextflow.log file

System

  • Hardware:
  • Executor:
  • OS:
  • Version

Nextflow Installation

  • Version:

Container engine

  • Engine: docker
  • version:

Additional context
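A possible workaround, sketched as a conf/modules.config override (the process selector name is an assumption; '--centre X --compliant' comes from the Prokka message above):

    process {
        withName: 'PROKKA' {
            // Hypothetical: let Prokka generate compliant contig names itself
            ext.args = '--centre X --compliant'
        }
    }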

Cosmetics before release

[ ] The SUB_EUKULELE:EUKULELE_DB module outputs ($meta.id) (single quotes?)
[ ] Prokka output files end with an extra .gz.faa, .gz.gff etc.
[ ] When unzipping protein files the process name is UNPIGZ_CONTIGS
[ ] Process name for eukulele download, change to EUKULELE_DOWNLOAD_DB

Trim and QC

In magmap I just stole a subworkflow from rnaseq.

Overall success table

Output a table with summaries from trimming, mapping to contigs and ORFs, plus ORFs with taxonomy and different functional annotations.

Versions are not reported for `khmer` or `pigz` from the `UNPIGZ_MEGAHIT_CONTIGS`-process


Description of the bug

Versions are not reported in results/pipeline_info/software_versions.yml for diginorm/khmer, nor for pigz as used in the UNPIGZ_MEGAHIT_CONTIGS process.

Steps to reproduce

Steps to reproduce the behaviour:

  1. Command line: nextflow run . -profile docker,test_diginorm --orf_caller prokka
  2. See: results/pipeline_info/software_versions.yml

Expected behaviour

That all versions are reported, at least for the processes that are run by the tests.

Log files

Have you provided the following extra information/files:

  • The command used to run the pipeline
  • The .nextflow.log file

System

  • Hardware: mac?
  • Executor: local
  • OS: macOS
  • Version 10.14.6

Nextflow Installation

  • Version: 21.10.6

Container engine

  • Engine: Docker
  • version: v20.10.11

Additional context
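For reference, nf-core modules normally emit versions from the module's script block with a heredoc; a minimal sketch for pigz (the sed expression is illustrative, and the backslash escapes Groovy interpolation inside the script block):

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        pigz: \$( pigz --version 2>&1 | sed 's/pigz //' )
    END_VERSIONS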

nf-core GitHub actions do not run

Description of the bug

The normal nf-core GitHub actions do not run for this pipeline. (They do for my magmap pipeline, also created recently from the templates.)

I think the setup is under .github. Compare our repo with a proper nf-core pipeline.

update usage.md

Description of feature

We need to update usage.md file.

  • remove run_dbcan
  • add bbnorm
  • add diginorm
  • add kofamscan

Call ORFs with Prodigal

As an alternative to full annotation with Prokka, call ORFs with Prodigal. Use the nf-core module.

Megahit assembly

There's an nf-core module for Megahit assembly. However, it's written so that it's easiest to call once per sample.
Moreover, we might have to deal with interleaved data, since that's what khmer requires (IIRC); see the sketch below.
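Megahit can take interleaved files directly (a hedged sketch; --12 is a real Megahit option, paths illustrative):

    # Interleaved paired-end input as a comma-separated list
    megahit --12 sample1_interleaved.fastq.gz,sample2_interleaved.fastq.gz \
        -o megahit_out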

Diginorm fails on sample 3 in the test

We think it might be because its fastq files (after cat-ing) have content after /1 and /2.
It also fails when we only have a single sample 3, i.e. when cat is not run.
