nf-core / airrflow

B-cell and T-cell Adaptive Immune Receptor Repertoire (AIRR) sequencing analysis pipeline using the Immcantation framework

Home Page: https://nf-co.re/airrflow

License: MIT License

Languages: HTML 0.88% · R 5.24% · Python 12.80% · Nextflow 76.10% · Shell 4.62% · CSS 0.37%
Topics: b-cell, immcantation, immunorepertoire, repseq, nf-core, nextflow, workflow, pipeline, airr


airrflow's Issues

Add log parsing as part of the pipeline

Currently there is process logging as part of the pipeline already, but the logs still need to be parsed to extract the number and percentage of sequences that passed each process and those that were filtered out.

Add the Python script that parses these logs to the pipeline.
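A minimal sketch of how this could look as a Nextflow process wrapping pRESTO's ParseLog.py; the field names (ID, SEQUENCES, PASS, FAIL) and the output file pattern are assumptions based on the standard FilterSeq log format and may need adjusting per process:

    process PARSE_LOGS {
        input:
        path logfile

        output:
        path '*_table.tab'  // assumed ParseLog.py output naming

        script:
        """
        ParseLog.py -l ${logfile} -f ID SEQUENCES PASS FAIL
        """
    }

From the resulting table, the pass percentage per step is simply PASS / SEQUENCES.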

Linting tests failing

Hey, there are still a couple of linting tests failing in Travis; the rest passed!

INFO: ===========
LINTING RESULTS

82 tests passed
 2 tests had warnings
 2 tests failed
WARNING: Test Warnings:
http://nf-co.re/errors#4: Config variable not found: params.reads
http://nf-co.re/errors#4: Config variable not found: params.singleEnd
ERROR: Test Failures:
http://nf-co.re/errors#8: Conda environment name is incorrect (nf-core-bcellmagic-1.0, should be nf-core-bcellmagic-1.0dev)
http://nf-co.re/errors#8: Conda dependency did not have pinned version number: bioconda::biopython
ERROR: Sorry, some tests failed - exiting with a non-zero error code...

Trouble running bcellmagic

Hi.

I'm a new user and I'm trying to follow the Takara Bio TCR workflow.
However I'm having trouble with memory requirements.
I'm getting the following error:

Error executing process > 'NFCORE_BCELLMAGIC:BCELLMAGIC:FETCH_DATABASES (IMGT IGBLAST)'

Caused by:
Process requirement exceed available memory -- req: 12 GB; avail: 8 GB

Command executed:

fetch_databases.sh
echo $(date "+%F") > IMGT.version.txt

Command exit status:

Command output:
(empty)

Work dir:
/Volumes/Hard drive/bcellmagic/work/db/c293fe251fb5decf6087e4940e22a7

Tip: view the complete command output by changing to the process work dir and entering the command cat .command.out

Would you have any ideas please?

Thanks so much.
Aislinn.
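One possible workaround, sketched below: nf-core pipelines conventionally accept a --max_memory option to cap resource requests, and a single process can be overridden with a custom config. This is untested and assumes the process selector from the error message above:

    // custom.config -- sketch only: fit the failing process into 8 GB
    process {
        withName: 'NFCORE_BCELLMAGIC:BCELLMAGIC:FETCH_DATABASES' {
            memory = '7.GB'
        }
    }

launched with something like: nextflow run nf-core/airrflow -profile docker -c custom.config --max_memory '8.GB'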

Dowser lineages `igphyml` when using `r-enchantr=0.0.3`

Description of the bug

There is an error in the Dowser lineages process when running the pipeline with the test profile (uses r-enchantr 0.0.3)

Command used and terminal output

nextflow run nf-core/airrflow -r airrflow -profile test,docker --outdir "results" -resume


label: unnamed-chunk-2
Quitting from lines 124-159 (_main.Rmd)
Error in buildIgphyml(data, igphyml = exec, temp_path = file.path(dir,  :
  The file /usr/local/share/igphyml/src/igphyml cannot be executed.
Calls: <Anonymous> ... eval_with_user_handlers -> eval -> eval -> getTrees -> buildIgphyml
In addition: Warning messages:
1: replacing previous import ‘data.table::last’ by ‘dplyr::last’ when loading ‘enchantr’
2: replacing previous import ‘data.table::first’ by ‘dplyr::first’ when loading ‘enchantr’
3: replacing previous import ‘data.table::between’ by ‘dplyr::between’ when loading ‘enchantr’
4: In buildIgphyml(data, igphyml = exec, temp_path = file.path(dir,  :
  Dowser igphyml doesn't mask split codons!


Relevant files

No response

System information

nextflow version 22.10.2.5832

GUNZIP fails on Google Cloud

Hi There!

I wanted to test out some cloud instances using nf-core airrflow. I launched the test pipeline on my laptop and that worked just fine.

But when I ported this to the cloud, the test pipeline failed at the GUNZIP step. I noticed that the AWS results page is also not showing any results; maybe a similar issue occurs on AWS?

Description of the bug

The airrflow test pipeline fails at the GUNZIP step on Google Cloud. It fails because it cannot find the FASTQ files that it needs to unzip.

Steps to reproduce

Steps to reproduce the behaviour:

  1. Command line:
    nextflow run nf-core/airrflow -r 2.0.0 -profile test,google --google_bucket gs://my-bucket
  2. See error:
Error executing process > 'NFCORE_BCELLMAGIC:BCELLMAGIC:GUNZIP (Sample3)'

Caused by:
  Process `NFCORE_BCELLMAGIC:BCELLMAGIC:GUNZIP (Sample3)` terminated with an error exit status (9)

Command executed:

  gunzip -f "Sample3_UMI_R1.fastq.gz"
  gunzip -f "Sample3_R2.fastq.gz"
  echo $(gunzip --version 2>&1) | sed 's/^.*(gzip) //; s/ Copyright.*$//' > gunzip.version.txt

Command exit status:
  9

Command output:
  (empty)

Command error:
  Execution failed: generic::failed_precondition: while running "nf-986be5683bbd72136fc79e9bc86b254e-main": unexpected exit status 1 was not ignored

Expected behaviour

Successful completion of the gunzip step.

Log files

Here is the .command.log file of the relevant process:

+ cd /light-test/e3/d02332d9d12ca6f93ddf5b85b853ad
+ gsutil -m -q cp gs://my-bucket//light-test/e3/d02332d9d12ca6f93ddf5b85b853ad/command.run .
+ bash .command.run nxf_stage
+ [[ '' -gt 0 ]]
+ true
+ cd /light-test/e3/d02332d9d12ca6f93ddf5b85b853ad
+ bash .command.run nxf_unstage
CommandException: No URLs matched: .command.out
CommandException: 1 file/object could not be transferred.
CommandException: No URLs matched: .command.err
CommandException: 1 file/object could not be transferred.
CommandException: No URLs matched: .command.trace
CommandException: 1 file/object could not be transferred.
CommandException: No URLs matched: .exitcode
CommandException: 1 file/object could not be transferred.
ls: cannot access '*_R1.fastq': No such file or directory
ls: cannot access '*_R2.fastq': No such file or directory
ls: cannot access '*.version.txt': No such file or directory

Nextflow Installation

21.10.6

Fastq samplesheet auto single_cell FALSE

Description of the bug

The fastq input samplesheet only supports single_cell FALSE, so autogenerate this value instead of requiring the column in the samplesheet.
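A minimal sketch of how the value could be defaulted when parsing the samplesheet (channel and column names assumed):

    // Sketch: default single_cell to FALSE when the column is absent
    ch_samplesheet = Channel
        .fromPath(params.input)
        .splitCsv(header: true, sep: '\t')
        .map { row ->
            row.single_cell = row.single_cell ?: 'FALSE'  // autogenerated value
            row
        }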

Command used and terminal output

No response

Relevant files

No response

System information

No response

vsearch in dependencies, but usearch called

At startup I got an error that vsearch was missing, and installed that tool. It turns out, however, that ClusterSets.py still calls usearch.

I would suggest either converting the calls to vsearch (possibly the best option?) or performing a usearch check at startup.

output organization

Currently there are a lot of subfolders in the results folder. I want to reorganize it to contain only the following subfolders:

  • preprocessing
  • clonal_analysis
  • repertoire_analysis
  • multiQC
  • pipeline_info

So main.nf and output docs need to be changed accordingly.
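A hedged sketch of what this looks like per process, using publishDir to route outputs into one of the agreed folders (the process name, subpath, and tool call are illustrative only):

    process PRESTO_FILTERSEQ {
        // illustrative only: publish under the agreed 'preprocessing' folder
        publishDir "${params.outdir}/preprocessing/filter_seq", mode: 'copy'

        input:
        path reads

        output:
        path '*_quality-pass.fastq'  // assumed pRESTO output naming

        script:
        """
        FilterSeq.py quality -s ${reads} -q 20
        """
    }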

Use `enchantr` validate input also for `Fastq` samplesheet

Description of feature

As of now, the input validation of the fastq samplesheet is done with a custom Python script. It would be nice if this input were also validated with the enchantr validate_input script.

Fastq tsv file

Reveal tsv file

The main difference is the filename column, which is not present in the fastq tsv because up to 3 files might be provided in the columns filename_R1, filename_R2 and filename_I1. Would it be possible to accept more than one filename_xx column, and also allow the .fq.gz and .fastq.gz extensions there?
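A rough Groovy sketch of that column handling, independent of enchantr (the column pattern and extensions come from the paragraph above):

    // Sketch: accept any filename_R1/R2/I1 column with .fastq(.gz) or .fq(.gz)
    def validateFastqRow(Map row) {
        def cols = row.keySet().findAll { it ==~ /filename_(R1|R2|I1)/ }
        assert cols : "No filename_R1/R2/I1 column found in samplesheet row"
        cols.each { col ->
            assert row[col] ==~ /.+\.(fastq|fq)(\.gz)?$/ :
                "Unsupported extension for ${col}: ${row[col]}"
        }
    }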

clonal analysis

Clonal analysis should consistently produce both PNG and SVG plots.

clones graphml not stored

The new release introduced an issue where clone graphml files are not stored properly. The problem is in the call to dnapars from the R script clonal_analysis.R.

Database download

Currently the database download is a process inside the Nextflow pipeline, but it is also included in the Dockerfile as part of the container build.

We need to decide which option is best and go with just one of them.

Improving documentation

  • Include all parameters in documentation
  • Input file description
  • Better usage description
  • Short description about RNAseq experimental design

MakeDb igblast missing light chain reference germlines

The script section of the CHANGEO_MAKEDB process can be simplified by providing a folder path to -r instead of all individual fasta files:
-r ${imgt_base}/${params.species}/vdj/. This would also fix a bug where the Ig light chain reference germlines (IGKV, IGKJ, IGLV, IGLJ) are not passed to -r.
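A sketch of the simplified script section, assuming the standard Change-O MakeDb.py interface (input and sample names are illustrative):

    script:
    """
    MakeDb.py igblast \\
        -i ${sample_id}_igblast.fmt7 \\
        -s ${sample_id}.fasta \\
        -r ${imgt_base}/${params.species}/vdj/ \\
        --extended
    """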

Adding tests

  • Need to add minimal tests for travis in test.config.
  • Use subset of fastq file for one sample.
  • Need to replace the real primer sequences with fake ones.
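A minimal sketch of what conf/test.config could contain, following nf-core conventions (all values and paths below are placeholders):

    // conf/test.config -- placeholder values only
    params {
        config_profile_name        = 'Test profile'
        config_profile_description = 'Minimal dataset to check pipeline function'
        max_cpus   = 2
        max_memory = '6.GB'
        max_time   = '6.h'
        // subset fastq files with fake primer sequences (placeholder path)
        input      = 'path/to/test_samplesheet.tsv'
    }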

Recommended parameters for Takara Bio SMARTer Human TCR v2 do not work; --index_file FALSE also needs to be set

Description of the bug

The instructions for running airrflow 2.3.0 according to the "Takara Bio SMARTer Human TCR v2" section at https://nf-co.re/airrflow/2.3.0/usage#dt-oligo-rt-and-5race-pcr are as follows:

nextflow run nf-core/airrflow -profile docker \
--input samplesheet.tsv \
--library_generation_method dt_5p_race_umi \
--cprimers CPrimers.fasta \
--race_linker linker.fasta \
--umi_length 12 \
--umi_position R2 \
--cprimer_start 5 \
--cprimer_position R1 \
--outdir ./results

However, when using these settings on my data, the pipeline rapidly exits with the following error:

Cannot invoke method and() on null object

 -- Check script '/home/ubuntu/.nextflow/assets/nf-core/airrflow/./workflows/bcellmagic.nf' at line: 100 or see '.nextflow.log' file for more details

Examining line 100 in bcellmagic.nf, as suggested, gives the following:

if (params.index_file & params.umi_position == 'R2') {exit 1, "Please do not set `--umi_position` option if index file with UMIs is provided."}

Setting the parameter --index_file FALSE solves this issue and the pipeline launches successfully. This suggests the pipeline expects params.index_file to default to FALSE, but the parameter is in fact unset.
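One possible fix, sketched below: declare a default so the parameter is never null, and use the logical && operator; the bare & in the current check is what triggers "Cannot invoke method and() on null object" when params.index_file is unset:

    // Sketch of a fix in bcellmagic.nf
    params.index_file = false  // default; overridden by --index_file on the CLI

    if (params.index_file && params.umi_position == 'R2') {
        exit 1, "Please do not set `--umi_position` option if index file with UMIs is provided."
    }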

Command used and terminal output

No response

Relevant files

No response

System information

No response

Automatic handling of NA threshold

Currently, when a threshold cannot be determined in the shazam step (e.g. for naive samples), the pipeline breaks. The workaround is to resubmit with a manually defined threshold; this should be handled automatically.
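A hedged sketch of the automatic handling; the parameter name params.clonal_threshold mirrors current airrflow usage, and how the shazam value is obtained here is assumed:

    // Sketch: fall back to a manual threshold when shazam reports NA
    def threshold = shazam_threshold  // value parsed from the shazam output (assumed)
    if (threshold == null || threshold == 'NA') {
        log.warn "Automatic threshold could not be determined; falling back to params.clonal_threshold"
        threshold = params.clonal_threshold
    }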

Add flexibility cprimers / vprimers in R1 or R2

As reported by @fabio-t, the pipeline currently expects C-region primers to be in the R1 fastq and V-region primers to be in the R2 fastq.

Better documentation of this, or the possibility of specifying R1/R2 for both primers, is needed.
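A sketch of how the read positions could be exposed as parameters; --cprimer_position already appears in the Takara instructions elsewhere in this tracker, while --vprimer_position is hypothetical:

    // Sketch: defaults matching the current fixed behaviour
    params.cprimer_position = 'R1'
    params.vprimer_position = 'R2'  // hypothetical parameter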

Error when executing pipeline on AWS with fusion mounts

Hi, this is kojix2.

I'm a newbie and have very little understanding of what kind of workflow bcellmagic is.
First, to understand what bcellmagic can do, I ran it with -profile test,docker in Nextflow Tower and got the following error.

What are the possible causes of this?

 Workflow execution completed unsuccessfully

The exit status of the task that caused the workflow execution to fail was: 1

Error executing process > 'NFCORE_BCELLMAGIC:BCELLMAGIC:ALAKAZAM_SHAZAM_REPERTOIRES (report)'

Caused by:
  Essential container in task exited

Command executed:

  execute_report.R repertoire_comparison.Rmd
  Rscript -e "library(alakazam); write(x=as.character(packageVersion('alakazam')), file='alakazam.version.txt')"
  Rscript -e "library(shazam); write(x=as.character(packageVersion('shazam')), file='shazam.version.txt')"
  echo $(R --version 2>&1) | awk -F' '  '{print $3}' > R.version.txt

Command exit status:
  1

Command output:
    |............................................................          |  86%
    ordinary text without R code

  [knitr progress output for unnamed-chunk-10 through unnamed-chunk-12 omitted; each chunk reports "List of 4" with echo: FALSE, fig.width: 10, fig.asp: 0.8, fig.align: "center"]

    |......................................................................| 100%
    ordinary text without R code

  /usr/local/bin/pandoc +RTS -K512m -RTS repertoire_comparison.knit.md --to html4 --from markdown+autolink_bare_uris+tex_math_single_backslash --output /tmp/nxf.XXXXj9Glzr/Bcellmagic_report.html --lua-filter /usr/local/lib/R/library/rmarkdown/rmarkdown/lua/pagebreak.lua --lua-filter /usr/local/lib/R/library/rmarkdown/rmarkdown/lua/latex-div.lua --self-contained --variable bs3=TRUE --standalone --section-divs --table-of-contents --toc-depth 3 --variable toc_float=1 --variable toc_selectors=h1,h2,h3 --variable toc_collapsed=1 --variable toc_smooth_scroll=1 --variable toc_print=1 --template /usr/local/lib/R/library/rmarkdown/rmd/h/default.html --highlight-style pygments --variable theme=bootstrap --css nf-core_style.css --include-in-header /tmp/Rtmp9ZwV8C/rmarkdown-str3a595dbe04.html --mathjax --variable 'mathjax-url:https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML' --citeproc 

Command error:
  
  
  processing file: repertoire_comparison.Rmd
  output file: repertoire_comparison.knit.md
  
  File ./references.bibtex not found in resource path
  Error: pandoc document conversion failed with error 99
  Execution halted

Work dir:
  /fusion/s3/ABCDEFG/bcellmagic/wd/25/96cdb23bc7f02ad73b2a5a2e087162

Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line
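The pandoc error boils down to ./references.bibtex not being found in the task's resource path. A hedged sketch of the kind of fix this suggests: stage the report assets as explicit process inputs so they sit in the work directory next to the Rmd (file names taken from the log above; the process shape is assumed):

    process ALAKAZAM_SHAZAM_REPERTOIRES {
        input:
        path 'repertoire_comparison.Rmd'
        path 'references.bibtex'   // previously resolved relatively, hence the failure
        path 'nf-core_style.css'

        output:
        path 'Bcellmagic_report.html'

        script:
        """
        execute_report.R repertoire_comparison.Rmd
        """
    }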

mv command in merge r1 umi

Original bug report from Susanna Marquez:

I have just started trying bcellmagic. I get the error below; any suggestions? .command.sh simply has the same error message about moving the files.

[14/b08493] process > get_software_versions      [100%] 1 of 1 ✔ 
[-        ] process > multiqc                    - 
[a9/753701] process > output_documentation (1)   [100%] 1 of 1 ✔ 
Execution cancelled -- Finishing pending tasks before exit 
[nf-core/bcellmagic] Pipeline completed with errors
Error executing process > 'merge_r1_umi (HD07M)'

Caused by:
 Process `merge_r1_umi (HD07M)` terminated with an error exit status (1)

Command executed:

 gunzip -f "HD07M_R1.fastq.gz"
 mv "HD07M_R1.fastq" "HD07M_R1.fastq"
 gunzip -f "HD07M_R2.fastq.gz"
 mv "HD07M_R2.fastq" "HD07M_R2.fastq"

Command exit status:
 1

Command output:
 (empty)

Command error:
 mv: 'HD07M_R1.fastq' and 'HD07M_R1.fastq' are the same file

Work dir:
 /home/susanna/Documents/projects/collaborations/bcell-magic/work/e3/e7655ee3723008a802d5b887ea585a
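The root cause is that gunzip already produces the target file name, so the follow-up mv tries to move a file onto itself. A sketch of a guard inside the process script (variable names illustrative):

    script:
    """
    gunzip -f "${r1}"
    # only rename when source and target differ, avoiding
    # "mv: '...' and '...' are the same file"
    if [ "${r1.baseName}" != "${meta.id}_R1.fastq" ]; then
        mv "${r1.baseName}" "${meta.id}_R1.fastq"
    fi
    """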

Process labels and versions

  • make sure all processes have resource labels
  • make sure all processes print the versions properly
  • define clones and create germlines results within changeo, not shazam
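For the first two items, a sketch of the nf-core conventions being referred to; the process, tool call, and version-extraction sed are illustrative:

    process CHANGEO_DEFINECLONES {
        label 'process_medium'  // standard nf-core resource label

        script:
        """
        DefineClones.py -d ${tab} --act set --model ham --norm len --dist ${threshold}
        DefineClones.py --version | sed 's/.*: //' > changeo.version.txt
        """
    }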

Make docker image nf-core compliant

At the moment, most tools built into the docker image do not come from any conda channel. The aim is to install all tools in the docker image through conda. All tools in the container are listed below; the bracket notation indicates whether the tool is already available in conda.

  • protocol data, utility scripts, pipelines [--> not needed for now]
  • muscle muscle
  • vsearch vsearch
  • cd-hit cd-hit
  • blastx + executables blast
  • igblast igblast
  • phylip phylip
  • tbl2asn tbl2asn
  • airr reference libraries airr r-airr
  • presto presto
  • changeo changeo
  • alakazam alakazam
  • shazam shazam
  • tigger tigger
  • rdi [--> not needed for now]
  • scope [not needed for now]
  • prestor [not needed for now]
  • download and build reference databases [inside the container?! --> no]

Fetch databases separate from build igblast reference

As proposed by @ssnn-airr, it would be nice to split the fetch_databases process into two steps, pulling the references from IMGT and building the igblast references, to also allow users to provide their own fasta databases but still build them for igblast.

Currently, the fetch_databases.sh script is executed in the Fetch_databases process which calls the underlying scripts:

  • fetch_imgt.sh pulls the references directly from IMGT for 4 species and stores them in fasta format.
  • fetch_igblastdb.sh downloads some internal data needed by igblast.
  • imgt2igblast.sh uses makeblastdb to create the necessary Igblast references.

That could be separated into two processes: one for pulling the fasta references, and one for building the references needed by igblast.

So if the reference data is available in IMGT, one could adapt the first script to also pull it and then build the references needed by igblast. Or, if custom reference data is available in fasta format, it should also be possible to provide it directly to imgt2igblast to build the references.
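A sketch of the proposed two-process split; the script names come from the list above, while their flags and the folder layout are assumptions:

    // Sketch only: split fetching from building so either source can feed the build
    process FETCH_IMGT {
        output:
        path 'imgt_fasta'

        script:
        """
        fetch_imgt.sh -o imgt_fasta  # flag assumed
        """
    }

    process BUILD_IGBLAST_REFS {
        input:
        path fasta_dir  // from FETCH_IMGT or a user-supplied fasta folder

        output:
        path 'igblast_base'

        script:
        """
        fetch_igblastdb.sh -o igblast_base           # flags assumed
        imgt2igblast.sh -i ${fasta_dir} -o igblast_base
        """
    }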

BCELLMAGIC: Convert all parameter docs to JSON schema

Hi!

this is not necessarily an issue with the pipeline, but in order to streamline the documentation group's work at next week's hackathon, I'm opening issues in all pipeline repos that might need to switch from parameter docs to auto-generated documentation based on the JSON schema.

This will then supersede any further parameter documentation, thus making things a bit easier :-)

If this doesn't apply (anymore), please close the issue. Otherwise, I'm hoping to have some helping hands on this next week in the documentation team on Slack https://nfcore.slack.com/archives/C01QPMKBYNR

Check whether file compressed or not

It would be nice if the pipeline didn't crash when using plain FASTQ files, rather than gzipped ones :)

Not a huge deal, but it's not specified in the documentation (or I missed it).
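A minimal sketch of how the inputs could be branched on extension so plain FASTQ files simply skip the gunzip step (channel and module names illustrative):

    // Sketch: send gzipped reads to GUNZIP, pass plain FASTQ straight through
    ch_branched = ch_reads.branch { f ->
        gz:    f.name.endsWith('.gz')
        plain: true
    }
    GUNZIP(ch_branched.gz)                        // assumed gunzip module
    ch_fastq = GUNZIP.out.mix(ch_branched.plain)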

Define clones per patient

  • Join samples belonging to the same patient before running define clones.
  • Check Nextflow's join operator to do this; see the sketch below.
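A sketch of the grouping step; groupTuple may actually fit better than join here, since one patient can have several samples (the meta.patient key is assumed):

    // Sketch: collect all per-sample tables of a patient into one tuple
    ch_per_patient = ch_tables
        .map { meta, tab -> [ meta.patient, tab ] }
        .groupTuple()
    // emits [ patient_id, [ sample1.tab, sample2.tab, ... ] ] for DefineClones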

Updating immcantation tool versions

Description of feature

Updating Immcantation tool versions for mulled containers (here the packages with the old versions are listed):

  • conda-forge::r-base=4.1.2 bioconda::r-alakazam=1.2.0 bioconda::changeo=1.2.0 bioconda::phylip=3.697 conda-forge::r-optparse=1.7.1
  • conda-forge::r-base=4.1.2 bioconda::r-alakazam=1.2.0 bioconda::r-shazam=1.1.0 conda-forge::r-kableextra=1.3.4 conda-forge::r-knitr=1.33 conda-forge::r-stringr=1.4.0 conda-forge::r-dplyr=1.0.6 conda-forge::r-optparse=1.7.1
  • bioconda::changeo=1.2.0 bioconda::igblast=1.17.1 conda-forge::wget=1.20.1
  • conda-forge::r-base=4.1.2 bioconda::r-alakazam=1.2.0 bioconda::changeo=1.2.0 bioconda::igphyml=1.1.3
  • bioconda::changeo=1.2.0 bioconda::igblast=1.17.1
  • conda-forge::r-base=4.1.2 bioconda::r-enchantr=0.0.1
  • enchantr
  • mulled

And whatever containers are needed for the reveal processes :D
