
proteinfold's Introduction

nf-core/proteinfold

Get help on Slack · Follow on Twitter · Follow on Mastodon · Watch on YouTube

Introduction

nf-core/proteinfold is a bioinformatics best-practice analysis pipeline for protein 3D structure prediction.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!
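For illustration, a minimal DSL2 process pins its software via the container directive, so each step can be maintained and updated independently (a toy sketch, not an actual pipeline process; the gawk container name is borrowed from a config shown later on this page):

    process COUNT_LINES {
        container 'biocontainers/gawk:5.1.0' // one container per process

        input:
        path fasta

        output:
        path 'n_lines.txt'

        script:
        """
        gawk 'END { print NR }' $fasta > n_lines.txt
        """
    }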

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Pipeline summary

[Pipeline summary diagram]

  1. Choice of protein structure prediction method:

    i. AlphaFold2 - Regular AlphaFold2 (MSA computation and model inference in the same process)

    ii. AlphaFold2 split - AlphaFold2 MSA computation and model inference in separate processes

    iii. ColabFold - MMseqs2 API server followed by ColabFold

    iv. ColabFold - MMseqs2 local search followed by ColabFold

    v. ESMFold - Regular ESM

Usage

Note

If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.
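For example, a quick check of your setup might look like this (assuming Docker; substitute your preferred container engine):

nextflow run nf-core/proteinfold \
   -profile test,docker \
   --outdir <OUTDIR>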

Now, you can run the pipeline using:

nextflow run nf-core/proteinfold \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --outdir <OUTDIR>
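Here, samplesheet.csv is a simple two-column CSV listing one protein sequence per row; using the nf-core test data referenced elsewhere on this page, it might look like:

sequence,fasta
T1024,https://raw.githubusercontent.com/nf-core/test-datasets/proteinfold/testdata/sequences/T1024.fasta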

The pipeline takes care of downloading the databases and parameters required by AlphaFold2, ColabFold or ESMFold. If you have already downloaded the required files, you can skip this step by providing the path to the databases using the corresponding parameter (--alphafold2_db, --colabfold_db or --esmfold_db). Please refer to the usage documentation to check the directory structure you need to provide for each of the databases.

  • The typical command to run AlphaFold2 mode is shown below:

    nextflow run nf-core/proteinfold \
        --input samplesheet.csv \
        --outdir <OUTDIR> \
        --mode alphafold2 \
        --alphafold2_db <null (default) | DB_PATH> \
        --full_dbs <true/false> \
        --alphafold2_model_preset monomer \
        --use_gpu <true/false> \
        -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>
  • Here is the command to run AlphaFold2 with the MSA computation split from model inference:

    nextflow run nf-core/proteinfold \
        --input samplesheet.csv \
        --outdir <OUTDIR> \
        --mode alphafold2 \
        --alphafold2_mode split_msa_prediction \
        --alphafold2_db <null (default) | DB_PATH> \
        --full_dbs <true/false> \
        --alphafold2_model_preset monomer \
        --use_gpu <true/false> \
        -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>
  • Below is the command to run the colabfold_local mode:

    nextflow run nf-core/proteinfold \
        --input samplesheet.csv \
        --outdir <OUTDIR> \
        --mode colabfold \
        --colabfold_server local \
        --colabfold_db <null (default) | PATH> \
        --num_recycles_colabfold 3 \
        --use_amber <true/false> \
        --colabfold_model_preset "AlphaFold2-ptm" \
        --use_gpu <true/false> \
        --db_load_mode 0 \
        -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>
  • The typical command to run colabfold_webserver mode would be:

    nextflow run nf-core/proteinfold \
        --input samplesheet.csv \
        --outdir <OUTDIR> \
        --mode colabfold \
        --colabfold_server webserver \
        --host_url <custom MMseqs2 API server URL> \
        --colabfold_db <null (default) | PATH> \
        --num_recycles_colabfold 3 \
        --use_amber <true/false> \
        --colabfold_model_preset "AlphaFold2-ptm" \
        --use_gpu <true/false> \
        -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>

    Warning

    If you plan to carry out a large number of predictions using the colabfold_webserver mode, please set up and use your own custom MMseqs2 API server. You can find instructions here.

  • The esmfold mode can be run using the command below:

    nextflow run nf-core/proteinfold \
        --input samplesheet.csv \
        --outdir <OUTDIR> \
        --mode esmfold \
        --esmfold_model_preset <monomer/multimer> \
        --esmfold_db <null (default) | PATH> \
        --num_recycles_esmfold 4 \
        --use_gpu <true/false> \
        -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>

Warning

Please provide pipeline parameters via the CLI or Nextflow -params-file option. Custom config files including those provided by the -c Nextflow option can be used to provide any configuration except for parameters; see docs.
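For example, a minimal params.yml (values illustrative) could be:

input: 'samplesheet.csv'
outdir: 'results'
mode: 'alphafold2'

which is then passed to the pipeline with:

nextflow run nf-core/proteinfold -profile docker -params-file params.yml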

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

Pipeline output

To see the results of an example test run with a full size dataset refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

nf-core/proteinfold was originally written by Athanasios Baltzis (@athbaltzis), Jose Espinosa-Carrasco (@JoseEspinosa), Luisa Santus (@luisas) and Leila Mansouri (@l-mansouri) from The Comparative Bioinformatics Group at The Centre for Genomic Regulation, Spain under the umbrella of the BovReg project and Harshil Patel (@drpatelh) from Seqera Labs, Spain.

Many thanks to others who have helped out and contributed along the way too, including (but not limited to): Norman Goodacre and Waleed Osman from Interline Therapeutics (@interlinetx), Martin Steinegger (@martin-steinegger) and Raoul J.P. Bonnal (@rjpbonnal)

We would also like to thank the AWS Open Data Sponsorship Program for generously providing the resources necessary to host the data utilized in the testing, development, and deployment of nf-core/proteinfold.

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #proteinfold channel (you can join with this invite).

Citations

If you use nf-core/proteinfold for your analysis, please cite it using the following DOI: 10.5281/zenodo.7437038

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

proteinfold's People

Contributors

adamrtalbot, athbaltzis, bjlang, drpatelh, ewels, friederikehanssen, jfy133, joseespinosa, l-mansouri, luisas, maxulysse, nf-core-bot, rjpbonnal, robsyme, ziadbkh


proteinfold's Issues

Download failing due to SSL verification

Description of the bug

Download fails due to SSL verification:

Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:45 [ERROR] CUID#7 - Download aborted. URI=http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	  Exception: [AbstractCommand.cc:351] errorCode=1 URI=https://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	    -> [SocketCore.cc:1018] errorCode=1 SSL/TLS handshake failure:  `not signed by known authorities or invalid' `issuer is not known'
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:45 [NOTICE] Download GID#9ac1e320f6fad7af not complete: 
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  Download Results:
Fri, 26 Aug 2022 11:49:55 +0000	  gid   |stat|avg speed  |path/URI
Fri, 26 Aug 2022 11:49:55 +0000	  ======+====+===========+=======================================================
Fri, 26 Aug 2022 11:49:55 +0000	  9ac1e3|ERR |       0B/s|http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  Status Legend:
Fri, 26 Aug 2022 11:49:55 +0000	  (ERR):error occurred.
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  aria2 will resume download if the transfer is restarted.
Fri, 26 Aug 2022 11:49:55 +0000	  If there are any errors, then see the log file. See '-l' option in help/man page for details.
Fri, 26 Aug 2022 11:49:55 +0000	Command wrapper:
Fri, 26 Aug 2022 11:49:55 +0000	  nxf-scratch-dir ip-10-0-97-116.ec2.internal:/tmp/nxf.ad23fp3kFc
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:44 [NOTICE] Downloading 1 item(s)
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:44 [NOTICE] CUID#7 - Redirecting to https://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz

How can we disable the SSL check or add the certificates to the appropriate container?
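One possible workaround is to relax certificate checking in aria2c via a custom config (an untested sketch, assuming the nf-core ARIA2 module forwards task.ext.args to the aria2c command line):

process {
    withName: 'ARIA2' {
        // aria2c option to skip certificate verification; use only if you
        // accept the security trade-off
        ext.args = '--check-certificate=false'
    }
}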

Command used and terminal output

2022-08-26T11:55:28Z ℹ  Showing engine logs for 'bigMemCtx'
2022-08-26T11:55:28Z ℹ  Getting log stream for workflow run '7ec7d927-bcd3-4530-9991-c30f66b48f50'
Fri, 26 Aug 2022 11:45:02 +0000	=== ENVIRONMENT ===
Fri, 26 Aug 2022 11:45:02 +0000	AWS_BATCH_JOB_ATTEMPT=1
Fri, 26 Aug 2022 11:45:02 +0000	NF_WORKDIR=s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/runs
Fri, 26 Aug 2022 11:45:02 +0000	HOSTNAME=ip-10-0-125-245.ec2.internal
Fri, 26 Aug 2022 11:45:02 +0000	AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/49964af3-6bc3-4607-aa83-bbd7175e9caf
Fri, 26 Aug 2022 11:45:02 +0000	NF_JOB_QUEUE=arn:aws:batch:us-east-1:336191139269:job-queue/BatchTaskBatchJobQueue15-pqWIkgXIaUvwC6EY
Fri, 26 Aug 2022 11:45:02 +0000	AWS_EXECUTION_ENV=AWS_ECS_EC2
Fri, 26 Aug 2022 11:45:02 +0000	PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Fri, 26 Aug 2022 11:45:02 +0000	PWD=/opt/work
Fri, 26 Aug 2022 11:45:02 +0000	JAVA_HOME=/usr/lib/jvm/jre-openjdk/
Fri, 26 Aug 2022 11:45:02 +0000	AWS_METADATA_SERVICE_TIMEOUT=10
Fri, 26 Aug 2022 11:45:02 +0000	AWS_BATCH_JOB_ID=7ec7d927-bcd3-4530-9991-c30f66b48f50
Fri, 26 Aug 2022 11:45:02 +0000	NF_LOGSDIR=s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs
Fri, 26 Aug 2022 11:45:02 +0000	SHLVL=1
Fri, 26 Aug 2022 11:45:02 +0000	HOME=/root
Fri, 26 Aug 2022 11:45:02 +0000	AWS_BATCH_CE_NAME=BatchTaskBatchComputeEnv-TLqMfOSBKUUxVnpX
Fri, 26 Aug 2022 11:45:02 +0000	AWS_METADATA_SERVICE_NUM_ATTEMPTS=10
Fri, 26 Aug 2022 11:45:02 +0000	ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/8fe39e21-5c00-4515-99b2-ca67a6f114e3
Fri, 26 Aug 2022 11:45:02 +0000	ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/8fe39e21-5c00-4515-99b2-ca67a6f114e3
Fri, 26 Aug 2022 11:45:02 +0000	AWS_BATCH_JQ_NAME=BatchTaskBatchJobQueue15-pqWIkgXIaUvwC6EY
Fri, 26 Aug 2022 11:45:02 +0000	_=/usr/bin/printenv
Fri, 26 Aug 2022 11:45:02 +0000	=== RUN COMMAND ===
Fri, 26 Aug 2022 11:45:02 +0000	s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/workflow/proteinfold/workflow.zip
Fri, 26 Aug 2022 11:45:02 +0000	Creating config file: ./nextflow.config
Fri, 26 Aug 2022 11:45:02 +0000	=== CONFIGURATION ===
Fri, 26 Aug 2022 11:45:02 +0000	workDir = "s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/runs"
Fri, 26 Aug 2022 11:45:02 +0000	process.executor = "awsbatch"
Fri, 26 Aug 2022 11:45:02 +0000	process.queue = "arn:aws:batch:us-east-1:336191139269:job-queue/BatchTaskBatchJobQueue15-pqWIkgXIaUvwC6EY"
Fri, 26 Aug 2022 11:45:02 +0000	aws.batch.cliPath = "/opt/aws-cli/bin/aws"
Fri, 26 Aug 2022 11:45:02 +0000	== Restoring Session Cache ==
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/000003.log to .nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/000003.log
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/000005.log to .nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/000005.log
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/index.agitated_ardinghelli to .nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/index.agitated_ardinghelli
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/history to .nextflow/history
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/LOCK to .nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/LOCK
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/META-INF/MANIFEST.MF to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/META-INF/MANIFEST.MF
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonClientFactory.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonClientFactory.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/CURRENT to .nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/CURRENT
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/META-INF/extensions.idx to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/META-INF/extensions.idx
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonClientFactory.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonClientFactory.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonClientFactory$_closure1.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonClientFactory$_closure1.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonPlugin.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonPlugin.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchExecutor$_createExecutorService_closure2.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchExecutor$_createExecutorService_closure2.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonPlugin.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonPlugin.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/META-INF/MANIFEST.MF to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/META-INF/MANIFEST.MF
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/MANIFEST-000002 to .nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/MANIFEST-000002
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonClientFactory$_closure2.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/AmazonClientFactory$_closure2.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchExecutor$_createExecutorService_closure1.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchExecutor$_createExecutorService_closure1.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/index.lethal_lamarck to .nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/index.lethal_lamarck
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchExecutor$_createExecutorService_closure3.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchExecutor$_createExecutorService_closure3.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchExecutor.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchExecutor.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchFileCopyStrategy.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchFileCopyStrategy.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchHelper.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchHelper.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/MANIFEST-000004 to .nextflow/cache/a4d22556-64c2-4f0f-b41b-cc8345cebb5f/db/MANIFEST-000004
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchHelper.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchHelper.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchHelper$_closure1.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchHelper$_closure1.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchExecutor.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchExecutor.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchFileCopyStrategy.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchFileCopyStrategy.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchHelper$_closure2.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchHelper$_closure2.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchProxy.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchProxy.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchProxy.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchProxy.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchScriptLauncher.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchScriptLauncher.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchScriptLauncher.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchScriptLauncher.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsOptions$_makeVols_closure2.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsOptions$_makeVols_closure2.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchTaskHandler$_findJobDef_closure2.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchTaskHandler$_findJobDef_closure2.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchTaskHandler.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchTaskHandler.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchTaskHandler.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchTaskHandler.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsOptions$_makeVols_closure1.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsOptions$_makeVols_closure1.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchTaskHandler$_terminateJob_closure1.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsBatchTaskHandler$_terminateJob_closure1.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsOptions.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsOptions.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsOptions.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/AwsOptions.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/BatchHelper$_closure2.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/BatchHelper$_closure2.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/BatchHelper.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/BatchHelper.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/BatchHelper$_closure1.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/BatchHelper$_closure1.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/S3Helper.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/S3Helper.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/BatchHelper.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/BatchHelper.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/S3Helper.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/batch/S3Helper.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/util/S3PathSerializer.class to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/util/S3PathSerializer.class
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/util/S3PathSerializer.groovy to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/classes/nextflow/cloud/aws/util/S3PathSerializer.groovy
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-batch-1.11.542.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-batch-1.11.542.jar
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-ecs-1.11.542.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-ecs-1.11.542.jar
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-core-1.11.542.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-core-1.11.542.jar
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-kms-1.11.542.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-kms-1.11.542.jar
Fri, 26 Aug 2022 11:45:03 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/commons-codec-1.10.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/commons-codec-1.10.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-s3-1.11.542.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-s3-1.11.542.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/commons-logging-1.2.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/commons-logging-1.2.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/httpcore-4.4.9.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/httpcore-4.4.9.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-iam-1.11.542.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-iam-1.11.542.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-ec2-1.11.542.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/aws-java-sdk-ec2-1.11.542.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/httpclient-4.5.5.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/httpclient-4.5.5.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/jmespath-java-1.11.542.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/jmespath-java-1.11.542.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/jackson-annotations-2.6.0.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/jackson-annotations-2.6.0.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/nxf-s3fs-1.1.0.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/nxf-s3fs-1.1.0.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/joda-time-2.8.1.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/joda-time-2.8.1.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/jackson-dataformat-cbor-2.6.7.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/jackson-dataformat-cbor-2.6.7.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/jackson-core-2.6.7.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/jackson-core-2.6.7.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/ion-java-1.0.2.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/ion-java-1.0.2.jar
Fri, 26 Aug 2022 11:45:04 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/logs/.nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/jackson-databind-2.6.7.2.jar to .nextflow/plr/87881143a005e0d03979c7dc22790185/nf-amazon-1.0.5/lib/jackson-databind-2.6.7.2.jar
Fri, 26 Aug 2022 11:45:04 +0000	== Staging S3 Project ==
Fri, 26 Aug 2022 11:45:05 +0000	download: s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/workflow/proteinfold/workflow.zip to project/workflow.zip
Fri, 26 Aug 2022 11:45:05 +0000	Archive:  ./workflow.zip
Fri, 26 Aug 2022 11:45:05 +0000	  inflating: .ipynb_checkpoints/MANIFEST-checkpoint.json  
Fri, 26 Aug 2022 11:45:05 +0000	  inflating: .ipynb_checkpoints/Readme-checkpoint.txt  
Fri, 26 Aug 2022 11:45:05 +0000	  inflating: .ipynb_checkpoints/inputs-checkpoint.json  
Fri, 26 Aug 2022 11:45:05 +0000	  inflating: .ipynb_checkpoints/nextflow-checkpoint.config  
Fri, 26 Aug 2022 11:45:05 +0000	  inflating: .ipynb_checkpoints/samplesheet-checkpoint.csv  
Fri, 26 Aug 2022 11:45:05 +0000	  inflating: MANIFEST.json           
Fri, 26 Aug 2022 11:45:05 +0000	  inflating: Readme.txt              
Fri, 26 Aug 2022 11:45:05 +0000	  inflating: inputs.json             
Fri, 26 Aug 2022 11:45:05 +0000	  inflating: nextflow.config         
Fri, 26 Aug 2022 11:45:05 +0000	  inflating: samplesheet.csv         
Fri, 26 Aug 2022 11:45:05 +0000	total 0
Fri, 26 Aug 2022 11:45:05 +0000	-rw-r--r-- 1 root root  161 Dec 31  1979 MANIFEST.json
Fri, 26 Aug 2022 11:45:05 +0000	-rw-r--r-- 1 root root  709 Dec 31  1979 Readme.txt
Fri, 26 Aug 2022 11:45:05 +0000	-rw-r--r-- 1 root root  234 Dec 31  1979 inputs.json
Fri, 26 Aug 2022 11:45:05 +0000	-rw-r--r-- 1 root root  147 Dec 31  1979 nextflow.config
Fri, 26 Aug 2022 11:45:05 +0000	-rw-r--r-- 1 root root  224 Dec 31  1979 samplesheet.csv
Fri, 26 Aug 2022 11:45:05 +0000	-rw-r--r-- 1 root root 3293 Aug 26 11:42 workflow.zip
Fri, 26 Aug 2022 11:45:05 +0000	cat ./project/MANIFEST.json
Fri, 26 Aug 2022 11:45:05 +0000	{
Fri, 26 Aug 2022 11:45:05 +0000	  "mainWorkflowURL": "https://github.com/ArlindNocaj/proteinfold.git",
Fri, 26 Aug 2022 11:45:05 +0000	  "inputFileURLs": [
Fri, 26 Aug 2022 11:45:05 +0000	    "inputs.json"
Fri, 26 Aug 2022 11:45:05 +0000	  ],
Fri, 26 Aug 2022 11:45:05 +0000	  "engineOptions": "-resume -r dev"
Fri, 26 Aug 2022 11:45:05 +0000	    
Fri, 26 Aug 2022 11:45:05 +0000	}
Fri, 26 Aug 2022 11:45:05 +0000	cat ./project/inputs.json
Fri, 26 Aug 2022 11:45:05 +0000	{"af2_db":"s3://alphafold2-agc/af2_db","full_dbs":false,"input":"s3://alphafold2-agc/input/samplesheet.csv","mode":"AF2","model_preset":"monomer","outdir":"s3://alphafold2-agc/results_samplesheet","skip_download":false,"use_gpu":true}
Fri, 26 Aug 2022 11:45:05 +0000	== Running Workflow ==
Fri, 26 Aug 2022 11:45:05 +0000	nextflow run https://github.com/ArlindNocaj/proteinfold.git -resume -r dev -params-file ./project/inputs.json
Fri, 26 Aug 2022 11:45:05 +0000	nextflow pid: 54
Fri, 26 Aug 2022 11:45:05 +0000	[1]+  Running                 nextflow run $NEXTFLOW_PROJECT $NEXTFLOW_PARAMS &
Fri, 26 Aug 2022 11:45:05 +0000	waiting ..
Fri, 26 Aug 2022 11:45:10 +0000	N E X T F L O W  ~  version 21.04.3
Fri, 26 Aug 2022 11:45:10 +0000	Pulling ArlindNocaj/proteinfold ...
Fri, 26 Aug 2022 11:45:15 +0000	 downloaded from https://github.com/ArlindNocaj/proteinfold.git
Fri, 26 Aug 2022 11:45:15 +0000	Launching `ArlindNocaj/proteinfold` [silly_meucci] - revision: 01fbbca6fb [dev]
Fri, 26 Aug 2022 11:45:22 +0000	------------------------------------------------------
Fri, 26 Aug 2022 11:45:22 +0000	                                        ,--./,-.
Fri, 26 Aug 2022 11:45:22 +0000	        ___     __   __   __   ___     /,-._.--~'
Fri, 26 Aug 2022 11:45:22 +0000	  |\ | |__  __ /  ` /  \ |__) |__         }  {
Fri, 26 Aug 2022 11:45:22 +0000	  | \| |       \__, \__/ |  \ |___     \`-._,-`-,
Fri, 26 Aug 2022 11:45:22 +0000	                                        `._,._,'
Fri, 26 Aug 2022 11:45:22 +0000	  nf-core/proteinfold v1.0dev
Fri, 26 Aug 2022 11:45:22 +0000	------------------------------------------------------
Fri, 26 Aug 2022 11:45:22 +0000	Core Nextflow options
Fri, 26 Aug 2022 11:45:22 +0000	  revision     : dev
Fri, 26 Aug 2022 11:45:22 +0000	  runName      : silly_meucci
Fri, 26 Aug 2022 11:45:22 +0000	  launchDir    : /opt/work/7ec7d927-bcd3-4530-9991-c30f66b48f50/1
Fri, 26 Aug 2022 11:45:22 +0000	  workDir      : /agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/runs
Fri, 26 Aug 2022 11:45:22 +0000	  projectDir   : /root/.nextflow/assets/ArlindNocaj/proteinfold
Fri, 26 Aug 2022 11:45:22 +0000	  userName     : root
Fri, 26 Aug 2022 11:45:22 +0000	  profile      : standard
Fri, 26 Aug 2022 11:45:22 +0000	  configFiles  : /root/.nextflow/assets/ArlindNocaj/proteinfold/nextflow.config, /opt/work/7ec7d927-bcd3-4530-9991-c30f66b48f50/1/nextflow.config
Fri, 26 Aug 2022 11:45:22 +0000	Global options
Fri, 26 Aug 2022 11:45:22 +0000	  input        : s3://alphafold2-agc/input/samplesheet.csv
Fri, 26 Aug 2022 11:45:22 +0000	  outdir       : s3://alphafold2-agc/results_samplesheet
Fri, 26 Aug 2022 11:45:22 +0000	  use_gpu      : true
Fri, 26 Aug 2022 11:45:22 +0000	  skip_download: false
Fri, 26 Aug 2022 11:45:22 +0000	Alphafold2 options
Fri, 26 Aug 2022 11:45:22 +0000	  af2_db       : s3://alphafold2-agc/af2_db
Fri, 26 Aug 2022 11:45:22 +0000	  full_dbs     : false
Fri, 26 Aug 2022 11:45:22 +0000	Colabfold options
Fri, 26 Aug 2022 11:45:22 +0000	  colabfold_db : /nfs/users/cn/abaltzis/db/colabfold
Fri, 26 Aug 2022 11:45:22 +0000	!! Only displaying parameters that differ from the pipeline defaults !!
Fri, 26 Aug 2022 11:45:22 +0000	------------------------------------------------------
Fri, 26 Aug 2022 11:45:22 +0000	If you use nf-core/proteinfold for your analysis please cite:
Fri, 26 Aug 2022 11:45:22 +0000	* The nf-core framework
Fri, 26 Aug 2022 11:45:22 +0000	  https://doi.org/10.1038/s41587-020-0439-x
Fri, 26 Aug 2022 11:45:22 +0000	* Software dependencies
Fri, 26 Aug 2022 11:45:22 +0000	  https://github.com/nf-core/proteinfold/blob/master/CITATIONS.md
Fri, 26 Aug 2022 11:45:22 +0000	------------------------------------------------------
Fri, 26 Aug 2022 11:45:24 +0000	Uploading local `bin` scripts folder to s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/runs/tmp/31/edaebf4ca50c97c8dcd5b7bd2c2893/bin
Fri, 26 Aug 2022 11:45:28 +0000	[5e/0e0094] Submitted process > NFCORE_PROTEINFOLD:ALPHAFOLD2:DOWNLOAD_AF2_DBS_AND_PARAMS:DOWNLOAD_UNIPROT
Fri, 26 Aug 2022 11:45:28 +0000	[16/a3f66f] Submitted process > NFCORE_PROTEINFOLD:ALPHAFOLD2:INPUT_CHECK:SAMPLESHEET_CHECK (samplesheet.csv)
Fri, 26 Aug 2022 11:45:28 +0000	[b5/b5c245] Submitted process > NFCORE_PROTEINFOLD:ALPHAFOLD2:DOWNLOAD_AF2_DBS_AND_PARAMS:DOWNLOAD_UNICLUST30
Fri, 26 Aug 2022 11:45:28 +0000	[7f/f044fe] Submitted process > NFCORE_PROTEINFOLD:ALPHAFOLD2:DOWNLOAD_AF2_DBS_AND_PARAMS:DOWNLOAD_SMALL_BFD
Fri, 26 Aug 2022 11:45:28 +0000	[5d/3e4746] Submitted process > NFCORE_PROTEINFOLD:ALPHAFOLD2:DOWNLOAD_AF2_DBS_AND_PARAMS:DOWNLOAD_PDB_MMCIF
Fri, 26 Aug 2022 11:45:28 +0000	[79/94c128] Submitted process > NFCORE_PROTEINFOLD:ALPHAFOLD2:DOWNLOAD_AF2_DBS_AND_PARAMS:DOWNLOAD_UNIREF90
Fri, 26 Aug 2022 11:45:28 +0000	[24/4620f1] Submitted process > NFCORE_PROTEINFOLD:ALPHAFOLD2:DOWNLOAD_AF2_DBS_AND_PARAMS:DOWNLOAD_AF2_PARAMS
Fri, 26 Aug 2022 11:45:28 +0000	[75/ce3948] Submitted process > NFCORE_PROTEINFOLD:ALPHAFOLD2:DOWNLOAD_AF2_DBS_AND_PARAMS:DOWNLOAD_PDB70
Fri, 26 Aug 2022 11:45:28 +0000	[69/2b3890] Submitted process > NFCORE_PROTEINFOLD:ALPHAFOLD2:DOWNLOAD_AF2_DBS_AND_PARAMS:DOWNLOAD_MGNIFY
Fri, 26 Aug 2022 11:45:29 +0000	[5d/ddf6b6] Submitted process > NFCORE_PROTEINFOLD:ALPHAFOLD2:MULTIQC
Fri, 26 Aug 2022 11:49:55 +0000	Error executing process > 'NFCORE_PROTEINFOLD:ALPHAFOLD2:DOWNLOAD_AF2_DBS_AND_PARAMS:DOWNLOAD_PDB70'
Fri, 26 Aug 2022 11:49:55 +0000	Caused by:
Fri, 26 Aug 2022 11:49:55 +0000	  Process `NFCORE_PROTEINFOLD:ALPHAFOLD2:DOWNLOAD_AF2_DBS_AND_PARAMS:DOWNLOAD_PDB70` terminated with an error exit status (1)
Fri, 26 Aug 2022 11:49:55 +0000	Command executed:
Fri, 26 Aug 2022 11:49:55 +0000	  download_pdb70.sh $PWD
Fri, 26 Aug 2022 11:49:55 +0000	Command exit status:
Fri, 26 Aug 2022 11:49:55 +0000	  1
Fri, 26 Aug 2022 11:49:55 +0000	Command output:
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:44 [NOTICE] Downloading 1 item(s)
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:44 [NOTICE] CUID#7 - Redirecting to https://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:45 [ERROR] CUID#7 - Download aborted. URI=http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	  Exception: [AbstractCommand.cc:351] errorCode=1 URI=https://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	    -> [SocketCore.cc:1018] errorCode=1 SSL/TLS handshake failure:  `not signed by known authorities or invalid' `issuer is not known'
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:45 [NOTICE] Download GID#9ac1e320f6fad7af not complete: 
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  Download Results:
Fri, 26 Aug 2022 11:49:55 +0000	  gid   |stat|avg speed  |path/URI
Fri, 26 Aug 2022 11:49:55 +0000	  ======+====+===========+=======================================================
Fri, 26 Aug 2022 11:49:55 +0000	  9ac1e3|ERR |       0B/s|http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  Status Legend:
Fri, 26 Aug 2022 11:49:55 +0000	  (ERR):error occurred.
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  aria2 will resume download if the transfer is restarted.
Fri, 26 Aug 2022 11:49:55 +0000	  If there are any errors, then see the log file. See '-l' option in help/man page for details.
Fri, 26 Aug 2022 11:49:55 +0000	Command wrapper:
Fri, 26 Aug 2022 11:49:55 +0000	  nxf-scratch-dir ip-10-0-97-116.ec2.internal:/tmp/nxf.ad23fp3kFc
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:44 [NOTICE] Downloading 1 item(s)
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:44 [NOTICE] CUID#7 - Redirecting to https://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:45 [ERROR] CUID#7 - Download aborted. URI=http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	  Exception: [AbstractCommand.cc:351] errorCode=1 URI=https://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	    -> [SocketCore.cc:1018] errorCode=1 SSL/TLS handshake failure:  `not signed by known authorities or invalid' `issuer is not known'
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  08/26 11:49:45 [NOTICE] Download GID#9ac1e320f6fad7af not complete: 
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  Download Results:
Fri, 26 Aug 2022 11:49:55 +0000	  gid   |stat|avg speed  |path/URI
Fri, 26 Aug 2022 11:49:55 +0000	  ======+====+===========+=======================================================
Fri, 26 Aug 2022 11:49:55 +0000	  9ac1e3|ERR |       0B/s|http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/old-releases/pdb70_from_mmcif_200916.tar.gz
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  Status Legend:
Fri, 26 Aug 2022 11:49:55 +0000	  (ERR):error occurred.
Fri, 26 Aug 2022 11:49:55 +0000	  
Fri, 26 Aug 2022 11:49:55 +0000	  aria2 will resume download if the transfer is restarted.
Fri, 26 Aug 2022 11:49:55 +0000	  If there are any errors, then see the log file. See '-l' option in help/man page for details.
Fri, 26 Aug 2022 11:49:55 +0000	Work dir:
Fri, 26 Aug 2022 11:49:55 +0000	  s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/runs/75/ce3948acad2e7ce3671579bc01c4f5
Fri, 26 Aug 2022 11:49:55 +0000	Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line
Fri, 26 Aug 2022 11:49:55 +0000	Execution cancelled -- Finishing pending tasks before exit
Fri, 26 Aug 2022 11:54:45 +0000	WARN: Cannot use mode `symlink` to publish files to path: /alphafold2-agc/af2_db -- Using mode `copy` instead


Relevant files

No response

System information

No response

DB paths not being resolved properly

Description of the bug

When running without --alphafold2_db (the default), all of the DB paths are printed as null/ in the execution log on stdout. It looks like there is an issue in the way these paths are resolved. This was with v1.0.0 of the pipeline.

params.yml

input: 'https://raw.githubusercontent.com/nf-core/test-datasets/proteinfold/testdata/samplesheet/v1.0/samplesheet.csv'
outdir: 's3://<OUT_DIR>'
use_gpu: true

[Screenshot of the execution log showing the null/ DB paths]

I will check whether these databases are published where required so they can be re-used.

DOWNLOAD_PDBMMCIF process breaks

Description of the bug

As reported in this Slack thread, users have consistently seen failures at the DOWNLOAD_PDBMMCIF step. The output of .command.err at this step is below:

find: gunzip: Argument list too long
find: gunzip: Argument list too long
mv: can't rename './raw/00/*.cif': No such file or directory
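One common way to avoid "Argument list too long" failures (an untested sketch, not necessarily the pipeline's actual fix; the ./raw path and *.cif.gz glob are assumed from the error above) is to feed the file list to gunzip in batches:

# decompress in batches of 512 files so no single gunzip invocation
# exceeds the kernel's argument-length limit
find ./raw -name '*.cif.gz' -print0 | xargs -0 -n 512 gunzip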

Only first item in samplesheet processed when no db parameter provided

Description of the bug

This happens when running colabfold_webserver without providing the --colabfold_db parameter (see this Slack thread), and also when running the alphafold2 mode without providing the --alphafold2_db parameter.

Command used and terminal output

For colabfold:

$ NXF_VER=23.10.1 nextflow run main.nf -profile docker --outdir results -params-file params.yml -c custom.config

Where params.yml:

input: 'samples.csv'
outdir: 'results'
mode: 'colabfold'
colabfold_server: 'webserver'

For AlphaFold2:

NXF_VER=23.10.1 nextflow run main.nf -profile docker --outdir results -params-file params_af2.yml -c custom.config

Where params_af2.yml:

input: 'samples.csv'
outdir: 'results'
mode: 'alphafold2'
alphafold2_mode : 'standard'

And custom.config for both:

stubRun = true

params {
    config_profile_name        = 'Test profile'
    config_profile_description = 'Minimal test dataset to check pipeline function'

    // Limit resources so that this can run on GitHub Actions
    max_cpus   = 2
    max_memory = '6.GB'
    max_time   = '6.h'
}

process {
    withName: 'ARIA2' {
        container = 'biocontainers/gawk:5.1.0'
    }
    withName: 'UNTAR' {
        container = 'biocontainers/gawk:5.1.0'
    }
    withName: 'COLABFOLD_BATCH' {
        container = 'biocontainers/gawk:5.1.0'
    }
    withName: 'RUN_ALPHAFOLD2' {
        container = 'biocontainers/gawk:5.1.0'
    }
}

Relevant files

No response

System information

No response

Could not load dynamic library 'libcuda.so.1'

Description of the bug

I get the following errors using this command line:

nextflow run proteinfold-dev --input sample_multi.csv --outdir colabfold_test --mode colabfold --colabfold_server webserver --num_recycle 20 --colabfold_model_preset alphafold2_multimer_v3 -profile ifb_core

I tried twice.

Command used and terminal output

Command error:
  INFO:    Environment variable SINGULARITYENV_TMPDIR is set, but APPTAINERENV_TMPDIR is preferred
  INFO:    Environment variable SINGULARITYENV_NXF_DEBUG is set, but APPTAINERENV_NXF_DEBUG is preferred
  2023-05-26 18:16:41.703867: W external/org_tensorflow/tensorflow/tsl/platform/default/dso_loader.cc:66] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /localcolabfold/colabfold-conda/lib:/usr/local/cuda/lib64:/.singularity.d/libs
  2023-05-26 18:16:41.703924: W external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:265] failed call to cuInit: UNKNOWN ERROR (303)
  2023-05-26 18:16:52.656784: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
  
    0%|          | 0/450 [elapsed: 00:00 remaining: ?]
  SUBMIT:   0%|          | 0/450 [elapsed: 00:00 remaining: ?]
  COMPLETE:   0%|          | 0/450 [elapsed: 00:00 remaining: ?]
  COMPLETE: 100%|██████████| 450/450 [elapsed: 00:00 remaining: 00:00]
  COMPLETE: 100%|██████████| 450/450 [elapsed: 00:02 remaining: 00:00]
  
    0%|          | 0/450 [elapsed: 00:00 remaining: ?]
  SUBMIT:   0%|          | 0/450 [elapsed: 00:00 remaining: ?]
  COMPLETE:   0%|          | 0/450 [elapsed: 00:00 remaining: ?]
  COMPLETE: 100%|██████████| 450/450 [elapsed: 00:00 remaining: 00:00]
  COMPLETE: 100%|██████████| 450/450 [elapsed: 00:01 remaining: 00:00]

Relevant files

No response

System information

No response

ColabFold search: multiple versions.yml files present when linking database files

Description of the bug

A db directory is used to collect the uniref30 and colabfold_db files, both of which contain a versions.yml. Because of that, the second ln command crashes, since the file versions.yml already exists in ./db.
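A possible workaround (a minimal sketch, not the pipeline's actual fix) is to skip the duplicate versions.yml files when linking the two databases into ./db:

  mkdir -p ./db
  # Link everything except the per-database versions.yml, which would otherwise collide
  for f in colabfold_envdb_202108/* uniref30_2103/*; do
      [ "$(basename "$f")" = "versions.yml" ] && continue
      ln -r -s "$f" ./db/
  done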

Command used and terminal output

../nextflow run ../proteinfold-custom/main.nf --mode colabfold_local --num_recycle 3 --use_gpu true --db_load_mode 0 --outdir `pwd` --input ../proteinfold-custom/assets/samplesheet.csv -c ../alphafold_run/raoul.config -profile singularity

executor >  local (3)
[94/41589f] process > NFCORE_PROTEINFOLD:COLABFOLD:INPUT_CHECK:SAMPLESHEET_CHECK (samplesheet.csv)                                                                                                        [100%] 1 of 1, cached: 1 ✔
[02/7b0f30] process > NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:ARIA2_COLABFOLD_PARAMS:ARIA2 (https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar)                          [100%] 1 of 1, cached: 1 ✔
[02/c342ec] process > NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:ARIA2_COLABFOLD_PARAMS:UNTAR (alphafold_params_2021-07-14.tar)                                                                   [100%] 1 of 1, cached: 1 ✔
[b3/aad50a] process > NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:ARIA2_COLABFOLD_DB:ARIA2 (http://wwwuser.gwdg.de/~compbiol/colabfold/colabfold_envdb_202108.tar.gz)                              [100%] 1 of 1, cached: 1 ✔
[3d/31bdb1] process > NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:ARIA2_COLABFOLD_DB:UNTAR (colabfold_envdb_202108.tar.gz)                                                                         [100%] 1 of 1, cached: 1 ✔
[0b/bf3bb3] process > NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:MMSEQS_TSV2EXPROFILEDB_COLABFOLDDB (colabfold_envdb_202108)                                                                      [100%] 1 of 1 ✔
[cd/340b4d] process > NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:ARIA2_UNIREF30:ARIA2 (http://wwwuser.gwdg.de/~compbiol/colabfold/uniref30_2103.tar.gz)                                           [100%] 1 of 1, cached: 1 ✔
[87/13e53e] process > NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:ARIA2_UNIREF30:UNTAR (uniref30_2103.tar.gz)                                                                                      [100%] 1 of 1, cached: 1 ✔
[9e/bffa3a] process > NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:MMSEQS_TSV2EXPROFILEDB_UNIPROT30 (uniref30_2103)                                                                                 [100%] 1 of 1 ✔
[67/3bc417] process > NFCORE_PROTEINFOLD:COLABFOLD:MMSEQS_COLABFOLDSEARCH ([sequence:T1024_T1, fasta:https://raw.githubusercontent.com/nf-core/test-datasets/proteinfold/testdata/sequences/T1024.fasta]) [100%] 1 of 1, failed: 1 ✘
[-        ] process > NFCORE_PROTEINFOLD:COLABFOLD:COLABFOLD_BATCH                                                                                                                                        -
[-        ] process > NFCORE_PROTEINFOLD:COLABFOLD:CUSTOM_DUMPSOFTWAREVERSIONS                                                                                                                            -
[-        ] process > NFCORE_PROTEINFOLD:COLABFOLD:MULTIQC                                                                                                                                                -
Execution cancelled -- Finishing pending tasks before exit
-[nf-core/proteinfold] Pipeline completed with errors-
Error executing process > 'NFCORE_PROTEINFOLD:COLABFOLD:MMSEQS_COLABFOLDSEARCH ([sequence:T1024_T1, fasta:https://raw.githubusercontent.com/nf-core/test-datasets/proteinfold/testdata/sequences/T1024.fasta])'

Caused by:
  Process `NFCORE_PROTEINFOLD:COLABFOLD:MMSEQS_COLABFOLDSEARCH ([sequence:T1024_T1, fasta:https://raw.githubusercontent.com/nf-core/test-datasets/proteinfold/testdata/sequences/T1024.fasta])` terminated with an error exit status (1)

Command executed:

  ln -r -s colabfold_envdb_202108/* ./db
  ln -r -s uniref30_2103/* ./db
  /colabfold_batch/colabfold-conda/bin/colabfold_search --db-load-mode 0 --threads 1 T1024.fasta ./db "result/"
  cp result/0.a3m T1024_T1.a3m

  cat <<-END_VERSIONS > versions.yml
  "NFCORE_PROTEINFOLD:COLABFOLD:MMSEQS_COLABFOLDSEARCH":
      colabfold_search: 1.2.0
  END_VERSIONS

Command exit status:
  1

Command output:
  (empty)

Command error:
  ln: failed to create symbolic link './db/versions.yml': File exists

Work dir:

Relevant files

No response

System information

N E X T F L O W ~ version 22.04.5

Create a local aria2 module

Description of feature

Many of the current bash scripts used to download the databases in the DOWNLOAD_AF2_DBS_AND_PARAMS subworkflow share a very similar structure: a download with aria2 followed by some formatting steps. This could be implemented in a single module that is called several times.
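A minimal sketch of the shared shape such a module could wrap (the function name and the formatting step are illustrative assumptions):

  # Download one database file with aria2 and apply an optional formatting step
  download_db() {
      local url=$1 outdir=$2
      mkdir -p "$outdir"
      aria2c --dir="$outdir" "$url"
      # Formatting step, e.g. unpacking whatever was downloaded
      case "$url" in
          *.tar.gz) tar -xzf "$outdir/$(basename "$url")" -C "$outdir" ;;
          *.gz)     gunzip "$outdir/$(basename "$url")" ;;
      esac
  }

  download_db 'ftp://ftp.wwpdb.org/pub/pdb/derived_data/pdb_seqres.txt' pdb_seqres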

Podman and shifter profiles not working

Description of the bug

When you try to run an alphafold2 job (and presumably colabfold as well) using the conda profile, it always errors out with:

python3: can't open file '/app/alphafold/run_alphafold.py': [Errno 2] No such file or directory

It's almost like it thinks it's running in a container, when it's not.

Command used and terminal output

/global/common/software/m2789/proteinfold/nextflow run nf-core/proteinfold --input $PSCRATCH/fastas.csv --outdir /pscratch/sd/k/kstine/out3 --mode alphafold2 --alphafold2_db /global/cfs/cdirs/m2789/nogaleslab-new/alphafold --full_dbs false --alphafold2_model_preset multimer --use_gpu true -profile conda
N E X T F L O W  ~  version 23.10.1
Launching `https://github.com/nf-core/proteinfold` [adoring_varahamihira] DSL2 - revision: 22a2ada9c2 [master]


------------------------------------------------------
                                        ,--./,-.
        ___     __   __   __   ___     /,-._.--~'
  |\ | |__  __ /  ` /  \ |__) |__         }  {
  | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                        `._,._,'
  nf-core/proteinfold v1.0.0-g22a2ada
------------------------------------------------------
Core Nextflow options
  revision                        : master
  runName                         : adoring_varahamihira
  launchDir                       : /global/cfs/cdirs/m2789/nogaleslab-new/user-scratch/kstine/alpha
  workDir                         : /global/cfs/cdirs/m2789/nogaleslab-new/user-scratch/kstine/alpha/work
  projectDir                      : /global/homes/k/kstine/.nextflow/assets/nf-core/proteinfold
  userName                        : kstine
  profile                         : conda
  configFiles                     : /global/homes/k/kstine/.nextflow/assets/nf-core/proteinfold/nextflow.config, /global/cfs/cdirs/m2789/nogaleslab-new/user-scratch/kstine/alpha/nextflow.config

Global options
  input                           : /pscratch/sd/k/kstine/fastas.csv
  outdir                          : /pscratch/sd/k/kstine/out3
  use_gpu                         : true

Alphafold2 options
  alphafold2_db                   : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold
  alphafold2_model_preset         : multimer

Alphafold2 DBs and parameters links options
  bfd_path                        : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold/bfd/*
  small_bfd_path                  : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold/small_bfd/*
  alphafold2_params_path          : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold/alphafold_params_*/*
  mgnify_path                     : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold/mgnify/*
  pdb70_path                      : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold/pdb70/**
  pdb_mmcif_path                  : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold/pdb_mmcif/**
  uniclust30_path                 : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold/uniclust30/**
  uniref90_path                   : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold/uniref90/*
  pdb_seqres_path                 : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold/pdb_seqres/*
  uniprot_path                    : /global/cfs/cdirs/m2789/nogaleslab-new/alphafold/uniprot/*

Colabfold DBs and parameters links options
  colabfold_db_path               : null/colabfold_envdb_202108
  uniref30_path                   : null/uniref30_2202
  colabfold_alphafold2_params_path: null/params/alphafold_params_2021-07-14
  colabfold_alphafold2_params_tags: [AlphaFold2-multimer-v1:alphafold_params_colab_2021-10-27, AlphaFold2-multimer-v2:alphafold_params_colab_2022-03-02, AlphaFold2-ptm:alphafold_params_2021-07-14]

!! Only displaying parameters that differ from the pipeline defaults !!
------------------------------------------------------
If you use nf-core/proteinfold for your analysis please cite:

* The nf-core framework
  https://doi.org/10.1038/s41587-020-0439-x

* Software dependencies
  https://github.com/nf-core/proteinfold/blob/master/CITATIONS.md
------------------------------------------------------
WARN: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  There is a problem with your Conda configuration!

  You will need to set-up the conda-forge and bioconda channels correctly.
  Please refer to https://bioconda.github.io/
  The observed channel order is 
  [conda-forge]
  but the following channel order is required:
  [conda-forge, bioconda, defaults]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
executor >  local (2)
[b0/61ac86] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:INPUT_CHECK:SAMPLESHEET_CHECK (fastas.csv)      [100%] 1 of 1 ✔
[86/9693b8] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:RUN_ALPHAFOLD2 (Periphilin4TASORMPP8complex_T1) [100%] 1 of 1, failed: 1 ✘
[-        ] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:CUSTOM_DUMPSOFTWAREVERSIONS                     -
[-        ] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:MULTIQC                                         -
Execution cancelled -- Finishing pending tasks before exit
-[nf-core/proteinfold] Pipeline completed with errors-
ERROR ~ Error executing process > 'NFCORE_PROTEINFOLD:ALPHAFOLD2:RUN_ALPHAFOLD2 (Periphilin4TASORMPP8complex_T1)'

Caused by:
  Process `NFCORE_PROTEINFOLD:ALPHAFOLD2:RUN_ALPHAFOLD2 (Periphilin4TASORMPP8complex_T1)` terminated with an error exit status (2)

Command executed:

  if [ -f pdb_seqres/pdb_seqres.txt ]
      then sed -i "/^\w*0/d" pdb_seqres/pdb_seqres.txt
  fi
  if [ -d params/alphafold_params_* ]; then ln -r -s params/alphafold_params_*/* params/; fi
  python3 /app/alphafold/run_alphafold.py         --fasta_paths=Periphilin4TASORMPP8complex.fasta         --model_preset=multimer --pdb_seqres_database_path=./pdb_seqres/pdb_seqres.txt --uniprot_database_path=./uniprot/uniprot.fasta          --db_preset=reduced_dbs --small_bfd_database_path=./small_bfd/bfd-first_non_consensus_sequences.fasta         --output_dir=$PWD         --data_dir=$PWD         --uniref90_database_path=./uniref90/uniref90.fasta         --mgnify_database_path=./mgnify/mgy_clusters_2018_12.fa         --template_mmcif_dir=./pdb_mmcif/mmcif_files         --obsolete_pdbs_path=./pdb_mmcif/obsolete.dat         --random_seed=53343         --use_gpu_relax=true --max_template_date 2020-05-14
  
  cp "Periphilin4TASORMPP8complex"/ranked_0.pdb ./"Periphilin4TASORMPP8complex".alphafold.pdb
  cd "Periphilin4TASORMPP8complex"
  awk '{print $6"\t"$11}' ranked_0.pdb | uniq > ranked_0_plddt.tsv
  for i in 1 2 3 4
      do awk '{print $6"\t"$11}' ranked_$i.pdb | uniq | awk '{print $2}' > ranked_"$i"_plddt.tsv
  done
  paste ranked_0_plddt.tsv ranked_1_plddt.tsv ranked_2_plddt.tsv ranked_3_plddt.tsv ranked_4_plddt.tsv > plddt.tsv
  echo -e Positions"\t"rank_0"\t"rank_1"\t"rank_2"\t"rank_3"\t"rank_4 > header.tsv
  cat header.tsv plddt.tsv > ../"Periphilin4TASORMPP8complex"_plddt_mqc.tsv
  cd ..
  
  cat <<-END_VERSIONS > versions.yml
  "NFCORE_PROTEINFOLD:ALPHAFOLD2:RUN_ALPHAFOLD2":
      python: $(python3 --version | sed 's/Python //g')
  END_VERSIONS

Command exit status:
  2

Command output:
  (empty)

Command error:
  python3: can't open file '/app/alphafold/run_alphafold.py': [Errno 2] No such file or directory

Work dir:
  /global/cfs/cdirs/m2789/nogaleslab-new/user-scratch/kstine/alpha/work/86/9693b820526728e1f97d8074b34525

Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`

 -- Check '.nextflow.log' file for details

Relevant files

nextflow.log

System information

Version: 23.10.1.5891
Hardware: HPC
Executor: local & slurm
Container engine: conda
OS: SLES 15
nf-core/proteinfold: current master

Add a PREPARE_DB subworkflow

Description of feature

After discussing in Slack with @ewels and @athbaltzis how the DBs are currently passed to the pipeline, we came up with the idea that it could make sense to create a subworkflow similar to the PREPARE_GENOME subworkflow used in many of the genome pipelines.
The PREPARE_DB subworkflow would download the DBs in case they are not provided to the pipeline. This way, the first time the pipeline is executed the DBs can be downloaded, if the user does not want to prefetch them beforehand. PREPARE_DB will need two modes, one for AlphaFold2 and another one for ColabFold (or we could even create two different subworkflows). It could be a very useful feature, since users won't need to run steps outside the pipeline to download the DBs beforehand.
What do others think (@ewels, @drpatelh, @athbaltzis, others)?

PDBMMCIF download error

Description of the bug

I get the following error on two different clusters:
Error executing process > 'NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:DOWNLOAD_PDBMMCIF'

Any clue what could be wrong? Is it on my side?

Command used and terminal output

> nextflow run proteinfold-dev --input sample_alpha.csv --outdir alpha_nfcore --mode alphafold2 -profile genotoul

[d0/00f014] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:INPUT_CHECK:SAMPLESHEET_CHECK (sample_alpha.csv)                                                               [100%] 1 of 1 ✔
[d5/d4257e] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_SMALL_BFD:ARIA2 (https://storage.googleapis.com/alphafold-databases/reduced_db... [100%] 1 of 1 ✔
[8e/17eac7] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_SMALL_BFD:GUNZIP (bfd-first_non_consensus_sequences.fasta.gz)                     [100%] 1 of 1 ✔
[9b/5e26d3] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_ALPHAFOLD2_PARAMS:ARIA2 (https://storage.googleapis.com/alphafold/alphafold_pa... [100%] 1 of 1 ✔
[3e/02e21d] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_ALPHAFOLD2_PARAMS:UNTAR (alphafold_params_2022-12-06.tar)                         [100%] 1 of 1 ✔
[9e/a083fa] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_MGNIFY:ARIA2 (https://storage.googleapis.com/alphafold-databases/casp14_versio... [100%] 1 of 1 ✔
[92/a3aa7e] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_MGNIFY:GUNZIP (mgy_clusters_2018_12.fa.gz)                                        [100%] 1 of 1 ✔
[9f/8fa722] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_PDB70:ARIA2 (http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_d... [100%] 1 of 1 ✔
[93/cc6f2c] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_PDB70:UNTAR (pdb70_from_mmcif_200916.tar.gz)                                      [100%] 1 of 1 ✔
[ab/7e2d9d] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:DOWNLOAD_PDBMMCIF                                                                       [100%] 3 of 3, failed: 3..
[ba/1f9c09] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_UNIREF30:ARIA2 (https://storage.googleapis.com/alphafold-databases/v2.3/UniRef... [100%] 1 of 1 ✔
[-        ] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_UNIREF30:UNTAR (UniRef30_2021_03.tar.gz)                                          -
[-        ] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_UNIREF90:ARIA2 (ftp://ftp.uniprot.org/pub/databases/uniprot/uniref/uniref90/un... -
[-        ] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_UNIREF90:GUNZIP                                                                   -
[70/9b8eb5] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2 (ftp://ftp.wwpdb.org/pub/pdb/derived_data/pdb_seqres.txt)                         [100%] 1 of 1 ✔
[ff/ff29b9] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_UNIPROT_SPROT:ARIA2 (ftp://ftp.ebi.ac.uk/pub/databases/uniprot/current_release... [100%] 1 of 1 ✔
[c2/4a3af6] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_UNIPROT_SPROT:GUNZIP (uniprot_sprot.fasta.gz)                                     [100%] 1 of 1 ✔
[3f/6e0d5e] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_UNIPROT_TREMBL:ARIA2 (ftp://ftp.ebi.ac.uk/pub/databases/uniprot/current_releas... [100%] 1 of 1 ✔
[-        ] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:ARIA2_UNIPROT_TREMBL:GUNZIP (uniprot_trembl.fasta.gz)                                   -
[-        ] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:COMBINE_UNIPROT                                                                         -
[-        ] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:RUN_ALPHAFOLD2                                                                                                 -
[-        ] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:CUSTOM_DUMPSOFTWAREVERSIONS                                                                                    -
[-        ] process > NFCORE_PROTEINFOLD:ALPHAFOLD2:MULTIQC                                                                                                        -
Error executing process > 'NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:DOWNLOAD_PDBMMCIF'

Caused by:
  Process `NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:DOWNLOAD_PDBMMCIF` terminated with an error exit status (12)

Command executed:

  set -euo pipefail
  
  mkdir raw
  
  rsync \
      --recursive \
      --links \
      --perms \
      --times \
      --compress \
      --info=progress2 \
      --delete \
      --port=33444 \
      rsync.rcsb.org::ftp_data/structures/divided/mmCIF/ \
      raw
  
  echo "Unzipping all mmCIF files..."
  find ./raw -type f -iname "*.gz" -exec gunzip {} +
  
  echo "Flattening all mmCIF files..."
  mkdir mmcif_files
  find ./raw -type d -empty -delete  # Delete empty directories.
  for subdir in ./raw/*; do
      mv "${subdir}/"*.cif ./mmcif_files
  done
  
  # Delete empty download directory structure.
  find ./raw -type d -empty -delete
  
  aria2c \
      ftp://ftp.wwpdb.org/pub/pdb/data/status/obsolete.dat
  
  cat <<-END_VERSIONS > versions.yml
  "NFCORE_PROTEINFOLD:ALPHAFOLD2:PREPARE_ALPHAFOLD2_DBS:DOWNLOAD_PDBMMCIF":
      sed: $(echo $(sed --version 2>&1) | head -1 | sed 's/^.*GNU sed) //; s/ .*$//')
      rsync: $(rsync --version | head -1 | sed 's/^rsync  version //; s/  protocol version [[:digit:]]*//')
      aria2c: $( aria2c -v | head -1 | sed 's/aria2 version //' )
  END_VERSIONS

Command exit status:
  12

Command output:
  (empty)

Command error:
  rsync: [Receiver] safe_read failed to read 1 bytes: Connection timed out (110)
  rsync error: error in rsync protocol data stream (code 12) at io.c(282) [Receiver=3.2.7]

Work dir:
  /work/project/ijpb/nbouche/work/ab/7e2d9dff49bf09f2f875bfab133e75

Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`

Relevant files

No response

System information

No response
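As a stopgap for the DOWNLOAD_PDBMMCIF failure above (a hedged workaround, not a pipeline feature): since exit status 12 came from a connection timeout, simply retrying the same rsync a few times can get past transient drops on the RCSB mirror:

  # Retry the PDB mmCIF sync a few times on transient network failures
  for attempt in 1 2 3; do
      rsync --recursive --links --perms --times --compress \
            --info=progress2 --delete --port=33444 \
            rsync.rcsb.org::ftp_data/structures/divided/mmCIF/ raw \
          && break
      echo "rsync attempt ${attempt} failed; retrying in 60 s..." >&2
      sleep 60
  done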

Add TCRdock for TCR:pMHC prediction

Description of feature

Hey there!
As part of my bachelor's thesis, I am working on integrating TCR:pMHC structure prediction using the TCRdock tool into the nf-core/proteinfold pipeline.
This is a specialized AlphaFold-based tool for the structure prediction of TCR:pMHC complexes.
I am nearly done with the integration and it would be awesome to add the functionality to the official proteinfold pipeline for others to use. I will create a PR shortly.

Best,
Elias

Parse failed with pdb_seqres.txt

Description of the bug

This was issue #560 (now closed) in alphafold.

Parse failed (sequence file /mnt/pdb_seqres_database_path/pdb_seqres.txt):
Line N: illegal character 0

where N is the line number; many lines are affected by this bug (4 at the time of writing).

Manually fixing the file by deleting the lines matching the regex /^\w*0/ worked; this could be automated when downloading the database. To note: the database was downloaded using the official script download_pdb_seqres.sh.
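For reference, the automated fix can be a one-liner; this is the same sed pattern the pipeline applies in the RUN_ALPHAFOLD2 command shown earlier in this tracker:

  # Drop the lines containing an illegal 0 character from pdb_seqres.txt
  sed -i '/^\w*0/d' pdb_seqres.txt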

Command used and terminal output

No response

Relevant files

pdb_seqres_database_path/pdb_seqres.txt

System information

No response

ColabFold: MMSEQS_TSV2EXPROFILEDB creates versions.yml in the wrong place

Description of the bug

I am just trying to run colabfold locally using the assets/samplesheet.csv.

The config file below (raoul.config) is just for setting the GPU and extending the time for some processes which took more than 4h.

Command used and terminal output

$ ../nextflow run ../proteinfold-custom/main.nf --mode colabfold_local --num_recycle 3 --use_gpu true --db_load_m
ode 0 --outdir `pwd` --input ../proteinfold-custom/assets/samplesheet.csv -c ../alphafold_run/raoul.config -profile singularity -resume
Launching `../proteinfold-custom/main.nf` [sad_franklin] DSL2 - revision: 4ef0649189

OUTPUT

Error executing process > 'NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:MMSEQS_TSV2EXPROFILEDB_UNIPROT30 (uniref30_2103)'

Caused by:
  Missing output file(s) `versions.yml` expected by process `NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:MMSEQS_TSV2EXPROFILEDB_UNIPROT30 (uniref30_2103)`

Command executed:

  cd uniref30_2103
  mmseqs tsv2exprofiledb \
      "uniref30_2103" \
      "uniref30_2103_db"

  cat <<-END_VERSIONS > versions.yml
  "NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:MMSEQS_TSV2EXPROFILEDB_UNIPROT30":
      mmseqs: $(mmseqs | grep 'Version' | sed 's/MMseqs2 Version: //')
  END_VERSIONS

Command exit status:
  0

Command output:
  tsv2exprofiledb uniref30_2103 uniref30_2103_db

  MMseqs Version:       b0b8e85f3b8437c10a666e3ea35c78c0ad0d7ec2
  Verbosity     3

Work dir:

Relevant files

No response

System information

N E X T F L O W ~ version 22.04.5
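Looking at the Command executed block above, the cause seems to be the initial cd uniref30_2103: the heredoc then writes versions.yml inside that directory instead of in the task work directory where Nextflow expects it. A minimal sketch of a fix (assuming the module keeps the cd) is to step back out before writing the file:

  cd uniref30_2103
  mmseqs tsv2exprofiledb "uniref30_2103" "uniref30_2103_db"
  cd ..  # back to the task work dir so versions.yml lands where Nextflow looks for it
  cat <<-END_VERSIONS > versions.yml
  "NFCORE_PROTEINFOLD:COLABFOLD:PREPARE_COLABFOLD_DBS:MMSEQS_TSV2EXPROFILEDB_UNIPROT30":
      mmseqs: $(mmseqs | grep 'Version' | sed 's/MMseqs2 Version: //')
  END_VERSIONS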

Get pipeline working on AWS

Description of feature

The purpose of this issue is to document the Nextflow/Nextflow Tower settings required to run the pipeline on AWS Batch. As the pipeline matures we can finesse these requirements, but it would be good to periodically dump the two settings outlined below.

If using Tower, you can install and set-up the Tower CLI with the instructions here:

  1. Dump JSON file with settings required to create the AWS Compute Environment:
     tw compute-envs export --name=<COMPUTE_ENV_NAME> --workspace=<ORGANISATION>/<WORKSPACE>
  2. Dump JSON file with settings required to create the Pipeline in the Launchpad:
     tw pipelines export --name=<PIPELINE_NAME> --workspace=<ORGANISATION>/<WORKSPACE>

Add OpenFold workflow

Description of feature

Integrate OpenFold into the nf-core/proteinfold pipeline. OpenFold is a faithful but trainable PyTorch reproduction of AlphaFold2.

OpenFold has the following advantages over the reference implementation:

  • Faster inference on GPU, sometimes by as much as 2x. The greatest speedups are achieved on (>= Ampere) GPUs.
  • Inference on extremely long chains, made possible by our implementation of low-memory attention (Rabe & Staats 2021). OpenFold can predict the structures of sequences with more than 4000 residues on a single A100, and even longer ones with CPU offloading.
  • Custom CUDA attention kernels modified from FastFold's kernels support in-place attention during inference and training. They use 4x and 5x less GPU memory than equivalent FastFold and stock PyTorch implementations, respectively.
  • Efficient alignment scripts using the original AlphaFold HHblits/JackHMMER pipeline or ColabFold's, which uses the faster MMseqs2 instead.
  • FlashAttention support greatly speeds up MSA attention.

Source code: https://github.com/aqlaboratory/openfold
Publication: https://www.biorxiv.org/content/10.1101/2022.11.20.517210v2

When colabfold_db is initialized, no params directory is created

Description of the bug

Running the pipeline without specifying --colabfold_db starts the creation of the ColabFold database: it downloads the data with aria2c, then performs the untar and later the indexing.
In the specific case of the alphafold_params db, it downloads the files, untars them and publishes them into the --outdir directory DBs/colabfold_local/.
When the user later wants to reuse the data, they must specify --colabfold_db DBs/colabfold_local/. However, PREPARE_COLABFOLD_DBS expects a DBs/colabfold_local/params/ directory, which does not exist.

The params sub-directory is also used by the AF2 workflow.

In the meantime, a workaround is to create the directory and then a symlink to the alphafold_params_yyyy-mm-dd dir, as sketched below.

It is also strongly suggested to change the symlink to a copy in the colabfold_local UNTAR process.
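A minimal sketch of that workaround (the parameter directory name follows the 2021-07-14 tag used elsewhere in this tracker; adjust it to whatever was actually downloaded):

  cd DBs/colabfold_local
  mkdir -p params
  # Link the published parameter directory where PREPARE_COLABFOLD_DBS expects to find it
  ln -s ../alphafold_params_2021-07-14 params/alphafold_params_2021-07-14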

Command used and terminal output

No response

Relevant files

No response

System information

No response

SIF files causing AlphaFold container to increase by 7-8GB and causing failures/increased pull costs.

Description of the bug

One container this analysis relies on, quay.io/repository/nf-core/proteinfold_alphafold2_standard, has unintentionally copied over a large number of SIF files, likely legacy from a Singularity build. This bloats the container by 8-9 GB, causing failures when pulling the container to ECS due to the large file sizes. For cases where customers have increased the default volume size, it can incur extra compute costs due to NAT Gateway traversal costs.

The offending layer looks to be COPY . /app/alphafold # buildkit, which suggests that the build context was not cleaned up prior to the copy.

root@a201dd180cd5:/app/alphafold# du -hs * | grep "G"
4.0K    CONTRIBUTING.md
1.3G    deepeca.tar.gz
2.4G    dmpfold.sif
2.5G    dmpfold_crg.sif
2.4G    test.sif

Full Docker history for validation:

Admin:~/environment/alphafold $ docker history quay.io/nf-core/proteinfold_alphafold2_standard:1.1.0
IMAGE          CREATED         CREATED BY                                      SIZE      COMMENT
5438ab648f21   9 months ago    RUN /bin/bash -o pipefail -c ldconfig # buil…   33.4kB    buildkit.dockerfile.v0
<missing>      9 months ago    RUN /bin/bash -o pipefail -c cd /app/alphafo…   0B        buildkit.dockerfile.v0
<missing>      9 months ago    RUN /bin/bash -o pipefail -c chmod u+s /sbin…   1.03MB    buildkit.dockerfile.v0
<missing>      9 months ago    RUN /bin/bash -o pipefail -c sed -i "s|alpha…   35kB      buildkit.dockerfile.v0
<missing>      9 months ago    RUN /bin/bash -o pipefail -c pip3 install --…   1.75GB    buildkit.dockerfile.v0
<missing>      9 months ago    RUN /bin/bash -o pipefail -c wget -q -P /app…   9.29kB    buildkit.dockerfile.v0
<missing>      9 months ago    COPY . /app/alphafold # buildkit                9.64GB    buildkit.dockerfile.v0
<missing>      9 months ago    RUN /bin/bash -o pipefail -c /conda/bin/cond…   2.47GB    buildkit.dockerfile.v0
<missing>      10 months ago   RUN /bin/bash -o pipefail -c wget -q -P /tmp…   322MB     buildkit.dockerfile.v0
<missing>      10 months ago   RUN /bin/bash -o pipefail -c git clone --bra…   93.2MB    buildkit.dockerfile.v0
<missing>      10 months ago   RUN /bin/bash -o pipefail -c git clone https…   34.8MB    buildkit.dockerfile.v0
<missing>      10 months ago   RUN /bin/bash -o pipefail -c apt-get update …   627MB     buildkit.dockerfile.v0
<missing>      10 months ago   RUN /bin/bash -o pipefail -c apt-key adv --k…   3.43kB    buildkit.dockerfile.v0
<missing>      10 months ago   ENV PATH=/conda/bin:/usr/local/nvidia/bin:/u…   0B        buildkit.dockerfile.v0
<missing>      10 months ago   ENV LD_LIBRARY_PATH=/conda/lib:/usr/local/cu…   0B        buildkit.dockerfile.v0
<missing>      10 months ago   SHELL [/bin/bash -o pipefail -c]                0B        buildkit.dockerfile.v0
<missing>      10 months ago   LABEL authors=Athanasios Baltzis, Leila Mans…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get u…   1.53GB    buildkit.dockerfile.v0
<missing>      2 years ago     LABEL com.nvidia.cudnn.version=8.2.4.15         0B        buildkit.dockerfile.v0
<missing>      2 years ago     LABEL maintainer=NVIDIA CORPORATION <cudatoo…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     ARG TARGETARCH                                  0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_CUDNN_PACKAGE_NAME=libcudnn8             0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_CUDNN_PACKAGE=libcudnn8=8.2.4.15-1+cu…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_CUDNN_VERSION=8.2.4.15                   0B        buildkit.dockerfile.v0
<missing>      2 years ago     RUN |1 TARGETARCH=amd64 /bin/sh -c apt-mark …   258kB     buildkit.dockerfile.v0
<missing>      2 years ago     RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get u…   2.07GB    buildkit.dockerfile.v0
<missing>      2 years ago     LABEL maintainer=NVIDIA CORPORATION <cudatoo…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     ARG TARGETARCH                                  0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_LIBNCCL_PACKAGE=libnccl2=2.11.4-1+cud…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NCCL_VERSION=2.11.4-1                       0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_LIBNCCL_PACKAGE_VERSION=2.11.4-1         0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_LIBNCCL_PACKAGE_NAME=libnccl2            0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_LIBCUBLAS_PACKAGE=libcublas-11-4=11.6…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_LIBCUBLAS_VERSION=11.6.1.51-1            0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_LIBCUBLAS_PACKAGE_NAME=libcublas-11-4    0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_LIBCUSPARSE_VERSION=11.6.0.120-1         0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_LIBNPP_PACKAGE=libnpp-11-4=11.4.0.110…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_LIBNPP_VERSION=11.4.0.110-1              0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_NVTX_VERSION=11.4.120-1                  0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_CUDA_LIB_VERSION=11.4.2-1                0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NVIDIA_DRIVER_CAPABILITIES=compute,utili…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NVIDIA_VISIBLE_DEVICES=all                  0B        buildkit.dockerfile.v0
<missing>      2 years ago     COPY NGC-DL-CONTAINER-LICENSE / # buildkit      16kB      buildkit.dockerfile.v0
<missing>      2 years ago     ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/u…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV PATH=/usr/local/nvidia/bin:/usr/local/cu…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     RUN |1 TARGETARCH=amd64 /bin/sh -c echo "/us…   46B       buildkit.dockerfile.v0
<missing>      2 years ago     RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get u…   36.8MB    buildkit.dockerfile.v0
<missing>      2 years ago     ENV CUDA_VERSION=11.4.2                         0B        buildkit.dockerfile.v0
<missing>      2 years ago     RUN |1 TARGETARCH=amd64 /bin/sh -c apt-get u…   16.5MB    buildkit.dockerfile.v0
<missing>      2 years ago     LABEL maintainer=NVIDIA CORPORATION <cudatoo…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     ARG TARGETARCH                                  0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_CUDA_COMPAT_PACKAGE=cuda-compat-11-4     0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NV_CUDA_CUDART_VERSION=11.4.108-1           0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NVIDIA_REQUIRE_CUDA=cuda>=11.4 brand=tes…   0B        buildkit.dockerfile.v0
<missing>      2 years ago     ENV NVARCH=x86_64                               0B        buildkit.dockerfile.v0
<missing>      2 years ago     /bin/sh -c #(nop)  CMD ["bash"]                 0B
<missing>      2 years ago     /bin/sh -c #(nop) ADD file:2aa3e79e3cff3c048…   63.1MB

Apologies if this is the wrong place... I was looking for the Dockerfile to comment on. Happy to talk it through with someone on Slack as well.
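On the fix side, a minimal sketch (hypothetical; the real Dockerfile and build process may differ) is to make sure the build context is clean before the COPY . /app/alphafold layer, e.g. by deleting the leftover artifacts before building:

  # Hypothetical pre-build cleanup so COPY . /app/alphafold does not
  # pick up leftover Singularity images and tarballs
  rm -f ./*.sif ./deepeca.tar.gz
  docker build -t proteinfold_alphafold2_standard .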

Command used and terminal output

No response

Relevant files

No response

System information

No response

Database uniref30_2202_db does not exist while running colabfold local

Description of the bug

The python code search.py cannot find uniref30_2202_db even though I already specified the path. Maybe I missed something important? Thanks a lot in advance.

ERROR ~ Error executing process > 'NFCORE_PROTEINFOLD:COLABFOLD:MMSEQS_COLABFOLDSEARCH (T478K_T1)'
Caused by:
Process NFCORE_PROTEINFOLD:COLABFOLD:MMSEQS_COLABFOLDSEARCH (T478K_T1) terminated with an error exit status (1)
Traceback (most recent call last):
  File "/colabfold_batch/colabfold-conda/bin/colabfold_search", line 8, in <module>
    sys.exit(main())
  File "/colabfold_batch/colabfold-conda/lib/python3.7/site-packages/colabfold/mmseqs/search.py", line 444, in main
    threads=args.threads,
  File "/colabfold_batch/colabfold-conda/lib/python3.7/site-packages/colabfold/mmseqs/search.py", line 66, in mmseqs_search_monomer
    raise FileNotFoundError(f"Database {db} does not exist")
FileNotFoundError: Database uniref30_2202_db does not exist

Command used and terminal output

nextflow -bg run nf-core/proteinfold --input samplesheet.csv --outdir ./ --mode colabfold --colabfold_server local --num_recycle 3 --use_amber false --colabfold_model_preset AlphaFold2-ptm --use_gpu false --db_load_mode 0 -profile docker --colabfold_db_path ./colabfold_envdb_202108 --uniref30_path ./uniref30_2202_db --colabfold_alphafold2_params_path ./alphafold_params_2021-07-14 --max_memory 80.GB --colabfold_db ./

Relevant files

No response

System information

No response

Issue with excessive symlinking

Description of the bug

We want to run proteinfold on our cluster, where we already have the AlphaFold data. However, when passing the --alphafold2_db flag to point to this data, proteinfold tries to symlink in the thousands of files located in that directory tree. This causes obvious issues.

Detailed Explanation

  • Firstly, I believe that the parameters relating to the alphafold database are not explained in enough detail. What files need to be in each path? What directory structure? Should the databases be zipped or unzipped? Knowing this would allow us to diagnose this issue better.
  • This issue manifested itself as Command output: sbatch: error: Batch job submission failed: Pathname of a file, directory or other parameter too long, because the excessive symlinking resulted in a 20 MB sbatch script. In other environments it would manifest differently.

Command used and terminal output

`nextflow run nf-core/proteinfold --input samples.csv --outdir ./output --mode alphafold2 --alphafold2_db /vast/projects/alphafold/databases --full_dbs=true --alphafold2_model_preset monomer --use_gpu=true -profile wehi`

Relevant files

The .command.run file that causes the issues:
command.zip

System information

  • Nextflow 22.10.4
  • HPC
  • Slurm executor
  • Singularity engine
  • CentOS 7 OS
  • proteinfold 1.0.0

Add Omegafold workflow

Description of feature

OmegaFold is a new approach for predicting high-resolution protein structure from a single primary sequence alone without the use of multiple sequence alignments. It is a good candidate for inclusion in the ProteinFold pipeline.

Publication

Repository

Add capability for localcolabfold to run vs a locally hosted database (FSx Lustre or local web-server)

Description of feature

Currently, the localcolabfold.nf script relies on a colabfold_batch command run against the public web server hosted by the ColabFold developers (https://github.com/sokrypton/ColabFold). This may not be obvious to first-time users of colabfold, because the workflow checks for af2 native database files in the database path. The workflow does not use the database files if the mode is specified as colabfold.

The issue with using the public web server is that it is a limited resource. After submitting anywhere from 10-100 jobs, additional jobs get killed by the web server (see the attached file command-2.err.txt) and IP addresses can remain blocked for some time (1-2 days). This is necessary to keep the webserver operational and serving the community.

A scalable solution for running colabfold would be highly beneficial to all nf-core/proteinfold users. Two options exist:

Solution 1: querying against a locally-installed colabfold db. The colabfold sequence dbs, which are distinct from the af2 native sequence dbs because colabfold uses MMseqs2 rather than jackhmmer for MSA generation, could be installed locally on an FSx Lustre filesystem. AWS developers have provided such a solution using FSx. NOTE: this would also require the issue "Split colabfold_batch command in localcolabfold.nf into two separate, colabfold_search + colabfold_batch, commands" to be resolved, since colabfold_batch will not run against a file system, only a webserver.

Solution 2: Provide an option for users to pass their own, locally hosted webserver URL to the workflow. The local webserver can be set up following the instructions here.

Of the two solutions, Solution 1 is preferable for most users. This is because it only requires that a database be maintained in a file system. Solution 2 would require a webserver to be running constantly, or to be configured and launched as part of the pipeline execution (overly complex). The webserver requires at least 2 TB of RAM, which amounts to a considerable overhead if running constantly.

Split colabfold_batch command in localcolabfold.nf into two separate, colabfold_search + colabfold_batch, commands

Description of feature

This is related to issue #19, "Add capability for localcolabfold to run vs a locally-hosted database (FSx Lustre or webserver)". This issue partially blocks that one in the case where it is resolved through the FSx Lustre approach.

A version of the workflow should exist, calling an alternative/variant of /modules/local/localcolabfold.nf, wherein the executable calls to "colabfold_batch" (lines 23 and 39 in the current Nextflow script) are each split into two distinct executable calls, "colabfold_search" and then "colabfold_batch".

"colabfold_search" takes an input .fasta sequence and queries it against a locally installed colabfold database, generating an MSA as output.

"colabfold_batch" takes the MSA output generated by "colabfold_search" and runs structural inference on it.

This split is necessary to be able to run a query using a locally-installed database, rather than against a webserver.
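A minimal sketch of the proposed two-step invocation (paths are illustrative; the positional arguments match how colabfold_search is already called elsewhere in this tracker):

  # Step 1: query the locally installed ColabFold database to build the MSA(s)
  colabfold_search --threads 4 input.fasta /path/to/colabfold_db msas/

  # Step 2: run structure inference on the MSA(s) generated above
  colabfold_batch msas/ predictions/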

ColabSearch input db order is inverted

Description of the bug

The order of the colabfold_search parameters found in
https://github.com/nf-core/proteinfold/blob/dev/workflows/colabfold.nf#L132

https://github.com/nf-core/proteinfold/blob/dev/workflows/colabfold.nf#L141

is inverted compared to the definition of the process/module.

Command used and terminal output

Error executing process > 'NFCORE_PROTEINFOLD:COLABFOLD:MMSEQS_COLABFOLDSEARCH ([sequence:T1024_T1, fasta:https://raw.githubusercontent.com/nf-core/test-datasets/proteinfold/testdata/sequences/T1024.fasta])'

Caused by:
  Process `NFCORE_PROTEINFOLD:COLABFOLD:MMSEQS_COLABFOLDSEARCH ([sequence:T1024_T1, fasta:https://raw.githubusercontent.com/nf-core/test-datasets/proteinfold/testdata/sequences/T1024.fasta])` terminated with an error exit status (1)

Command executed:

  ln -r -s colabfold_envdb_202108/uniref30_* ./db
  ln -r -s uniref30_2103/colabfold_envdb* ./db
  /colabfold_batch/colabfold-conda/bin/colabfold_search --db-load-mode 0 --threads 1 T1024.fasta ./db "result/"
  cp result/0.a3m T1024_T1.a3m

  cat <<-END_VERSIONS > versions.yml
  "NFCORE_PROTEINFOLD:COLABFOLD:MMSEQS_COLABFOLDSEARCH":
      colabfold_search: 1.2.0
  END_VERSIONS

Command exit status:
  1

Relevant files

No response

System information

No response

Bug when manually specifying the path to AF2 dbs and params

Description of the bug

When the AF2 dbs and params have been downloaded manually and you specify the corresponding path with the --af2_db parameter, the pipeline crashes due to the creation of an additional layer of directories for each database.

Command used and terminal output

No response

Relevant files

No response

System information

No response

RuntimeError: HHblits failed. Unrecognized HMM file format in '468479486'

Description of the bug

Hi,
We are trying to run the nf-core proteinfold pipeline on EC2. While running AlphaFold2, we get an error message saying HHblits failed. It comes from the python file "/app/alphafold/alphafold/data/tools/hhblits.py", which is in the nf-core/proteinfold_alphafold2_standard container. We are using the dev version of the container (https://quay.io/repository/nf-core/proteinfold_alphafold2_standard?tab=tags).

The error details :
Command error:
File "/app/alphafold/alphafold/data/pipeline_multimer.py", line 264, in process
chain_features = self._process_single_chain(
File "/app/alphafold/alphafold/data/pipeline_multimer.py", line 212, in _process_single_chain
chain_features = self._monomer_data_pipeline.process(
File "/app/alphafold/alphafold/data/pipeline.py", line 215, in process
hhblits_bfd_uniref_result = run_msa_tool(
File "/app/alphafold/alphafold/data/pipeline.py", line 96, in run_msa_tool
result = msa_runner.query(input_fasta_path)[0]
File "/app/alphafold/alphafold/data/tools/hhblits.py", line 143, in query
raise RuntimeError('HHblits failed\nstdout:\n%s\n\nstderr:\n%s\n' % (
RuntimeError: HHblits failed
stdout:

stderr:

08:13:33.989 INFO: Searching 65983866 column state sequences.

08:13:35.413 INFO: Searching 29291635 column state sequences.

08:13:35.474 INFO: /tmp/tmppquf5va_.fasta is in A2M, A3M or FASTA format

08:13:35.474 INFO: Iteration 1

08:13:35.534 INFO: Prefiltering database

08:15:52.049 INFO: HMMs passed 1st prefilter (gapless profile-profile alignment) : 1212889

08:19:04.295 INFO: HMMs passed 1st prefilter (gapless profile-profile alignment) : 497133

08:19:05.660 INFO: HMMs passed 2nd prefilter (gapped profile-profile alignment) : 2000

08:19:05.660 INFO: HMMs passed 2nd prefilter and not found in previous iterations : 2000

08:19:05.660 INFO: Scoring 2000 HMMs using HMM-HMM Viterbi alignment

08:19:05.838 INFO: Alternative alignment: 0

08:19:05.840 ERROR: In /tmp/hh-suite/src/hhdatabase.cpp:443: getTemplateHMM:

08:19:05.840 ERROR: Unrecognized HMM file format in '468479486'.

08:19:05.840 ERROR: Context:
'DEFGEARMVCGGVSADDFSLSADHSSAGEAAASGGWTVVPGGGFGAGGEKALTTGDTEVH-----------------RETSLCVIASE-----SSASKS---------------------------------

08:19:05.840 ERROR: >SRR5208283_2706711

08:19:05.840 ERROR: --------------------------------------------------------------VELHRMPLPRAGLPLAARAGAQLFRHLLQPSGRTHFADLFPAGRYAALRCSPLDDLGAAGLVCGGIPPDRLALSSDHSRAGAAAASAGWAVVPGGMPGAGGEEELTTGTPRNTG----------------IHRA---------------------------------------------------'

Please help us resolve this issue. Thanks in advance

Command used and terminal output

No response

Relevant files

nextflow.log.log

System information

Nextflow version: 23.10.1
Version of nf-core/proteinfold: dev
Container engine: Docker

Make the test profile work using the stub run option

Description of feature

Since it seems rather difficult, if even possible, to create a suitable small data set to run the test profile, the idea would be to implement all the needed bits to make it possible to run the pipeline with the -profile test option using Nextflow's stub run feature. For this we will need to modify the pipeline accordingly, adding a stub section to any process that cannot be run due to the lack of real input data, and modify test.config to provide a suitable configuration for the stub run. Finally, the implementation should be able to run the pipeline in either of the two currently available modes (AF2 and colabfold).
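For reference, once the stub sections exist, a stub run of the test profile can be triggered from the command line (a sketch; -stub-run is the standard Nextflow flag):

  nextflow run nf-core/proteinfold \
      -profile test,docker \
      -stub-run \
      --outdir <OUTDIR>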

Populate multiqc report

Description of feature

We will need to work out what would be worth reporting on the multiqc report of the pipeline. We can list them below:

  • Prediction accuracy
  • ...

[colabfold] supporting multiple params directory when --mode is colabfold_local

Description of the bug

If the user has multiple params directories in the local database directory, the pipeline detects them and passes them all to the downstream processes. The COLABFOLD_BATCH process then detects multiple items in the input channel and creates e.g. params1 and params2 dir links, which may cause issues.

The issue may arise when ch_params is prepared by prepare_colabfold_dbs.

I am going to provide a PR which solves this issue.

Command used and terminal output

No response

Relevant files

No response

System information

No response

Update docs and README

Description of the bug

Update the files in the docs folder (including some use cases) and the README file

Command used and terminal output

No response

Relevant files

No response

System information

No response

Colabfold database files missing

Description of the bug

Hi,

I am trying to run the proteinfold pipeline in colabfold mode. I have the supporting db files in S3 and used the colabfold_db param in the nextflow.config file to point to the S3 URI. I downloaded the db files using the links given in conf/dbs.config, which are

colabfold_db_link       = 'http://wwwuser.gwdg.de/~compbiol/colabfold/colabfold_envdb_202108.tar.gz'
uniref30_colabfold_link = 'https://wwwuser.gwdg.de/~compbiol/colabfold/uniref30_2202.tar.gz'

I downloaded the above links and uploaded the untarred files to S3. But when we run the pipeline, I get an error saying
"FileNotFoundError: Database uniref30_2202_db does not exist"

While exploring this, we came across the folder and file structure given by nf-core (https://nf-co.re/proteinfold/dev/docs/usage) [screenshot of the expected directory structure].

My question is, where can we download all the files mentioned in the above link? And why does the link mentioned in the dbs.config file not fetch all of these files?
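For reference, the *_db files in that structure are not part of the downloads: they are generated from the untarred archives by the pipeline's indexing step. A sketch following the MMSEQS_TSV2EXPROFILEDB command shown earlier in this tracker:

  # Build the expanded profile DB that colabfold_search expects
  # from the untarred uniref30_2202 download
  mmseqs tsv2exprofiledb uniref30_2202 uniref30_2202_db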

Command used and terminal output

No response

Relevant files

No response

System information

No response

Add background to tubemap svg

Description of the bug

At the moment, the tubemap render on the nf-core website is difficult to read due to a lack of background colour:

[screenshot of the tubemap rendered without a background]

It would be good to add a white background to ensure that the pipeline looks great on nf-co.re

Command used and terminal output

No response

Relevant files

No response

System information

No response

Get rid of custom configuration

Description of feature

Right now we are using some custom config to run the pipeline, e.g. here.
It would be nice to strip these parts out of the pipeline and maybe create an institutional config for it.

colabfold - No module named 'openmm'

Description of the bug

Error with colabfold

Command used and terminal output

nextflow run proteinfold-dev --input sample_alpha.csv --outdir colabfold_test_multi --use_gpu true --mode colabfold --colabfold_server webserver --num_recycle 3 --colabfold_model_preset alphafold2_multimer_v3 -profile ifb_core

Input file 'sample_alpha.csv' contains two entries:
sequence,fasta
prot1,/shared/prot1.fasta
prot2,/shared/prot2.fasta
__________

Traceback (most recent call last):
    File "/localcolabfold/colabfold-conda/bin/colabfold_batch", line 8, in <module>
      sys.exit(main())
    File "/localcolabfold/colabfold-conda/lib/python3.9/site-packages/colabfold/batch.py", line 1771, in main
      run(
    File "/localcolabfold/colabfold-conda/lib/python3.9/site-packages/colabfold/batch.py", line 1449, in run
      results = predict_structure(
    File "/localcolabfold/colabfold-conda/lib/python3.9/site-packages/colabfold/batch.py", line 529, in predict_structure
      pdb_lines = relax_me(pdb_lines=unrelaxed_pdb_lines[key], use_gpu=use_gpu_relax)
    File "/localcolabfold/colabfold-conda/lib/python3.9/site-packages/colabfold/batch.py", line 300, in relax_me
      from alphafold.relax import relax
    File "/localcolabfold/colabfold-conda/lib/python3.9/site-packages/alphafold/relax/relax.py", line 18, in <module>
      from alphafold.relax import amber_minimize
    File "/localcolabfold/colabfold-conda/lib/python3.9/site-packages/alphafold/relax/amber_minimize.py", line 25, in <module>
      from alphafold.relax import cleanup
    File "/localcolabfold/colabfold-conda/lib/python3.9/site-packages/alphafold/relax/cleanup.py", line 23, in <module>
      from openmm import app
  ModuleNotFoundError: No module named 'openmm'

Relevant files

No response

System information

No response

Files missing for pdb_mmcif after download

Description of the bug

After fixing the download problem by using the --check-certificate=false parameter for aria2c, I receive the following error.

There seem to be many files located at af2_db/pdb_mmcif/mmcif_files/ though:

[cloudshell-user@ip-... ~]$ aws s3 ls s3://alphafold2-agc/af2_db/pdb_mmcif/mmcif_files/ | head
2022-08-30 06:33:42          0 
2022-08-30 06:34:10      83586 100d.cif
2022-08-30 06:34:10      96722 101d.cif
2022-08-30 06:34:11     176417 101m.cif
2022-08-30 06:34:11      93108 102d.cif
2022-08-30 06:34:11     174518 102l.cif
2022-08-30 06:34:11     175372 102m.cif
2022-08-30 06:34:12      94420 103d.cif
2022-08-30 06:34:12     172312 103l.cif
2022-08-30 06:34:12     171841 103m.cif

Can you suggest a fix for that?
Could it be that the download failed for some reason? How can I retrigger the download?

Command used and terminal output

Error executing process > 'NFCORE_PROTEINFOLD:ALPHAFOLD2:RUN_AF2 ([sequence:T1024_T1, fasta:https://raw.githubusercontent.com/nf-core/test-datasets/proteinfold/testdata/sequences/T1024.fasta])'

Caused by:
  Process `NFCORE_PROTEINFOLD:ALPHAFOLD2:RUN_AF2 ([sequence:T1024_T1, fasta:https://raw.githubusercontent.com/nf-core/test-datasets/proteinfold/testdata/sequences/T1024.fasta])` terminated with an error exit status (1)

Command executed:

  python3 /app/alphafold/run_alphafold.py         --fasta_paths=T1024.1.fasta         --max_template_date=2020-05-14         --model_preset=monomer --pdb70_database_path=af2_db/pdb70/pdb70          --db_preset=reduced_dbs --small_bfd_database_path=af2_db/small_bfd/bfd-first_non_consensus_sequences.fasta         --output_dir=$PWD         --data_dir=af2_db         --uniref90_database_path=af2_db/uniref90/uniref90.fasta         --mgnify_database_path=af2_db/mgnify/mgy_clusters_2018_12.fa         --template_mmcif_dir=af2_db/pdb_mmcif/mmcif_files         --obsolete_pdbs_path=af2_db/pdb_mmcif/obsolete.dat         --random_seed=53343         --use_gpu_relax=true
  
  cp "T1024.1"/ranked_0.pdb ./"T1024.1".alphafold.pdb

Command exit status:
  1

Command output:
  (empty)

Command wrapper:
  download failed: s3://alphafold2-agc/af2_db/pdb_mmcif/mmcif_files/1php.cif to af2_db/pdb_mmcif/mmcif_files/1php.cif An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.
  download failed: s3://alphafold2-agc/af2_db/pdb_mmcif/mmcif_files/1phq.cif to af2_db/pdb_mmcif/mmcif_files/1phq.cif An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.
  [... the identical NoSuchKey failure repeats for every subsequent mmCIF file in the log, from 1phr.cif through 1pj9.cif ...]

Work dir:
  s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/runs/6a/c4b47d5f9ce61eb60b6f27317d336f

Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`
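
Along the lines of that tip, here is a minimal debugging sketch, assuming the AWS CLI is installed and credentialed for the account that owns these buckets (the bucket name, object key, and work-dir path below are taken verbatim from the log above):

    # Check whether one of the "missing" mmCIF keys actually exists in the
    # source bucket; a 404/NoSuchKey response here confirms the mirrored
    # af2_db copy is incomplete rather than a transient network error.
    aws s3api head-object \
        --bucket alphafold2-agc \
        --key af2_db/pdb_mmcif/mmcif_files/1php.cif

    # List the failing task's work directory and stream its log to stdout.
    aws s3 ls s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/runs/6a/c4b47d5f9ce61eb60b6f27317d336f/
    aws s3 cp s3://agc-336191139269-us-east-1/project/NFCorePopular/userid/arlnocaj4nFMjG/context/bigMemCtx/nextflow-execution/runs/6a/c4b47d5f9ce61eb60b6f27317d336f/.command.log -

If the keys are indeed absent from the bucket, re-populating the `pdb_mmcif` directory in the mirrored database, or re-running the pipeline without `--alphafold2_db` so it downloads the databases itself, should resolve the NoSuchKey errors.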

Relevant files

No response

System information

No response
