Workflow to call structural and copy number variants in somatic whole genome data
This snakemake workflow takes `.bam` files, which were prepped according to GATK best practices, and calls CNVs and SVs. The workflow can process tumor samples paired with normals or be run as a tumor-only analysis.
- CNVkit is best used with a panel of normals (PoN), which should be generated according to the docs.
- CNVnator runs in tumor-only mode, according to the docs in the repo.
- Manta can be run in tumor-only or tumor/normal mode. Please refer to the docs.
- TIDDIT runs in tumor-only mode, as described in the repo.
To run this workflow, the following setup is required:
- Add all sample ids to `samples.tsv` in the column `sample`.
- Add sample type information, normal or tumor, to `units.tsv`.
- Use the `analysis_output` folder from wgs_std_viper as input.
- If a PoN was not created earlier, use the `analysis_output` folder from wgs_somatic_pon as input.
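The first two steps above can be sketched as follows. The sample ids are placeholders, and while the `sample` column of `samples.tsv` is stated in the docs, the `type` column name for `units.tsv` is an assumption here; check the workflow's schema for the real layout.

```shell
# Minimal samples.tsv: a "sample" column with one row per sample id.
printf 'sample\nsampleA_T\nsampleA_N\n' > samples.tsv

# units.tsv carries the tumor/normal label. The column name "type" is an
# assumption for illustration; verify it against the repo's schema.
printf 'sample\ttype\nsampleA_T\ttumor\nsampleA_N\tnormal\n' > units.tsv
```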
- You need a reference `.fasta` file representing the genome used for mapping. In addition, an index file is required.
- The required files for the human reference genome GRCh38 can be downloaded from Google Cloud. The download can be done manually in the browser or with `gsutil` on the command line:

```shell
gsutil cp gs://genomics-public-data/resources/broad/hg38/v0/Homo_sapiens_assembly38.fasta /path/to/download/dir/
```
- If those resources are not available for your reference, you may generate them yourself:

```shell
samtools faidx /path/to/reference.fasta
```
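`samtools faidx` writes the index next to its input as `reference.fasta.fai`, which is also where the workflow expects to find it. A quick sanity check, with a placeholder path:

```shell
# samtools faidx /path/to/reference.fasta produces /path/to/reference.fasta.fai.
# Confirm the index sits next to the reference (placeholder path shown):
ref=/path/to/reference.fasta
if [ ! -f "${ref}.fai" ]; then
    echo "Missing index: run 'samtools faidx ${ref}' first" >&2
fi
```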
- CNVkit requires a panel of normals (PoN), which should be supplied. If you do not have a PoN, you can simply leave the value empty (`""`) to link the workflow to the output from wgs_somatic_pon.
- This workflow is set up to filter the resulting `.vcf` files from CNVnator, Manta and TIDDIT. If this is undesired, one could simply use an empty `.bed` file for filtering. Otherwise, the SweGen database is a great resource, as it contains specific `.bed` files with normal variants for each of the three tools.
- Add the paths of the different files to the `config.yaml`. The index file should be in the same directory as the reference `.fasta`.
- Make sure that the docker container versions are correct.
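Putting the path requirements together, a `config.yaml` could look like the sketch below. Every key name here is an illustrative assumption; start from the `config.yaml` shipped with the repo and only adapt the paths.

```shell
# Write an illustrative config.yaml. NOTE: all key names below are assumptions
# for illustration; use the repo's own config.yaml as the authoritative template.
cat > config.yaml <<'EOF'
reference:
  fasta: /path/to/reference.fasta      # the .fai index must be in the same directory
cnvkit:
  pon: ""                              # empty string -> fall back to wgs_somatic_pon output
filter:
  cnvnator: /path/to/swegen_cnvnator.bed
  manta: /path/to/swegen_manta.bed
  tiddit: /path/to/swegen_tiddit.bed
EOF
```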
The workflow repository contains a small test dataset in `.tests/integration`, which can be run like so:

```shell
cd .tests/integration
snakemake -s ../../Snakefile -j1 --use-singularity
```
The workflow is designed for WGS data, meaning huge datasets which require a lot of compute power. For HPC clusters, it is recommended to use a cluster profile and run something like:

```shell
snakemake -s /path/to/Snakefile --profile my-awesome-profile
```
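As a sketch of such a profile, assuming a SLURM scheduler: a snakemake profile is a directory containing a `config.yaml` whose keys mirror snakemake's command-line flags. The `sbatch` resource flags below are assumptions to adapt to your cluster.

```shell
# Create a minimal snakemake profile directory. The sbatch flags are
# assumptions; adjust them for your scheduler and resource model.
mkdir -p my-awesome-profile
cat > my-awesome-profile/config.yaml <<'EOF'
jobs: 100
use-singularity: true
cluster: "sbatch --cpus-per-task={threads} --mem={resources.mem_mb}M --time=24:00:00"
EOF
```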