fq2bam error: Received signal: 11


Hi NVIDIA Team,

I’m receiving an error when trying to run the fq2bam tutorial sample on a Slurm cluster using a Singularity container. Here is my job script:

#SBATCH --job-name "parabricks_test"
#SBATCH -e /scott/parabricks_test/%x.e%j
#SBATCH -o /scott/parabricks_test/%x.o%j
#SBATCH -N 1 -c 12
#SBATCH --mem 170G
#SBATCH --gpus=1
#SBATCH --tmp=512G
#SBATCH -t 24:00:00

nvidia-smi

module load Singularity/3.7.0
module load python/3.7.6

singularity exec --cleanenv --nv \
    -S /localhd/$SLURM_JOBID:/mnt/scratch:512GB \
    -B $PWD:/workdir \
    -B $PWD:/outputdir \
    $singularity_cache/clara-parabricks_4_0_1-1.sif \
    pbrun fq2bam \
    --ref /workdir/parabricks_sample/Ref/Homo_sapiens_assembly38.fasta \
    --in-fq /workdir/parabricks_sample/Data/sample_1.fq.gz /workdir/parabricks_sample/Data/sample_2.fq.gz \
    --out-bam /outputdir/fq2bam_output.bam
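In case the mounts are relevant, they can be inspected from inside the same container before launching pbrun (a quick sketch reusing the mount arguments from my script above; the .write_test file name is arbitrary):

singularity exec --cleanenv --nv \
    -S /localhd/$SLURM_JOBID:/mnt/scratch:512GB \
    -B $PWD:/workdir \
    $singularity_cache/clara-parabricks_4_0_1-1.sif \
    bash -c 'df -h /mnt/scratch /workdir && touch /mnt/scratch/.write_test && echo "scratch is writable"'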

When I submit the job script above, I see the following output:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  On   | 00000000:00:06.0 Off |                    0 |
| N/A   28C    P0    24W / 250W |      0MiB / 32510MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+


[Parabricks Options Mesg]: Checking argument compatibility
[Parabricks Options Mesg]: Automatically generating ID prefix
[Parabricks Options Mesg]: Read group created for /workdir/parabricks_sample/Data/sample_1.fq.gz and
/workdir/parabricks_sample/Data/sample_2.fq.gz
[Parabricks Options Mesg]: @RG\tID:HK3TJBCX2.1\tLB:lib1\tPL:bar\tSM:sample\tPU:HK3TJBCX2.1
[PB Info 2023-Apr-18 00:21:44] ------------------------------------------------------------------------------
[PB Info 2023-Apr-18 00:21:44] ||                 Parabricks accelerated Genomics Pipeline                 ||
[PB Info 2023-Apr-18 00:21:44] ||                              Version 4.0.1-1                             ||
[PB Info 2023-Apr-18 00:21:44] ||                       GPU-BWA mem, Sorting Phase-I                       ||
[PB Info 2023-Apr-18 00:21:44] ------------------------------------------------------------------------------
[M::bwa_idx_load_from_disk] read 0 ALT contigs
[PB Info 2023-Apr-18 00:22:15] GPU-BWA mem
[PB Info 2023-Apr-18 00:22:15] ProgressMeter    Reads           Base Pairs Aligned
[PB Info 2023-Apr-18 00:23:04] 5043564          580000000
[PB Info 2023-Apr-18 00:23:33] 10087128         1160000000
[PB Info 2023-Apr-18 00:24:03] 15130692         1740000000
[PB Info 2023-Apr-18 00:24:31] 20174256         2320000000
[PB Info 2023-Apr-18 00:24:58] 25217820         2900000000
[PB Error 2023-Apr-18 00:25:10][-unknown-:0] Received signal: 11
For technical support visit https://docs.nvidia.com/clara/parabricks/4.0.0/Help.html, exiting.
For technical support visit https://docs.nvidia.com/clara/parabricks/4.0.0/Help.html
Exiting...
Please visit https://docs.nvidia.com/clara/#parabricks for detailed documentation

Could not run fq2bam
Exiting pbrun ...

Do you have any insight into what’s causing this error?
Thanks,
Scott

Running again with the following command gave me a little more information:

singularity exec --cleanenv --nv \
-S "$SCRATCH" \
-W "$SCRATCH" \
-B "$PWD":/workdir \
-B "$PWD":/outputdir \
$singularity_cache/clara-parabricks_4_0_0-1.sif \
pbrun fq2bam \
--ref /workdir/parabricks_sample/Ref/Homo_sapiens_assembly38.fasta \
--in-fq /workdir/parabricks_sample/Data/sample_1.fq.gz /workdir/parabricks_sample/Data/sample_2.fq.gz \
--out-bam /outputdir/fq2bam_output.bam

The output:

[Parabricks Options Mesg]: Checking argument compatibility
[Parabricks Options Mesg]: Automatically generating ID prefix
[Parabricks Options Mesg]: Read group created for /workdir/parabricks_sample/Data/sample_1.fq.gz and
/workdir/parabricks_sample/Data/sample_2.fq.gz
[Parabricks Options Mesg]: @RG\tID:HK3TJBCX2.1\tLB:lib1\tPL:bar\tSM:sample\tPU:HK3TJBCX2.1
[PB Info 2023-May-04 15:24:52] ------------------------------------------------------------------------------
[PB Info 2023-May-04 15:24:52] ||                 Parabricks accelerated Genomics Pipeline                 ||
[PB Info 2023-May-04 15:24:52] ||                              Version 4.0.0-1                             ||
[PB Info 2023-May-04 15:24:52] ||                       GPU-BWA mem, Sorting Phase-I                       ||
[PB Info 2023-May-04 15:24:52] ------------------------------------------------------------------------------
[M::bwa_idx_load_from_disk] read 0 ALT contigs
[PB Info 2023-May-04 15:24:59] GPU-BWA mem
[PB Info 2023-May-04 15:24:59] ProgressMeter	Reads		Base Pairs Aligned
[PB Info 2023-May-04 15:25:53] 5043564		580000000
[PB Info 2023-May-04 15:26:35] 10087128	1160000000
[PB Info 2023-May-04 15:27:13] 15130692	1740000000
[PB Info 2023-May-04 15:27:57] 20174256	2320000000
[PB Info 2023-May-04 15:28:41] 25217820	2900000000
[PB Info 2023-May-04 15:29:24] 30261384	3480000000
[PB Info 2023-May-04 15:30:05] 35304948	4060000000
[PB Info 2023-May-04 15:30:45] 40348512	4640000000
[PB Error 2023-May-04 15:30:50][-unknown-:0] Received signal: 11
For technical support visit https://docs.nvidia.com/clara/parabricks/4.0.0/Help.html, exiting.
[PB Warning 2023-May-04 15:30:50][ParaBricks/src/check_error.cu:41] cudaSafeCall() failed at ParaBricks/src/mem_chain_kernel.cu/3823: driver shutting down
[PB Error 2023-May-04 15:30:50][ParaBricks/src/check_error.cu:44] No GPUs active, shutting down due to previous error., exiting.
For technical support visit https://docs.nvidia.com/clara/parabricks/4.0.0/Help.html

I finally got the tutorial to run from the Singularity container. For anyone who happens to make the same mistake I did, this is the command that finally worked:

singularity exec --cleanenv --nv \
-B "$PWD":/workdir \
-B "$PWD":/outputdir \
--pwd /workdir \
$singularity_cache/clara-parabricks_4_0_1-1.sif \
pbrun fq2bam \
--ref /workdir/parabricks_sample/Ref/Homo_sapiens_assembly38.fasta \
--in-fq /workdir/parabricks_sample/Data/sample_1.fq.gz /workdir/parabricks_sample/Data/sample_2.fq.gz \
--out-bam /outputdir/fq2bam_output.bam

The --pwd /workdir flag turned out to be quite important. It wasn’t in the Docker command given in the tutorial (FQ2BAM Tutorial - NVIDIA Docs), but it was present in the tool documentation (fq2bam - NVIDIA Docs).
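My guess at why it matters: if I’m reading the docs right, pbrun writes its intermediate files under the current working directory unless --tmp-dir points somewhere else, so without --pwd the container starts in a directory that isn’t the writable /workdir bind. A quick way to confirm where the container starts and that it’s writable (just an inspection sketch; the .pwd_write_test file name is arbitrary):

singularity exec --cleanenv --nv \
    -B "$PWD":/workdir \
    --pwd /workdir \
    $singularity_cache/clara-parabricks_4_0_1-1.sif \
    bash -c 'pwd && touch .pwd_write_test && echo "working directory is writable" && rm .pwd_write_test'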

Now to have some fun checking out what Parabricks can do!

