Error in pbrun minimap2 with Parabricks 4.2.1-1 container

Hi,

I am running pbrun minimap2 on ONT reads with the following command:

docker run --gpus all --rm \
    --volume $(pwd):/workdir --volume $(pwd):/output_dir \
    nvcr.io/nvidia/clara/clara-parabricks:4.2.1-1 \
    pbrun minimap2 \
    --preset map-ont \
    --index /workdir/workdir/Reference_genomic.mmi \
    --ref /workdir/workdir/Reference_genomic.fna \
    --out-bam /workdir/output_dir/ONT_minimap2_alignment.bam \
    --in-fq /workdir/all_ONT_long_Reads.fastq.gz

I get the following output:

[PB Info 2024-Jan-18 10:13:15] ------------------------------------------------------------------------------
[PB Info 2024-Jan-18 10:13:15] || Parabricks accelerated Genomics Pipeline ||
[PB Info 2024-Jan-18 10:13:15] || Version 4.2.1-1.beta4 ||
[PB Info 2024-Jan-18 10:13:15] || GPU-PBBWA mem, Sorting Phase-I ||
[PB Info 2024-Jan-18 10:13:15] ------------------------------------------------------------------------------
[PB Info 2024-Jan-18 10:13:15] Reading reference file.
[PB Info 2024-Jan-18 10:13:15] -------------------------------------
[PB Info 2024-Jan-18 10:13:15] Elapsed-Minutes Processed-Reads
[PB Info 2024-Jan-18 10:13:15] -------------------------------------
[PB Info 2024-Jan-18 10:13:15] 0.0 0
[PB Error 2024-Jan-18 10:13:16][src/reader.cpp:122] sizeOfSeq 114270 exceeds MAX_READ_SIZE 100000, expected sizeOfSeq < MAX_READ_SIZE, exiting.
For technical support visit Clara Parabricks v4.2.0 - NVIDIA Docs
Exiting…
Please visit NVIDIA Clara - NVIDIA Docs for detailed documentation

Is it possible to raise this limit? I am guessing the error is triggered because a very long read was encountered in the read file.
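
In the meantime, would dropping the oversized reads before alignment be a reasonable workaround? A minimal sketch with seqkit (assuming seqkit is installed; the 99999 cutoff is my choice so the sizeOfSeq < MAX_READ_SIZE check passes, and the filtered filename is made up):

    # Keep only reads shorter than MAX_READ_SIZE (100000 bp)
    seqkit seq -M 99999 all_ONT_long_Reads.fastq.gz \
        -o all_ONT_long_Reads.filtered.fastq.gz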

Please advise.

I am having precisely the same issue with PacBio CLR (long) reads.

Given that minimap2 is designed specifically to handle long reads (the whole reason to use it over bwa-mem), it isn't great that it fails outright when a single read exceeds 100 kb. At a minimum it should be able to skip enormous reads (1 Mb+), and ideally this limit should be user-configurable, since the technologies will keep increasing read lengths over time.
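
To confirm the long-read hypothesis, here is a rough one-liner for counting oversized records (substitute your own FASTQ name):

    # Count FASTQ records whose sequence is >= 100000 bp
    # (the sequence is every 4th line, at offset 2)
    zcat all_ONT_long_Reads.fastq.gz \
        | awk 'NR % 4 == 2 && length($0) >= 100000 { n++ } END { print n+0 }'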

We have found the issue and will have it fixed for the v4.3 release.

Works perfectly in version 4.3.0-1.

Thanks for the fix!
