Issue with Workspace Size When Running pbrun fq2bamfast on Clara Parabricks 4.3.0

I am encountering an issue while running pbrun fq2bamfast using NVIDIA Clara Parabricks version 4.3.0-1. The error message I receive is: “Workspace not big enough, expected desiredSize <= cubWorkspaceSize.”

I executed the following commands in a Docker container (the first starts the container, the second runs the alignment inside it):

docker run --gpus "device=0" -it --volume /home/user_home/ylf/data:/input_data nvcr.io/nvidia/clara/clara-parabricks:4.3.0-1 /bin/bash

pbrun fq2bamfast --bwa-cpu-thread-pool 16 --ref GRCh38.d1.vd1.fa --in-fq SRR7890824.trimmo_1P SRR7890824.trimmo_2P --out-bam clara_SRR78924.bam --gpusort --gpuwrite --tmp-dir /input_data
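For reference, a quick way to confirm which GPUs are visible inside the container (plain nvidia-smi, nothing Parabricks-specific):

# List the visible GPUs with their total and free VRAM
nvidia-smi --query-gpu=index,name,memory.total,memory.free --format=csv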

Here are the details of my environment:

  • Server: Local single-node server
  • Memory: 500 GB
  • GPUs: 4x RTX A6000, each with 48 GB of VRAM
  • CPUs: 2x Intel(R) Xeon(R) Platinum 8373 (144 threads in total)
  • Storage (relevant partitions):
    (base) [ylf@a6000 ~]$ df -lh
    Filesystem            Size  Used Avail Use% Mounted on
    devtmpfs              252G     0  252G   0% /dev
    tmpfs                 252G     0  252G   0% /dev/shm
    tmpfs                 252G   20M  252G   1% /run
    tmpfs                 252G     0  252G   0% /sys/fs/cgroup
    /dev/mapper/rl-root    70G   54G   17G  78% /
    /dev/mapper/rl-home   856G   68G  789G   8% /home
    /dev/nvme0n1p2       1014M  888M  127M  88% /boot
    /dev/nvme0n1p1        599M  5.8M  594M   1% /boot/efi
    /dev/sda               22T  2.9T   18T  14% /home/user_home
    tmpfs                  51G   24K   51G   1% /run/user/42
    tmpfs                  51G  4.0K   51G   1% /run/user/1014

The FASTQ files, SRR7890824.trimmo_1P and SRR7890824.trimmo_2P, are each approximately 205 GB in size.

Run log:

[Parabricks Options Mesg]: Checking argument compatibility
[Parabricks Options Mesg]: Automatically generating ID prefix
[Parabricks Options Mesg]: Read group created for /input_data/SRR7890824.trimmo_1P and /input_data/SRR7890824.trimmo_2P
[Parabricks Options Mesg]: @RG\tID:SRR7890824.1.1\tLB:lib1\tPL:bar\tSM:sample\tPU:SRR7890824.1.1
[PB Info 2024-Mar-28 10:41:30] ------------------------------------------------------------------------------
[PB Info 2024-Mar-28 10:41:30] ||                 Parabricks accelerated Genomics Pipeline                 ||
[PB Info 2024-Mar-28 10:41:30] ||                              Version 4.3.0-1                             ||
[PB Info 2024-Mar-28 10:41:30] ||                      GPU-PBBWA mem, Sorting Phase-I                      ||
[PB Info 2024-Mar-28 10:41:30] ------------------------------------------------------------------------------
[PB Info 2024-Mar-28 10:41:30] Mode = pair-ended-gpu
[PB Info 2024-Mar-28 10:41:30] Running with 4 GPU(s), using 4 stream(s) per device with 16 worker threads per GPU
[PB Info 2024-Mar-28 10:41:40] #  0  0  0  0  0   0 pool:  0 0 bases/GPU/minute: 0.0
[PB Info 2024-Mar-28 10:41:50] # 16  0  3  0  0   0 pool: 15 1435561660 bases/GPU/minute: 2153342490.0
[PB Info 2024-Mar-28 10:42:00] #  7  0  5  0  0   0 pool: 23 3303803447 bases/GPU/minute: 2802362680.5
[PB Info 2024-Mar-28 10:42:10] # 26  0  6  0  0   0 pool: 10 5191614439 bases/GPU/minute: 2831716488.0
[PB Info 2024-Mar-28 10:42:20] # 18  0  6  0  0   0 pool: 18 7059782437 bases/GPU/minute: 2802251997.0
......
[PB Info 2024-Mar-28 10:50:50] #  5  0  4  0  0   0 pool: 43 50432955844 bases/GPU/minute: 1505101711.5
[PB Info 2024-Mar-28 10:51:00] #  0  0  0  0  0   0 pool: 71 51318219378 bases/GPU/minute: 1327895301.0
[PB Info 2024-Mar-28 10:51:10] #  2  0  6  0  0   0 pool: 45 51928108455 bases/GPU/minute: 914833615.5
[PB Info 2024-Mar-28 10:51:20] #  0  0  0  0  0   0 pool: 71 52823192686 bases/GPU/minute: 1342626346.5
[PB Info 2024-Mar-28 10:51:30] # 14  0  0  0  0   0 pool: 41 54023253713 bases/GPU/minute: 1800091540.5
[PB Info 2024-Mar-28 10:51:40] #  0  0  0  0  0   0 pool: 71 54918316026 bases/GPU/minute: 1342593469.5
[PB Info 2024-Mar-28 10:51:50] #  0  0  0  0  0   0 pool: 70 55478973487 bases/GPU/minute: 840986191.5
[PB Info 2024-Mar-28 10:52:00] #  4  0  3  1  0   0 pool: 43 55823232626 bases/GPU/minute: 516388708.5
[PB Info 2024-Mar-28 10:52:10] #  0  0  0  0  0   0 pool: 71 56423277992 bases/GPU/minute: 900068049.0
[PB Info 2024-Mar-28 10:52:20] # 34  0  6  0  0   0 pool: 11 57308539298 bases/GPU/minute: 1327891959.0
[PB Info 2024-Mar-28 10:52:30] #  0  0  0  0  0   0 pool: 71 58223339419 bases/GPU/minute: 1372200181.5
[PB Info 2024-Mar-28 10:52:40] #  0  0  0  0  0   0 pool: 71 58823383734 bases/GPU/minute: 900066472.5
[PB Info 2024-Mar-28 10:52:50] #  0  0  0  0  0   0 pool: 71 59423497320 bases/GPU/minute: 900170379.0
[PB Info 2024-Mar-28 10:53:00] #  5  0  4  0  0   0 pool: 43 60328704814 bases/GPU/minute: 1357811241.0
[PB Info 2024-Mar-28 10:53:10] #  0  0  0  0  0   0 pool: 71 61223873594 bases/GPU/minute: 1342753170.0
[PB Info 2024-Mar-28 10:53:20] #  0  0  0  0  0   0 pool: 69 62089491874 bases/GPU/minute: 1298427420.0
[PB Error 2024-Mar-28 10:53:26][src/internal/pb_fmindex.cu:1336] Workspace not big enough, expected desiredSize <= cubWorkspaceSize, exiting.
For technical support visit https://docs.nvidia.com/clara/index.html#parabricks
Exiting...

Could not run fq2bamfast
Exiting pbrun ...

Thank you in advance for your assistance!

Hello lifeng.yan2, thank you for the bug report.

We were able to reproduce the bug using NIH SRA accession SRR7890824. The issue has been fixed, and the update will be included in the next release of Parabricks.
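For anyone else trying to reproduce this, the reads can be fetched with sra-tools, e.g. (a sketch assuming sra-tools is installed; not necessarily the exact commands we used):

# Download the accession, then convert it to paired FASTQ files
prefetch SRR7890824
fasterq-dump --split-files SRR7890824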


Thanks!

Hi @dpuleri, was the update released? I'm running into the same error using the same version of Parabricks (v4.3.0-1).

Hello @jmccafferty, the update has not been released yet. Version v4.3.1 will be released in early summer. In the meantime, you can use fq2bam for the alignment (example below). Please let me know if there is anything else I can help with.
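For example, a minimal fq2bam invocation matching the run above (a sketch assuming the same container and mounts; only the core options are shown, since flags like --bwa-cpu-thread-pool are meant for fq2bamfast):

pbrun fq2bam --ref GRCh38.d1.vd1.fa --in-fq SRR7890824.trimmo_1P SRR7890824.trimmo_2P --out-bam clara_SRR78924.bam --tmp-dir /input_data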

Hello @jmccafferty and @lifeng.yan2, v4.3.1 has been released with a fix for the issue you reported.
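The updated container is available on NGC; for example (assuming the tag follows the same naming pattern as 4.3.0-1):

docker pull nvcr.io/nvidia/clara/clara-parabricks:4.3.1-1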

Thank you!