Could not run fq2bam as part of germline pipeline

Hi,

I am getting the error below when running pbrun germline. Sometimes it works (it runs to completion), and sometimes it throws the error shown below. This happens with both v3.5.0 and v3.7.0.

For v3.5.0:

Please contact Parabricks-Support@nvidia.com for any questions
There is a forum for Q&A as well at https://forums.developer.nvidia.com/c/healthcare/Parabricks/290
Exiting...

Could not run fq2bam as part of germline pipeline
Exiting pbrun ...

For v3.7.0:

[PB Info 2022-Mar-08 20:42:48] 41.8	 117.76 GB
[PB Info 2022-Mar-08 20:42:58] 42.8	 119.10 GB
Killed
For technical support visit https://docs.nvidia.com/clara/parabricks/3.7.0/index.html#how-to-get-help
Exiting...

Could not run fq2bam as part of germline pipeline
Exiting pbrun ...

GPU Info

$ nvidia-smi
Tue Mar  8 08:50:54 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01    Driver Version: 465.19.01    CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA Tesla T4     Off  | 00000000:00:0A.0 Off |                    0 |
| N/A   32C    P8    14W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA Tesla T4     Off  | 00000000:00:0B.0 Off |                    0 |
| N/A   31C    P8    15W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

The germline command and the full log (v3.5.0)

$ pbrun germline \
  --ref /tmp/workspace/parabricks_sample/Ref/Homo_sapiens_assembly38.fasta \
  --in-fq \
    /tmp/workspace/datasets/WGS-LIS-XXX_R1.fastq.gz \
    /tmp/workspace/datasets/WGS-LIS-XXX_R2.fastq.gz \
  --knownSites /tmp/workspace/parabricks_sample/Ref/Homo_sapiens_assembly38.known_indels.vcf.gz \
  --out-bam        /tmp/workspace/output/output.bam \
  --out-variants   /tmp/workspace/output/output.vcf \
  --out-recal-file /tmp/workspace/output/report.txt \
  --x3 \
  --tmp-dir=/tmp/workspace/output
Please visit https://docs.nvidia.com/clara/#parabricks for detailed documentation


[Parabricks Options Mesg]: Automatically generating ID prefix
[Parabricks Options Mesg]: Read group created for /tmp/workspace/datasets/WGS-LIS-XXX_R1.fastq.gz and
/tmp/workspace/datasets/WGS-LIS-XXX_R2.fastq.gz
[Parabricks Options Mesg]: @RG\tID:H3WFJDSXX.1\tLB:lib1\tPL:bar\tSM:sample\tPU:H3WFJDSXX.1

python3 /parabricks/run_pipeline.py fq2bam --ref /tmp/workspace/parabricks_sample/Ref/Homo_sapiens_assembly38.fasta --in-fq /tmp/workspace/datasets/WGS-LIS-XXX_R1.fastq.gz /tmp/workspace/datasets/WGS-LIS-XXX_R2.fastq.gz @RG\tID:H3WFJDSXX.1\tLB:lib1\tPL:bar\tSM:sample\tPU:H3WFJDSXX.1 --knownSites /tmp/workspace/parabricks_sample/Ref/Homo_sapiens_assembly38.known_indels.vcf.gz --out-bam /tmp/workspace/output/output.bam --out-recal-file /tmp/workspace/output/report.txt --memory-limit 354 --num-cpu-threads 0 --tmp-dir /tmp/workspace/output/HIJGGQID --num-gpus 2 --x3
Please visit https://docs.nvidia.com/clara/#parabricks for detailed documentation


[Parabricks Options Mesg]: Checking argument compatibility
[Parabricks Options Mesg]: Read group created for /tmp/workspace/datasets/WGS-LIS-XXX_R1.fastq.gz and
/tmp/workspace/datasets/WGS-LIS-XXX_R2.fastq.gz
[Parabricks Options Mesg]: @RG\tID:H3WFJDSXX.1\tLB:lib1\tPL:bar\tSM:sample\tPU:H3WFJDSXX.1
g 2 b 0 B 2 P 4 s 1 r 0 o 2 m 1 z 4 f 2 v 0 M 2 name /tmp/workspace/output/output.bam report /tmp/workspace/output/report.txt K /tmp/workspace/parabricks_sample/Ref/Homo_sapiens_assembly38.known_indels.vcf.gz
/usr/local/cuda/.pb/binaries//bin/bwa mem /tmp/workspace/parabricks_sample/Ref/Homo_sapiens_assembly38.fasta /tmp/workspace/datasets/WGS-LIS-XXX_R1.fastq.gz /tmp/workspace/datasets/WGS-LIS-XXX_R2.fastq.gz @RG\tID:H3WFJDSXX.1\tLB:lib1\tPL:bar\tSM:sample\tPU:H3WFJDSXX.1 -Z ./pbOpts.txt
------------------------------------------------------------------------------
||                 Parabricks accelerated Genomics Pipeline                 ||
||                              Version v3.5.0                              ||
||                       GPU-BWA mem, Sorting Phase-I                       ||
||                  Contact: Parabricks-Support@nvidia.com                  ||
------------------------------------------------------------------------------
[M::bwa_idx_load_from_disk] read 0 ALT contigs

GPU-BWA mem
ProgressMeter	Reads		Base Pairs Aligned
[08:52:08]	5033176		770000000
[08:53:10]	10066352	1520000000
[08:54:12]	15099528	2280000000
[08:55:15]	20132704	3050000000
[08:56:17]	25165880	3800000000
[08:57:19]	30199056	4550000000
[08:58:21]	35232232	5320000000
[08:59:24]	40265408	6070000000
[09:00:28]	45298584	6840000000
[09:01:31]	50331760	7600000000
[09:02:34]	55364936	8370000000
[09:03:37]	60398112	9130000000
[09:04:40]	65431288	9880000000
[09:05:42]	70464464	10640000000
[09:06:43]	75497640	11390000000
[09:07:46]	80530816	12160000000
[09:08:48]	85563992	12920000000
[09:09:51]	90597168	13680000000
[09:10:54]	95630344	14440000000
[09:11:56]	100663520	15190000000
[09:12:57]	105696696	15960000000
[09:13:58]	110729872	16720000000
[09:15:00]	115763048	17480000000
[09:16:01]	120796224	18240000000
[09:17:01]	125829400	19010000000
[09:18:02]	130862576	19760000000
[09:19:03]	135895752	20530000000
[09:20:06]	140928928	21280000000
[09:21:11]	145962104	22050000000
[09:22:11]	150995280	22790000000
[09:23:12]	156028456	23560000000
[09:24:15]	161061632	24320000000
[09:25:17]	166094808	25080000000
[09:26:18]	171127984	25830000000
[09:27:19]	176161160	26590000000
[09:28:23]	181194336	27360000000
[09:29:26]	186227512	28120000000
[09:30:28]	191260688	28880000000
[09:31:31]	196293864	29630000000
[09:32:34]	201327040	30400000000
[09:33:38]	206360216	31160000000
[09:34:40]	211393392	31920000000
[09:35:41]	216426568	32690000000
[09:36:45]	221459744	33440000000
[09:37:47]	226492920	34200000000
[09:38:48]	231526096	34960000000
[09:39:51]	236559272	35720000000
[09:40:53]	241592448	36480000000
[09:41:55]	246625624	37250000000
[09:42:57]	251658800	38000000000
[09:43:58]	256691976	38760000000
[09:45:00]	261725152	39520000000
[09:46:01]	266758328	40280000000
[09:47:03]	271791504	41040000000
[09:48:06]	276824680	41800000000
[09:49:07]	281857856	42560000000
[09:50:09]	286891032	43320000000
[09:51:11]	291924208	44080000000
[09:52:13]	296957384	44850000000
[09:53:14]	301990560	45600000000
[09:54:15]	307023736	46360000000
[09:55:17]	312056912	47110000000
[09:56:20]	317090088	47880000000
[09:57:21]	322123264	48640000000
[09:58:22]	327156440	49400000000
[09:59:24]	332189616	50160000000
[10:00:25]	337222792	50910000000
[10:01:26]	342255968	51680000000
[10:02:28]	347289144	52440000000
[10:03:31]	352322320	53200000000
[10:04:31]	357355496	53960000000
[10:05:31]	362388672	54730000000
[10:06:33]	367421848	55470000000
[10:07:36]	372455024	56220000000
[10:08:38]	377488200	57000000000
[10:09:41]	382521376	57750000000
[10:10:44]	387554552	58520000000
[10:11:46]	392587728	59270000000
[10:12:48]	397620904	60030000000
[10:13:49]	402654080	60800000000
[10:14:49]	407687256	61560000000
[10:15:51]	412720432	62320000000
[10:16:54]	417753608	63080000000
[10:17:55]	422786784	63840000000
[10:18:56]	427819960	64600000000
[10:19:57]	432853136	65350000000
[10:20:58]	437886312	66120000000
[10:21:59]	442919488	66880000000
[10:23:01]	447952664	67650000000
[10:24:02]	452985840	68410000000
[10:25:05]	458019016	69170000000
[10:26:05]	463052192	69920000000
[10:27:07]	468085368	70680000000
[10:28:09]	473118544	71450000000
[10:29:13]	478151720	72200000000
[10:30:16]	483184896	72970000000
[10:31:21]	488218072	73720000000
[10:32:22]	493251248	74480000000
[10:33:24]	498284424	75250000000
[10:34:26]	503317600	76000000000
[10:35:27]	508350776	76770000000
[10:36:29]	513383952	77520000000
[10:37:31]	518417128	78280000000
[10:38:33]	523450304	79030000000
[10:39:36]	528483480	79790000000
[10:40:37]	533516656	80560000000
[10:41:39]	538549832	81310000000
[10:42:41]	543583008	82080000000
[10:43:44]	548616184	82840000000
[10:44:47]	553649360	83600000000
[10:45:48]	558682536	84360000000
[10:46:52]	563715712	85120000000
[10:47:54]	568748888	85880000000
[10:48:58]	573782064	86640000000
[10:50:00]	578815240	87390000000
[10:51:03]	583848416	88160000000
[10:52:05]	588881592	88920000000
[10:53:05]	593914768	89680000000
[10:54:07]	598947944	90450000000
[10:55:08]	603981120	91200000000
[10:56:10]	609014296	91950000000
[10:57:12]	614047472	92720000000
[10:58:13]	619080648	93480000000
[10:59:15]	624113824	94230000000
[11:00:16]	629147000	94990000000
[11:01:16]	634180176	95750000000
[11:02:17]	639213352	96510000000
[11:03:19]	644246528	97270000000
[11:04:21]	649279704	98030000000
[11:05:23]	654312880	98800000000
[11:06:23]	659346056	99560000000
[11:07:25]	664379232	100330000000
[11:08:24]	669412408	101080000000
[11:09:24]	674445584	101840000000
[11:10:24]	679478760	102590000000
[11:11:26]	684511936	103360000000
[11:12:28]	689545112	104120000000
[11:13:31]	694578288	104880000000
[11:14:33]	699611464	105650000000
[11:15:35]	704644640	106400000000
[11:16:36]	709677816	107150000000
[11:17:38]	714710992	107920000000
[11:18:40]	719744168	108680000000
[11:19:42]	724777344	109440000000
[11:20:44]	729810520	110200000000
[11:21:45]	734843696	110960000000
[11:22:47]	739876872	111730000000
[11:23:50]	744910048	112480000000
[11:24:52]	749943224	113250000000
[11:25:53]	754976400	114000000000
[11:26:57]	760009576	114760000000
[11:27:59]	765042752	115510000000
[11:29:01]	770075928	116270000000
[11:30:02]	775109104	117030000000
[11:31:05]	780142280	117800000000
[11:32:07]	785175456	118560000000
[11:33:09]	790208632	119310000000
[11:34:10]	795241808	120080000000
[11:35:13]	800274984	120860000000
[11:36:15]	805308160	121600000000
[11:37:16]	810341336	122350000000
[11:38:17]	815374512	123110000000
[11:39:21]	820407688	123890000000
[11:40:22]	825440864	124640000000
[11:41:24]	830474040	125410000000
[11:42:25]	835507216	126160000000
[11:43:26]	840540392	126920000000
[11:44:27]	845573568	127680000000
[11:45:28]	850606744	128450000000
[11:46:29]	855639920	129200000000
[11:47:30]	860673096	129950000000
[11:48:31]	865706272	130710000000
[11:49:31]	870739448	131470000000
[11:50:32]	875772624	132230000000
[11:51:34]	880805800	132990000000
[11:52:36]	885838976	133760000000
[11:53:36]	890872152	134510000000
[11:54:37]	895905328	135270000000
[11:55:40]	900938504	136040000000
[11:56:43]	905971680	136800000000
[11:57:43]	911004856	137560000000
[11:58:45]	916038032	138320000000
[11:59:48]	921071208	139080000000
[12:00:52]	926104384	139830000000
[12:01:53]	931137560	140600000000

GPU-BWA Mem time: 11514.969953 seconds
GPU-BWA Mem is finished.

GPU Sorting, Marking Dups, BQSR
ProgressMeter	SAM Entries Completed

Total GPU-BWA Mem + Sorting + MarkingDups + BQSR Generation + BAM writing
Processing time: 11514.974674 seconds

[main] CMD: PARABRICKS mem -Z ./pbOpts.txt /tmp/workspace/parabricks_sample/Ref/Homo_sapiens_assembly38.fasta /tmp/workspace/datasets/WGS-LIS-XXX_R1.fastq.gz /tmp/workspace/datasets/WGS-LIS-XXX_R2.fastq.gz @RG\tID:H3WFJDSXX.1\tLB:lib1\tPL:bar\tSM:sample\tPU:H3WFJDSXX.1
[main] Real time: 11520.609 sec; CPU: 91787.964 sec
------------------------------------------------------------------------------
||        Program:                      GPU-BWA mem, Sorting Phase-I        ||
||        Version:                                            v3.5.0        ||
||        Start Time:                       Tue Mar  8 08:50:55 2022        ||
||        End Time:                         Tue Mar  8 12:02:56 2022        ||
||        Total Time:                           192 minutes 1 second        ||
------------------------------------------------------------------------------
/usr/local/cuda/.pb/binaries//bin/sort -sort_unmapped -ft 10 -gb 354
------------------------------------------------------------------------------
||                 Parabricks accelerated Genomics Pipeline                 ||
||                              Version v3.5.0                              ||
||                             Sorting Phase-II                             ||
||                  Contact: Parabricks-Support@nvidia.com                  ||
------------------------------------------------------------------------------
progressMeter - Percentage
[12:02:59]	0.0	 0.00 GB
[12:03:09]	0.0	 3.37 GB
[12:03:19]	0.0	 3.37 GB
[12:03:29]	3.0	 3.25 GB
[12:03:39]	3.6	 3.88 GB
[12:03:49]	5.8	 4.01 GB
[12:03:59]	5.9	 4.02 GB
[12:04:09]	8.2	 3.86 GB
[12:04:19]	8.8	 3.86 GB
[12:04:29]	10.4	 3.97 GB
[12:04:39]	11.7	 3.51 GB
[12:04:49]	13.4	 3.57 GB
[12:04:59]	14.7	 3.45 GB
[12:05:09]	16.5	 3.41 GB
[12:05:19]	17.8	 3.38 GB
[12:05:29]	19.4	 3.39 GB
[12:05:39]	20.8	 3.43 GB
[12:05:49]	22.5	 3.49 GB
[12:05:59]	23.7	 3.35 GB
[12:06:09]	26.2	 3.21 GB
[12:06:19]	27.2	 3.20 GB
[12:06:29]	29.6	 3.32 GB
[12:06:39]	31.8	 3.24 GB
[12:06:49]	32.9	 3.28 GB
[12:06:59]	35.6	 3.29 GB
[12:07:09]	37.8	 3.31 GB
[12:07:19]	38.9	 3.27 GB
[12:07:29]	41.6	 3.25 GB
[12:07:39]	43.4	 3.31 GB
[12:07:49]	44.6	 3.32 GB
[12:07:59]	46.9	 2.83 GB
[12:08:09]	49.0	 3.39 GB
[12:08:19]	50.7	 3.54 GB
[12:08:29]	52.5	 3.44 GB
[12:08:39]	54.5	 3.48 GB
[12:08:49]	56.2	 3.34 GB
[12:08:59]	57.9	 3.31 GB
[12:09:09]	60.2	 3.32 GB
[12:09:19]	61.8	 3.08 GB
[12:09:29]	63.9	 3.26 GB
[12:09:39]	66.8	 3.04 GB
[12:09:49]	68.3	 2.69 GB
[12:09:59]	70.8	 3.28 GB
[12:10:09]	73.2	 3.29 GB
[12:10:19]	74.2	 3.49 GB
[12:10:29]	76.6	 3.41 GB
[12:10:39]	78.4	 3.34 GB
[12:10:49]	80.1	 3.26 GB
[12:10:59]	82.3	 3.56 GB
[12:11:09]	84.0	 3.23 GB
[12:11:19]	86.2	 3.41 GB
[12:11:29]	88.3	 3.23 GB
[12:11:39]	90.0	 2.83 GB
[12:11:49]	94.1	 2.30 GB
[12:11:59]	99.8	 1.04 GB
Sorting and Marking: 550.220 seconds
------------------------------------------------------------------------------
||        Program:                                  Sorting Phase-II        ||
||        Version:                                            v3.5.0        ||
||        Start Time:                       Tue Mar  8 12:02:58 2022        ||
||        End Time:                         Tue Mar  8 12:12:09 2022        ||
||        Total Time:                           9 minutes 11 seconds        ||
------------------------------------------------------------------------------
/usr/local/cuda/.pb/binaries//bin/postsort /tmp/workspace/parabricks_sample/Ref/Homo_sapiens_assembly38.fasta -o /tmp/workspace/output/output.bam -sort_unmapped -ft 4 -wt 2 -zt 20 -bq 2 -gb 354 -a /tmp/workspace/output/report.txt /tmp/workspace/parabricks_sample/Ref/Homo_sapiens_assembly38.known_indels.vcf.gz
------------------------------------------------------------------------------
||                 Parabricks accelerated Genomics Pipeline                 ||
||                              Version v3.5.0                              ||
||                         Marking Duplicates, BQSR                         ||
||                  Contact: Parabricks-Support@nvidia.com                  ||
------------------------------------------------------------------------------
progressMeter -	Percentage
[12:12:20]	0.0	 7.75 GB
[12:12:30]	0.0	 14.43 GB
[12:12:40]	0.0	 23.13 GB
[12:12:50]	0.0	 26.68 GB
[12:13:00]	0.0	 33.42 GB
[12:13:10]	0.0	 41.93 GB
[12:13:20]	0.0	 46.70 GB
[12:13:30]	0.0	 53.01 GB
[12:13:40]	0.5	 57.22 GB
[12:13:50]	1.2	 60.33 GB
[12:14:00]	1.9	 64.30 GB
[12:14:10]	2.8	 67.17 GB
[12:14:20]	3.3	 71.06 GB
[12:14:30]	3.7	 75.51 GB
[12:14:40]	5.4	 76.76 GB
[12:14:50]	6.0	 81.04 GB
[12:15:00]	6.6	 83.75 GB
[12:15:10]	7.3	 87.56 GB
[12:15:20]	8.1	 91.70 GB
[12:15:30]	8.6	 96.45 GB
[12:15:40]	8.6	 102.03 GB
[12:15:50]	10.0	 102.34 GB
[12:16:00]	10.8	 105.44 GB
[12:16:10]	11.5	 109.42 GB
[12:16:20]	12.2	 113.35 GB
[12:16:30]	12.9	 116.60 GB
[12:16:40]	13.6	 120.90 GB
Please contact Parabricks-Support@nvidia.com for any questions
There is a forum for Q&A as well at https://forums.developer.nvidia.com/c/healthcare/Parabricks/290
Exiting...

Could not run fq2bam as part of germline pipeline
Exiting pbrun ...
CPU times: user 4min 23s, sys: 54.7 s, total: 5min 18s
Wall time: 3h 26min 1s

How can I solve this?

Hi @tj_tsai

Can you please let us know the tmp-dir disk size and the memory size on the server?
I would also recommend not using the --x3 option.

What is the tmp-dir disk size?

$ df -h
Filesystem Size Used Avail Use% Mounted on
overlay 2.0T 364G 1.6T 19% /

What is the memory size?

$ cat /proc/meminfo
MemTotal:       742678696 kB
MemFree:         5040516 kB
MemAvailable:   667160960 kB
Buffers:          211692 kB
Cached:         651309708 kB
SwapCached:            0 kB
Active:         219488536 kB
Inactive:       483490148 kB
Active(anon):   48777916 kB
Inactive(anon):  4238844 kB
Active(file):   170710620 kB
Inactive(file): 479251304 kB
Unevictable:       18592 kB
Mlocked:           18592 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:               940 kB
Writeback:             0 kB
AnonPages:      51138568 kB
Mapped:          7602100 kB
Shmem:           4263292 kB
KReclaimable:   18236480 kB
Slab:           28392896 kB
SReclaimable:   18236480 kB
SUnreclaim:     10156416 kB
KernelStack:       80080 kB
PageTables:       236456 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    371339348 kB
Committed_AS:   244735276 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      231388 kB
VmallocChunk:          0 kB
Percpu:          2650624 kB
HardwareCorrupted:     0 kB
AnonHugePages:    892928 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:    16928608 kB
DirectMap2M:    351121408 kB
DirectMap1G:    389021696 kB
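
For reference, both of the requested figures can be checked directly against the paths used in the run. A minimal sketch, assuming the same --tmp-dir path as in the command above:

$ df -h /tmp/workspace/output   # free space on the filesystem backing --tmp-dir
$ free -g                       # total and available system RAM, in GB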

I would also recommend not using the --x3 option.

The --x3 option was given to us earlier by NVIDIA team members for debugging purposes.

Hey @mdemouth,
Do you have any clues or tools for debugging the error case (the process gets killed while running germline)?
Or is there any other way to dump more information about the situation?
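
A bare "Killed" line with no stack trace usually means the kernel's OOM killer terminated the process. One way to confirm this is to check the kernel log around the time of the failure; a minimal sketch, assuming access to the node's kernel log (when running in a pod, these messages live on the node rather than inside the container):

$ dmesg -T | grep -i -E 'out of memory|oom-killer|killed process'
$ journalctl -k --since "2 hours ago" | grep -i oom

# in a second shell, watch system memory while the pipeline runs
$ watch -n 10 free -g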

Hi @tj_tsai ,

Is this a shared node on a cluster? Are there other things running on this system?

Hi @mdemouth,
Yes, I run it in a pod (on a node in a cluster).
There are other workloads running in other pods on the same node.

Hi @mdemouth,
If Parabricks is running on a shared node (with 128 GB of CPU RAM),
is it recommended to use the parameter --memory-limit 128 (unit: GB), or is there some other way?

Hi,

This is likely what is causing the issue.
We recommend reserving the full node for such a job to prevent this from happening.

--memory-limit tells Parabricks how much RAM is available for the job, but it does not guarantee that the system actually has that much RAM free on a shared node.

Best,
Myrieme
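
For reference, one way to see how much RAM a pod is actually allowed to use, as opposed to what the node reports, is to read the container's cgroup memory limit. A minimal sketch; the path depends on whether the node uses cgroup v1 or v2:

$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # cgroup v1
$ cat /sys/fs/cgroup/memory.max                     # cgroup v2
$ free -g                                           # reports the node's RAM, not the pod's limit

Passing a --memory-limit value comfortably below the cgroup figure is then a reasonable starting point.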

@mdemouth

About the --memory-limit experiments.

I have tried the --memory-limit parameter on an AWS EC2 instance of type g4dn.2xlarge (T4, 8 vCPU, 32 GB). If I run germline without --memory-limit (i.e. with the default), output.vcf is generated successfully. But if I pass --memory-limit 28, 26, or 20 to germline, the process gets the OOM signal (killed by the OOM killer) while running Program 3 (Marking Duplicates, BQSR). In fact, germline only works with values less than or equal to 16.

The strange thing is that this instance's memory is not shared with anything else, so why does it get OOM-killed with --memory-limit 28 on a VM with 32 GB of RAM (available: 30 GB)?

Here is the germline script.

# germline.sh
pbrun germline \
  --ref parabricks_sample/Ref/Homo_sapiens_assembly38.fasta \
  --in-fq \
    datasets/WGS-LIS-XXX_R1.fastq.gz \
    datasets/WGS-LIS-XXX_R2.fastq.gz \
  --knownSites parabricks_sample/Ref/Homo_sapiens_assembly38.known_indels.vcf.gz \
  --out-bam output.bam \
  --out-variants output.vcf \
  --out-recal-file report.txt \
  --memory-limit 28
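
To compare the value given to --memory-limit with what a run actually peaks at, one option is to wrap the script in GNU time. A minimal sketch, assuming the /usr/bin/time binary is installed; note that for a multi-process pipeline the reported figure reflects the largest single process rather than the sum:

$ /usr/bin/time -v bash germline.sh 2> time_report.txt
$ grep "Maximum resident set size" time_report.txt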
