Error - Some tactics do not have sufficient workspace memory to run

This is the exact error I am getting. For the workspace size (-w, in bytes), I have tried 8000, 3000, 4718592000, and 1000000000 -

$ sudo ./tlt-converter -k cXQ2bXFpNzN1bnQzNGhpZnR0b2ExNGs4dXI6ZWRiZGIyMzQtZmYyZS00ZmMwLTk4NTItOGZhMjMzZDc1OTM1 -d 3,720,1280 -o output_bbox/BiasAdd,output_cov/Sigmoid -i nchw -m 64 -t int8 -e ~/resnet18_detector.trt -c ~/calibration.bin ~/resnet18_detector.etlt -w 1000000000
[INFO] Reading Calibration Cache for calibrator: EntropyCalibration2
[INFO] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[INFO] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[INFO]
[INFO] --------------- Layers running on DLA:
[INFO]
[INFO] --------------- Layers running on GPU:
[INFO] conv1/convolution + activation_1/Relu, block_1a_conv_1/convolution + block_1a_relu_1/Relu, block_1a_conv_2/convolution, block_1a_conv_shortcut/convolution + add_1/add + block_1a_relu/Relu, block_1b_conv_1/convolution + block_1b_relu_1/Relu, block_1b_conv_2/convolution + add_2/add + block_1b_relu/Relu, block_2a_conv_1/convolution + block_2a_relu_1/Relu, block_2a_conv_2/convolution, block_2a_conv_shortcut/convolution + add_3/add + block_2a_relu/Relu, block_2b_conv_1/convolution + block_2b_relu_1/Relu, block_2b_conv_2/convolution + add_4/add + block_2b_relu/Relu, block_3a_conv_1/convolution + block_3a_relu_1/Relu, block_3a_conv_2/convolution, block_3a_conv_shortcut/convolution + add_5/add + block_3a_relu/Relu, block_3b_conv_1/convolution + block_3b_relu_1/Relu, block_3b_conv_2/convolution + add_6/add + block_3b_relu/Relu, block_4a_conv_1/convolution + block_4a_relu_1/Relu, block_4a_conv_2/convolution, block_4a_conv_shortcut/convolution + add_7/add + block_4a_relu/Relu, block_4b_conv_1/convolution + block_4b_relu_1/Relu, block_4b_conv_2/convolution + add_8/add + block_4b_relu/Relu, output_cov/convolution, output_cov/Sigmoid, output_bbox/convolution,
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
Killed
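
The bare "Killed" line (rather than a TensorRT error message) usually means the Linux OOM killer terminated tlt-converter because system RAM plus swap ran out during the engine build. One way to confirm this on the device (a minimal sketch; the exact kernel log wording varies by kernel version):

$ dmesg | grep -i -E "out of memory|killed process"
$ free -h   # check how much RAM and swap were available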

I referenced related forum posts, but for some reason I am still getting the error.

The device I am working with is a Jetson Xavier NX
JetPack 4.4
DeepStream 5.0
Swap file increased by 8 GB per this instruction (total of 12 GB) - Creating a Swap file (see the sketch below)
And I am trying to follow this guide - https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/text/deploying_to_deepstream.html
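
For reference, the swap setup from that "Creating a Swap file" instruction typically boils down to the following (a minimal sketch assuming an 8 GB file at /swapfile; the guide's path and size may differ, and the swap is not persistent across reboots without an /etc/fstab entry):

$ sudo fallocate -l 8G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ swapon --show   # verify the new swap device is active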

If I am missing something here, I would appreciate any pointers.

Thank you,
Jae

Solved by lowering the batch size.

Hi, where did you change the batch size?
Did you have to train the model all over again?

No need to train the model again.
Set “-m” (the maximum batch size passed to tlt-converter) to change it.