TensorRT Engine Creation Fails with “tensor volume exceeds 2147483648” on MobileNet SSD v2 in DeepStream 7.1 (Jetson Orin Nano)

Hello NVIDIA Developer Forum,

I’m attempting to deploy the TensorFlow ssd_mobilenet_v2_coco_2018_03_29 model on a Jetson Orin Nano 8GB using DeepStream 7.1 and TensorRT, but the engine build fails with a tensor-volume overflow. I’d appreciate any guidance or known-working variants.


1. Platform & Software Versions

  • Hardware: Jetson Orin Nano 8GB
  • OS: Ubuntu 22.04 (JetPack 6.2)
  • DeepStream: v7.1
  • TensorRT: 10.3
  • CUDA: 12.6
  • cuDNN: 9.3
  • ONNX opset: 13
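
As a quick sanity check on the Python bindings (the expected value in the comment just mirrors the list above, not verified output):

import tensorrt as trt

print("TensorRT:", trt.__version__)  # expect 10.3.x on JetPack 6.2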

2. Model Conversion

# Download from TF Model Zoo
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
tar -xzf ssd_mobilenet_v2_coco_2018_03_29.tar.gz

# Convert to ONNX (input name matches the saved_model's image_tensor:0 signature)
python3 -m tf2onnx.convert \
  --saved-model ssd_mobilenet_v2_coco_2018_03_29/saved_model \
  --output ssd_mobilenet_v2.onnx \
  --opset 13 \
  --inputs image_tensor:0[1,300,300,3] \
  --outputs detection_boxes,detection_scores,detection_classes,num_detections

Verified ONNX input:

name: "image_tensor:0"
shape: [1, 300, 300, 3]
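
For reference, I checked this with a short script (a sketch, assuming the onnx Python package is installed):

import onnx

model = onnx.load("ssd_mobilenet_v2.onnx")
onnx.checker.check_model(model)  # passes with no complaints
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # image_tensor:0 [1, 300, 300, 3]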

3. DeepStream Config Snippet

[property]
# The ONNX input is NHWC (1x300x300x3), so declare NHWC order with H;W;C dims
network-input-order=1
infer-dims=300;300;3
# output-blob-names is semicolon-separated in nvinfer config files
output-blob-names=detection_boxes;detection_scores;detection_classes;num_detections
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=/opt/nvidia/deepstream/deepstream-7.1/lib/libnvds_infercustomparser.so
# Alternatively tried nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so

4. Error Message

When running deepstream-app -c dstest1_pgie_config.txt, I get:

ITensor::getDimensions: Error Code 4: API Usage Error 
(Conv__6197_output: tensor volume exceeds 2147483648, dimensions are [1024,96,150,150])

The failing tensor [1024, 96, 150, 150] works out to 1024 × 96 × 150 × 150 = 2,211,840,000 elements, which exceeds TensorRT's 2^31 (2,147,483,648) element limit on tensor volume during parsing. The leading dimension of 1024 also looks wrong, since I exported with an explicit batch size of 1.
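
To take DeepStream out of the loop, the parse step should be reproducible with the TensorRT Python bindings alone (a minimal sketch; I'd expect it to surface the same Error Code 4, though I haven't confirmed the message is identical):

import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit batch is the only mode in TensorRT 10
parser = trt.OnnxParser(network, logger)

with open("ssd_mobilenet_v2.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))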


5. Questions / Requests

  1. Are there workarounds or config tweaks that prevent tensor-volume overflows in DeepStream/TensorRT?
  2. What model modifications (e.g., pruning, layer fusion, reshaping) would you recommend for MobileNet SSD v2/v3 on an 8 GB Jetson? (The diagnostic sketch after this list shows how I'm trying to locate the oversized tensor.)
  3. Are there known MobileNet SSD variants (or custom SSD models) confirmed to build and run cleanly with DeepStream 7.1 on the Orin Nano?
  4. Any reference config files or sample projects for a MobileNet SSD in DeepStream would be extremely helpful.
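
For question 2, this is the rough script I'm using to hunt for oversized intermediate tensors after ONNX shape inference (a sketch; the 2**31 limit comes from the error message above, and only fully static shapes are counted):

import onnx
from onnx import shape_inference

LIMIT = 2**31  # 2,147,483,648 elements, per the TensorRT error

model = shape_inference.infer_shapes(onnx.load("ssd_mobilenet_v2.onnx"))
for vi in model.graph.value_info:
    dims = [d.dim_value for d in vi.type.tensor_type.shape.dim]
    if dims and all(d > 0 for d in dims):  # skip dynamic/unknown dims
        volume = 1
        for d in dims:
            volume *= d
        if volume >= LIMIT:
            print(vi.name, dims, volume)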

Thank you in advance for your insights!


Tags: deepstream, tensorrt, onnx, jetson-orin, mobilenet-ssd