Deepstream-app: terminate called after throwing an instance of 'std::out_of_range'

I’m trying to build a DeepStream app around a custom model. The model is developed in TensorFlow 2.2 using the Keras API; I export it to ONNX and run it in DeepStream on a Jetson Nano.
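For reference, the export step looks roughly like this (a minimal sketch; the model path and variable names are illustrative, but keras2onnx 1.7.0 and target opset 12 match the model metadata in the log below):

    import keras2onnx
    from tensorflow import keras

    # Load the trained Keras model (path illustrative)
    model = keras.models.load_model("counter36.h5")

    # Convert with the same opset the log reports (12)
    onnx_model = keras2onnx.convert_keras(
        model, model.name, doc_string="Count all yer bees!", target_opset=12
    )
    keras2onnx.save_model(onnx_model, "counter36.onnx")

With JetPack 4.2 and DeepStream 4.0.2 the model failed to parse. With JetPack 4.4 and DeepStream 5.0.0 it fails with the following output.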

    *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

    Opening in BLOCKING MODE
    Opening in BLOCKING MODE
    ERROR: Deserialize engine failed because file path: /home/dlinano/Deep-Stream-ONNX/config/../counter36.onnx_b1_fp16.engine open error
    0:00:01.384407223  9808     0x1d828530 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1566> [UID = 1]: deserialize engine from file :/home/dlinano/Deep-Stream-ONNX/config/../counter36.onnx_b1_fp16.engine failed
    0:00:01.384492952  9808     0x1d828530 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1673> [UID = 1]: deserialize backend context from engine from file :/home/dlinano/Deep-Stream-ONNX/config/../counter36.onnx_b1_fp16.engine failed, try rebuild
    0:00:01.384525973  9808     0x1d828530 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
    ----------------------------------------------------------------
    Input filename:   /home/dlinano/Deep-Stream-ONNX/counter36.onnx
    ONNX IR version:  0.0.7
    Opset version:    12
    Producer name:    keras2onnx
    Producer version: 1.7.0
    Domain:           onnxmltools
    Model version:    0
    Doc string:       Count all yer bees!
    ----------------------------------------------------------------
    WARNING: [TRT]: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    terminate called after throwing an instance of 'std::out_of_range'
      what():  vector::_M_range_check: __n (which is 1) >= this->size() (which is 1)
    Aborted (core dumped)
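For reference, a minimal nvinfer config for this setup would look something like the following (key names from the DeepStream 5.0 nvinfer documentation; batch size and FP16 mode inferred from the generated engine name counter36.onnx_b1_fp16.engine):

    [property]
    # ONNX model to parse; TensorRT builds the engine from this on first run
    onnx-file=../counter36.onnx
    # Serialized engine TensorRT writes out and reads back (b1 = batch 1, fp16)
    model-engine-file=../counter36.onnx_b1_fp16.engine
    batch-size=1
    # 0=FP32, 1=INT8, 2=FP16
    network-mode=2
    gie-unique-id=1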

I don’t know if the INT64-to-INT32 cast is causing the problem. The original model is float32 end to end, so I don’t even understand where the INT64 weights come from.
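In case it helps narrow this down, here’s a quick way to list which tensors in the graph are actually stored as INT64 (a sketch using the onnx Python package):

    import onnx

    model = onnx.load("counter36.onnx")

    # Initializers (weights/constants) whose element type is INT64
    int64_inits = [
        init.name
        for init in model.graph.initializer
        if init.data_type == onnx.TensorProto.INT64
    ]
    print("INT64 initializers:", int64_inits)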

Thanks,
Evan.
