TensorRT out of memory in JetPack 4.3

I’m upgrading to JetPack 4.3, and as part of that I have to create a new TensorRT engine file for a custom Tiny YOLOv3 network. I use the CUDA, cuDNN and TensorRT versions from JetPack 4.3 and the trt-yolo-app from the NVIDIA-AI-IOT/deepstream_reference_apps GitHub repository (Samples for TensorRT/DeepStream for Tesla & Jetson), built from the latest commit before the app was removed (3a8957b2d985d7fc2498a0f070832eb145e809ca). I do not need DeepStream, just optimized inference with TensorRT in C++. I keep getting the error below even though I have tried batch size = 1 and various maximum workspace settings. Building an engine for the stock Tiny YOLOv3 model also fails. Any ideas?

ERROR: Internal error: could not find any implementation for node mm1_19, try increasing the workspace size with IBuilder::setMaxWorkspaceSize()
ERROR: ../builder/tacticOptimizer.cpp (1461) - OutOfMemory Error in computeCosts: 0
trt-yolo-app: /home/nvidia/src/deepstream_reference_apps/yolo/lib/yolo.cpp:460: void Yolo::createYOLOEngine(nvinfer1::DataType, Int8EntropyCalibrator*): Assertion `m_Engine != nullptr' failed.
Aborted (core dumped)
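
For reference, the workspace size is set on the builder in my build code roughly like this (a simplified sketch with illustrative names, not the exact trt-yolo-app code; "network" stands for the parsed Tiny YOLOv3 network):

#include <iostream>
#include "NvInfer.h"

// Minimal logger required by the TensorRT builder API.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
} gLogger;

// Sketch of the build step that fails.
nvinfer1::ICudaEngine* buildEngine(nvinfer1::INetworkDefinition& network) {
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    builder->setMaxBatchSize(1);               // also tried larger batch sizes
    builder->setMaxWorkspaceSize(1ULL << 30);  // tried values from 256 MB up to 1 GB
    return builder->buildCudaEngine(network);  // returns nullptr for my model
}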

Hi,

The trt-yolo-app is now integrated into the DeepStream sample.
Could you try this example instead:
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/

Thanks.

There is no executable in that directory. If I run make in /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo, it just creates a .so file. I also tried $ deepstream-app -c deepstream_app_config_yoloV3.txt in the directory you mentioned, but that fails with the following error:

** ERROR: <create_multi_source_bin:714>: Failed to create element 'src_bin_muxer'
** ERROR: <create_multi_source_bin:777>: create_multi_source_bin failed
** ERROR: <create_pipeline:1045>: create_pipeline failed
** ERROR: <main:632>: Failed to create pipeline
Quitting
App run failed

I cleaned the GStreamer cache and managed to build the engine via the deepstream-app. The question now is whether that engine will work directly with TensorRT, without DeepStream.

Update: The engine does not work with the trt-yolo-app. It fails with:

ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin LReLU_TRT version 1
ERROR: safeDeserializationUtils.cpp (259) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
ERROR: INVALID_STATE: std::exception
ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
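
As far as I can tell, LReLU_TRT is a plugin creator registered by the DeepStream custom library (the libnvdsinfer_custom_impl_Yolo.so built above), so deserializing this engine in standalone TensorRT code would presumably only work if that library is loaded first, so that the creator ends up in the plugin registry. Something along these lines (an untested sketch; the .so path is from my setup):

#include <dlfcn.h>
#include <fstream>
#include <iterator>
#include <vector>
#include "NvInfer.h"

// Untested sketch: load the DeepStream custom library so that its LReLU_TRT
// creator gets registered with the plugin registry, then deserialize as usual.
nvinfer1::ICudaEngine* loadEngine(const char* enginePath, nvinfer1::ILogger& logger) {
    dlopen("/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/"
           "nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so",
           RTLD_NOW | RTLD_GLOBAL);

    std::ifstream file(enginePath, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}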

Do you have an updated example of how to perform inference with a YOLOv3 engine in TensorRT without involving DeepStream?

Hi,

We recommend migrating from the trt-yolo-app to one of the other samples.
The trt-yolo-app was maintained against DeepStream 3.0 and an older TensorRT version, which is not compatible with JetPack 4.3.

If you don’t want to use DeepStream, there is another sample for the YOLO model in our TensorRT package:
/usr/src/tensorrt/samples/python/yolov3_onnx/

Thanks.

Thanks, but as mentioned earlier, I need a C++ interface to run the TensorRT YOLO engine.
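
To be concrete, what I am after is essentially the plain TensorRT C++ runtime flow, something like the sketch below (simplified; the binding names are guesses, Tiny YOLOv3 actually has two YOLO outputs, and preprocessing and output decoding are omitted):

#include <cstddef>
#include <cuda_runtime.h>
#include "NvInfer.h"

// Simplified sketch of the C++ inference path I need. Binding names are
// guesses; Tiny YOLOv3 has two YOLO output layers, only one is shown here.
void infer(nvinfer1::ICudaEngine& engine,
           const float* hostInput, size_t inputBytes,
           float* hostOutput, size_t outputBytes) {
    nvinfer1::IExecutionContext* context = engine.createExecutionContext();

    void* buffers[2];
    const int inIdx  = engine.getBindingIndex("data");
    const int outIdx = engine.getBindingIndex("yolo_17");
    cudaMalloc(&buffers[inIdx], inputBytes);
    cudaMalloc(&buffers[outIdx], outputBytes);

    cudaMemcpy(buffers[inIdx], hostInput, inputBytes, cudaMemcpyHostToDevice);
    context->execute(/*batchSize=*/1, buffers);
    cudaMemcpy(hostOutput, buffers[outIdx], outputBytes, cudaMemcpyDeviceToHost);

    cudaFree(buffers[inIdx]);
    cudaFree(buffers[outIdx]);
    context->destroy();
}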

Any news on this topic? I use the trt-yolo-app as the base for my own Tiny YOLOv3 inference on images from a CSI camera captured via libargus (GStreamer is not flexible enough for our purposes). What should I use now?

Hi,

Please use our DeepStream SDK.

The trt-yolo-app is now integrated into our DeepStream package, and you can find it here:
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/

Thanks.