Trying to convert SSD .etlt file into an engine file using the DeepStream app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson TX2
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.1.3
• Issue Type: Question

Hello, I am trying to convert my exported SSD .etlt file into an engine file using DeepStream. I know that before doing so, I have to upgrade my CMake version, build TensorRT OSS, and replace libnvinfer_plugin.so* with the newly generated library. However, after doing so, I encountered this error:

0:00:00.323824735 5204 0x17712830 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: Parameter check failed at: …/builder/Network.cpp::addInput::992, condition: isValidDims(dims, hasImplicitBatchDimension())
ERROR: [TRT]: UFFParser: Failed to parseInput for node Input
ERROR: [TRT]: UffParser: Parser error: Input: Failed to parse node - Invalid Tensor found at node Input
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.187895359 5204 0x17712830 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

I am not sure what this error means and would appreciate your help. Sorry for the disturbance, as I am very new to this.

To be exact, what I did was follow the instructions in this link exactly: deepstream_tao_apps/TRT-OSS/Jetson/TRT7.1 at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub, in order to get the BatchTilePlugin.
Then I replaced the old libnvinfer_plugin.so.7.1.3 with the newly generated libnvinfer_plugin.so.7.1.3 file, which resulted in this error; the replacement commands I ran are sketched below.
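
For reference, this is a minimal sketch of that replacement step, assuming the stock library lives under /usr/lib/aarch64-linux-gnu/ (the default JetPack location) and the rebuilt plugin is in the TRT-OSS build output directory; the exact paths depend on your setup:

# Back up the stock plugin library first (paths are illustrative)
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 ${HOME}/libnvinfer_plugin.so.7.1.3.bak
# Overwrite it with the library built from TensorRT OSS
sudo cp build/out/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
# Refresh the linker cache so the new library is picked up
sudo ldconfig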

Hi,

Are you using the configuration file below?
pgie_ssd_tao_config.txt

Please confirm that the variable below is set correctly.

[property]
...
uff-input-blob-name=Input
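
More generally, the input-related keys must match what the model was exported with. A minimal sketch of those keys with illustrative values (your model path, key, and input dimensions will differ) could look like:

[property]
# Illustrative values only; they must match your TAO/TLT export
tlt-encoded-model=../../models/ssd/ssd_model.etlt
tlt-model-key=nvidia_tlt
uff-input-dims=3;544;960;0
uff-input-blob-name=Input
output-blob-names=NMS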

Thanks.

Yes, I did use the pgie_ssd_tao_config.txt file, but it still resulted in that error.

After trying a few more times, I did solve that error; however, I am still unable to get an inference output. Here is the output that I received:
Trying to create engine from model files
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
#assertion/home/sapphire/TensorRT/plugin/nmsPlugin/nmsPlugin.cpp,246
Aborted (core dumped)

Here is my config file: pgie_ssd_tlt_config.txt (1.9 KB). Really sorry to trouble you.

Hi,

Thanks for sharing the configuration file.

Usually this kind of NMS error is caused by an incorrect configuration.
It seems that you are setting num-detected-classes to 1.
Do you really only have one output class?

Based on the assertion at nmsPlugin.cpp line 246 in your log, the class count does not match the dimensions the NMS plugin expects. A sketch of the relevant key is below.
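
For example, a minimal sketch assuming a hypothetical model trained on three classes (the value must agree with your training spec and label file):

[property]
...
# Hypothetical class count; must match the exported model
num-detected-classes=3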

Thanks.

Sorry for the late reply; I have found out what the error was. Sorry for troubling you, and thank you :)
