Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson TX2
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or which sample application — and the function description.)
Hello, I am trying to convert my exported SSD .etlt file into an engine file using DeepStream. I understand that before doing so I have to upgrade my CMake version, build the TensorRT OSS plugins, and replace libnvinfer_plugin.so* with the newly generated library. However, after doing so, I encountered an error:
0:00:00.323824735 5204 0x17712830 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: Parameter check failed at: …/builder/Network.cpp::addInput::992, condition: isValidDims(dims, hasImplicitBatchDimension())
ERROR: [TRT]: UFFParser: Failed to parseInput for node Input
ERROR: [TRT]: UffParser: Parser error: Input: Failed to parse node - Invalid Tensor found at node Input
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.187895359 5204 0x17712830 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
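For context, the TensorRT OSS rebuild and plugin-replacement steps I performed were roughly the following. This is only a sketch of my own setup: the CMake version, the TensorRT OSS branch, the GPU_ARCHS value (SM 6.2 for the TX2), and the library paths are assumptions that may need adjusting for other boards or TensorRT versions.

```shell
# 1. Upgrade CMake (TensorRT OSS needs a newer CMake than JetPack ships).
#    The exact version here (3.19.4) is just the one I used.
wget https://github.com/Kitware/CMake/releases/download/v3.19.4/cmake-3.19.4.tar.gz
tar xzf cmake-3.19.4.tar.gz
cd cmake-3.19.4 && ./bootstrap && make -j4 && sudo make install && cd ..

# 2. Clone TensorRT OSS; check out the branch/tag matching your TensorRT
#    version (7.1 in my case -- verify this against your installation).
git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive

# 3. Build only the plugin library (GPU_ARCHS=62 targets the Jetson TX2).
mkdir -p build && cd build
cmake .. -DGPU_ARCHS="62" \
         -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu \
         -DCMAKE_C_COMPILER=/usr/bin/gcc \
         -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j4

# 4. Back up the stock plugin, then replace it with the rebuilt one.
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 \
        /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3.bak
sudo cp out/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/
sudo ldconfig
```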
I am not sure what this error means and would appreciate your help. Sorry for the disturbance, as I am very new to this.
After trying a few times, I did manage to resolve that error; however, I am still unable to receive an inference output. Here is the output I received:
Trying to create engine from model files
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors. #assertion/home/sapphire/TensorRT/plugin/nmsPlugin/nmsPlugin.cpp,246
Aborted (core dumped)
Here is my config file: pgie_ssd_tlt_config.txt (1.9 KB). Really sorry to trouble you.
Usually this NMS error is caused by an incorrect configuration.
It seems that you have set num-detected-classes to 1.
Does your model really output only one class?
Based on the source code below, the dimensions do not match correctly.
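For an SSD model exported from TLT, the class count and the NMS output bindings in the nvinfer config need to line up with the trained model. A sketch of the relevant section follows; the key names are standard Gst-nvinfer properties, but the class count, input dims, model path, and key are placeholders you must replace with your own values, and the parser function/library names assume the custom parser shipped with the deepstream_tlt_apps sample:

```
[property]
# SSD counts the background as a class: num-detected-classes should be
# (your trained classes + 1), not 1, unless you truly trained one class.
num-detected-classes=3

tlt-encoded-model=./ssd_model.etlt      # placeholder path
tlt-model-key=<your_tlt_key>            # placeholder key

uff-input-blob-name=Input
# C;H;W;input-order (0 = NCHW); must match the dims used at export time
uff-input-dims=3;300;300;0
output-blob-names=NMS

# Custom SSD output parser from the deepstream_tlt_apps sample
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=./nvdsinfer_customparser_tlt/libnvds_infercustomparser_tlt.so
```

If the configured class count disagrees with the number of classes the NMS plugin was built for, the assertion in nmsPlugin.cpp can fire at runtime, as seen in the log above.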