•Hardware Platform (Jetson / GPU): TITAN RTX (GPU)
•DeepStream Version: 5.1
•TensorRT Version: 7.2.2
•NVIDIA GPU Driver Version: 460.56
•Issue Type: Plugin error
The following is the workflow of my project.
I’m using TLT 3.0 for the object detection application; the model is YOLOv4.
Then I pulled the deepstream-5.1-triton Docker image.
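The image was pulled from NGC roughly like this (the exact tag is from memory and may differ):

```
docker pull nvcr.io/nvidia/deepstream:5.1-21.02-triton
```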
For deployment, I followed the deployment section for TLT models in DeepStream (sketches of each step follow this list):
1) Installation of TensorRT OSS for x86.
2) TLT converter installation.
3) Modifying the config file.
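For step 1, the TensorRT OSS plugin library was built roughly like this (a minimal sketch; the branch and library path are from memory and may need adjusting for TensorRT 7.2.2):

```
git clone -b release/7.2 https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
# GPU_ARCHS must match the target GPU (75 = Turing, i.e. TITAN RTX).
cmake .. -DGPU_ARCHS=75 -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu
make nvinfer_plugin -j$(nproc)
# The resulting libnvinfer_plugin.so replaces the stock library.
```

For step 2, the engine was generated with tlt-converter roughly as follows (a sketch; the key, input dimensions, and file names are placeholders, not my exact values):

```
# -k: TLT encoding key; -p: optimization profile for the dynamic Input tensor;
# -t: precision; -e: output engine path; positional arg: the exported .etlt model.
./tlt-converter -k <TLT_KEY> \
    -p Input,1x3x544x960,8x3x544x960,16x3x544x960 \
    -t fp16 \
    -e yolov4_resnet18.engine \
    yolov4_resnet18.etlt
```

For step 3, the relevant [property] entries of dstest3_pgie_config.txt look roughly like this (a sketch with placeholder paths, key, and class count; the parsing error below is raised while reading this group):

```
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
# Placeholder paths -- the real config points at the actual files.
labelfile-path=labels.txt
tlt-encoded-model=yolov4_resnet18.etlt
tlt-model-key=<TLT_KEY>
model-engine-file=yolov4_resnet18.engine
infer-dims=3;544;960
uff-input-blob-name=Input
batch-size=1
network-mode=2
num-detected-classes=4
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=libnvds_infercustomparser_tlt.so
```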
But when I try to run the Python apps, I get an error related to the nvinfer plugin.
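For context, the app is launched like the standard deepstream_test_3.py sample, with my RTSP source as the URI argument:

```
python3 deepstream_test_3.py rtsp://admin:Mukesh_M@192.168.0.132:554/Streaming/Channels/301
```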
Below is the error that I get:

```
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating tiler
Creating nvvidconv
Creating nvosd
Creating EGLSink
Atleast one of the sources is live
Error: Could not parse model engine file path
Failed to parse group property
** ERROR: <gst_nvinfer_parse_config_file:1242>: failed
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
1 : rtsp://admin:Mukesh_M@192.168.0.132:554/Streaming/Channels/301
Starting pipeline
0:00:00.324415772 736 0x1fa08d0 WARN nvinfer gstnvinfer.cpp:766:gst_nvinfer_start: error: Configuration file parsing failed
0:00:00.324442800 736 0x1fa08d0 WARN nvinfer gstnvinfer.cpp:766:gst_nvinfer_start: error: Config file path: dstest3_pgie_config.txt
Error: gst-library-error-quark: Configuration file parsing failed (5): gstnvinfer.cpp(766): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest3_pgie_config.txt
Exiting app
```
I have checked both the model path and the config file path, and they are correct.
What could be a possible way to resolve this error?
**Note:** Following the same procedure (with small changes such as GPU_ARCH), the same setup works well on my PC (Quadro P1000) and on a Jetson TX2.