Can DeepStream run inference on a single image?

Hi all,

I’ve been going around in circles and would really appreciate some guidance. I would like to deploy a YOLOv3 model on a TX2. I have a .trt serialized engine, but I am struggling to load it in C++ and run inference. I came across DeepStream, but it seems to be video-only. Is there a way to use DeepStream to run inference on a single image from a .trt engine file?

Additionally, if there are other methods to run inference on a .trt file, I’d really appreciate it if you could direct me to them.
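For what it’s worth, outside of DeepStream you can load a serialized engine directly with the TensorRT C++ API. Below is a minimal sketch against the TensorRT 6 API; the engine filename is a placeholder, and it assumes FP32 bindings, so adjust buffer sizes and pre/post-processing for your YOLOv3 model:

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main() {
    Logger logger;

    // Read the serialized engine from disk (path is a placeholder).
    std::ifstream file("yolov3.trt", std::ios::binary | std::ios::ate);
    if (!file) { std::cerr << "engine file not found\n"; return 1; }
    size_t size = file.tellg();
    file.seekg(0);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    // Deserialize the engine and create an execution context.
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), size, nullptr);
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // Allocate a device buffer per binding (assumes FP32 tensors).
    std::vector<void*> bindings(engine->getNbBindings());
    for (int i = 0; i < engine->getNbBindings(); ++i) {
        nvinfer1::Dims d = engine->getBindingDimensions(i);
        size_t vol = 1;
        for (int j = 0; j < d.nbDims; ++j) vol *= d.d[j];
        cudaMalloc(&bindings[i], vol * sizeof(float));
    }

    // Copy your preprocessed image into the input binding with cudaMemcpy,
    // then run synchronous inference on a batch of one:
    context->execute(/*batchSize=*/1, bindings.data());
    // ...copy output bindings back with cudaMemcpy and post-process.

    for (void* b : bindings) cudaFree(b);
    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```

Compile with `-lnvinfer -lcudart`. This is the same engine-loading path DeepStream uses internally, so it works with the same .trt file.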


• Hardware Platform (Jetson / GPU): Jetson TX2
• DeepStream Version: 4 (the one that came with JetPack 4.3)
• JetPack Version (valid for Jetson only): 4.3
• TensorRT Version: 6.0.1


Please find an example image in this folder:




Thanks for the suggestion. As I am using DeepStream 4, I used the following:


However, I keep running into the following error:

One element could not be created. Exiting.

The command I am using is:

./deepstream-image-decode-app ~/my_image.jpg


Would you mind updating the path to an absolute path and trying again?

I did try that but I get the same error. Is it expecting an image of a specific size?


We are checking this issue with our internal team.
Will update more information later.



Have you installed the dependencies first?

$ sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev libgstrtspserver-1.0-dev libx11-dev

Based on the log, it looks like a GStreamer element could not be created.
Please make sure you have all the requirements installed first.
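You can also ask GStreamer directly which element is failing. The element names below are an assumption based on the image-decode sample pipeline on Jetson; adjust them to whatever your pipeline actually uses:

```shell
# Print plugin details; a "No such element" message identifies the culprit.
gst-inspect-1.0 jpegparse
gst-inspect-1.0 nvv4l2decoder
gst-inspect-1.0 nvinfer

# If elements are unexpectedly missing, clear the registry cache and retry:
rm -rf ~/.cache/gstreamer-1.0
```

Running the app with `GST_DEBUG=3` set in the environment will also print which element creation fails.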



We have confirmed that this sample can run correctly on DeepStream v4.0 and v5.0.

$ cd /opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-image-decode-test/
$ make
$ ./deepstream-image-decode-app /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.mjpeg

Please let us know if you still run into issues.