Unable to do inference on jpg files

• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version - 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) - 515.65.01
• Issue Type( questions, new requirements, bugs) - questions

I’m trying to run inference on multiple jpg files; the code is attached for reference. I don’t see any errors in the debug output, though there are warnings, and the pipeline is not running. The logs are attached as well, along with sample data in the zip. I used the sample app sample_apps/deepstream_image_decode_test as a reference for running inference on a single image.
Please let me know what the issue with the pipeline is.
00.zip (67.5 KB)

generate.py (8.8 KB)
run.txt (9.2 KB)
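
For context, here is a minimal sketch of the kind of single-image pipeline the sample app builds. This is not the attached generate.py; the file name, resolution, and nvinfer config path are placeholders, and the mjpeg=1 property on nvv4l2decoder is assumed for dGPU as in the sample app.

```python
#!/usr/bin/env python3
# Sketch only: a single-JPEG inference pipeline modeled on the
# deepstream-image-decode-test sample. File name, resolution and the
# nvinfer config path are placeholders, not values from generate.py.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=dstest_image_decode_pgie_config.txt ! "
    "fakesink sync=0 "
    # jpegparse + nvv4l2decoder decode path; mjpeg=1 is assumed for dGPU.
    "filesrc location=sample.jpg ! jpegparse ! nvv4l2decoder mjpeg=1 ! m.sink_0"
)

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()

def on_message(bus, msg):
    # Stop on EOS or error and print any error details.
    if msg.type == Gst.MessageType.ERROR:
        err, dbg = msg.parse_error()
        print("ERROR:", err, dbg)
        loop.quit()
    elif msg.type == Gst.MessageType.EOS:
        loop.quit()

bus.connect("message", on_message)

pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```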

Any lead on this? Thanks.

1. Did the deepstream_image_decode_test.c run well in your env?
2. Could you try setting the sink to fakesink, or refer to the link below to set the sink to filesink?
https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/30
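
In case it helps, a small sketch of the fakesink swap (assuming the script creates its sink with Gst.ElementFactory.make; the element name "sink" is just a placeholder here):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Instead of a rendering sink such as nveglglessink, create a fakesink so the
# display path is taken out of the picture while debugging.
sink = Gst.ElementFactory.make("fakesink", "sink")
sink.set_property("sync", False)

# The rest of the pipeline (streammux -> nvinfer -> ... -> sink) is linked as
# before; only the sink element changes.
```

If the same error still shows up with fakesink, the sink branch can likely be ruled out and the problem is upstream (source, decode, or inference).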

  1. Just checked; deepstream_image_decode_test.c is not running in my env either. It throws the exact same error.
  2. Tried both fakesink and filesink; still the same error.

Could it be because of the jpg file that I pass?

Yes, it’s related to this JPEG picture. The stock GStreamer code cannot parse your JPEG file, so you could open a topic on the GStreamer forum: https://gstreamer.freedesktop.org/
For now, you can try changing the decode plugin to nvjpegdec instead of jpegparse + nvv4l2decoder.
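
For example, something along these lines (a sketch only; it assumes nvvideoconvert plus a capsfilter is needed to place the decoded frame in NVMM memory ahead of nvstreammux, and the file and config paths are placeholders):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Same structure as before, but the jpegparse + nvv4l2decoder pair is replaced
# by nvjpegdec. nvvideoconvert and the capsfilter are assumed here to make
# sure the frame lands in NVMM memory before nvstreammux.
pipeline = Gst.parse_launch(
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=dstest_image_decode_pgie_config.txt ! "
    "fakesink sync=0 "
    "filesrc location=sample.jpg ! nvjpegdec ! nvvideoconvert ! "
    "video/x-raw(memory:NVMM),format=NV12 ! m.sink_0"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until the pipeline finishes or fails, then tear it down.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```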

Thanks! The same pipeline works with a proper .jpeg file.
