Confused about which image types DeepStream supports

I have a .jpeg image and a .jpg image. I can run inference on the .jpeg image, but not on the .jpg image. As I understand it, .jpg and .jpeg are the same format. The only difference between my images is size; the .jpg is larger.

Which kinds of images does DeepStream support? Is there a restriction on image size? Does DeepStream support .png images?
Thanks.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

I am using JetPack 5.1 and DeepStream 6.2 on a Jetson Xavier NX 8 GB, with TensorRT 8.5.2.2.

What is the whole media pipeline? How did you decode the jpeg/jpg? Could you share the failing command and its log?
If you are using nvjpegdec, please refer to nvjpegdec; there is no image size limitation.
For png, you can use the GStreamer pngdec plugin.
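To decide between those two decoder elements for a given file, the image format can be sniffed from its magic bytes. A minimal stdlib sketch (the helper name and the format-to-element mapping are illustrative, not part of DeepStream):

```python
def suggest_decoder(header: bytes) -> str:
    """Map an image file's leading magic bytes to a GStreamer decoder element.

    nvjpegdec is the hardware-accelerated JPEG decoder on Jetson;
    pngdec is the software PNG decoder from gst-plugins-good.
    """
    if header[:2] == b"\xff\xd8":            # JPEG SOI marker
        return "nvjpegdec"
    if header[:8] == b"\x89PNG\r\n\x1a\n":   # PNG signature
        return "pngdec"
    return "unknown"
```

For example, `suggest_decoder(open("image.png", "rb").read(8))` would return `"pngdec"`, telling you the file must go through the software PNG path rather than nvjpegdec.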

I used the YOLOv7 example here: yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/yolo_deepstream · GitHub

I changed only the DeepStream config file:

[source0]
enable=1
#type=3 - video
type=2
#uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
uri=file:///home/jetson/old_repo/DeepStream-Yolo/0_PIL.jpg
#num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
#type=2 - display 
type=1
sync=0
gpu-id=0
nvbuf-memory-type=0

I can run it on a PC, but not on the Jetson Xavier.

Could you share the output of the following command? It seems decoding the .jpg failed.
gst-launch-1.0 filesrc location= /home/jetson/old_repo/DeepStream-Yolo/0_PIL.jpg ! nvjpegdec ! fakesink

Here is the output:

jetson@jetson-desktop:~/old_repo/DeepStream-Yolo-old$ gst-launch-1.0 filesrc location=/home/jetson/old_repo/DeepStream-Yolo-old/0.jpg ! nvjpegdec ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.000677763
Setting pipeline to NULL ...
Freeing pipeline ...

Is there a difference between GStreamer and DeepStream on the PC and on the board? I used the same versions for both.

From the log, decoding the .jpg succeeds on the Jetson. We notice you can't run inference on the .jpeg. Could you share the whole log?

The error is as follows:

Deserialize yoloLayer plugin: yolo
0:00:20.571198456 2283269 0xaaaae1a18b60 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT data            3x640x640       
1   OUTPUT kFLOAT num_detections  1               
2   OUTPUT kFLOAT detection_boxes 25200x4         
3   OUTPUT kFLOAT detection_scores 25200           
4   OUTPUT kFLOAT detection_classes 25200           

0:00:20.676990821 2283269 0xaaaae1a18b60 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine
0:00:20.859112030 2283269 0xaaaae1a18b60 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/config_infer_primary_yoloV7.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)	
**PERF:  0.00 (0.00)	
** INFO: <bus_callback:239>: Pipeline ready


(deepstream-app:2283269): GLib-GObject-WARNING **: 15:40:21.826: g_object_set_is_valid_property: object class 'GstNvJpegDec' has no property named 'DeepStream'
** INFO: <bus_callback:225>: Pipeline running

ERROR from typefind: Internal data stream error.
Debug info: gsttypefindelement.c(1228): gst_type_find_element_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason not-negotiated (-4)
nvstreammux: Successfully handled EOS for source_id=0
** INFO: <bus_callback:262>: Received EOS. Exiting ...

Quitting
App run failed

I don't know the reason. With the original .jpg I cannot run inference, but with some other .jpg files I can. What is the difference between these .jpg images? It happens only on the Jetson Xavier.

  1. Could you share the output of "gst-launch-1.0 uridecodebin uri=file:///home/jetson/old_repo/DeepStream-Yolo/0_PIL.jpg ! fakesink" on the Jetson?
  2. Could you share that 0_PIL.jpg? Thanks! We will try to reproduce.

Thanks.
Here is the output of the first command:

jetson@jetson-desktop:~/old_repo/DeepStream-Yolo-old$ gst-launch-1.0 uridecodebin uri=file:///home/jetson/old_repo/DeepStream-Yolo-old/0.jpg ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.000997126
Setting pipeline to NULL ...
Freeing pipeline ...

I have attached the .jpg image and the config file. I cannot run inference on this image.
image_config.zip (159.5 KB)

0:00:08.145733085 33748 0xaaaae7c1a640 DEBUG nvvideoconvert gstnvvideoconvert.c:2537:gst_nvvideoconvert_accept_caps:<nvvidconv_elem> could not transform video/x-raw(memory:NVMM), format=(string)Y444
This is a known issue: it is a YUV 4:4:4 format JPEG, and nvvideoconvert does not support the Y444 format.

So some .jpg images use the YUV 4:4:4 format, and I cannot run inference with those, while images with a different chroma format work.
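Whether a particular .jpg is 4:4:4 can be checked without GStreamer by reading the per-component sampling factors in the JPEG start-of-frame (SOF) segment: all components sampled at 1x1 means 4:4:4 (no chroma subsampling), while the common luma 2x2 / chroma 1x1 layout is 4:2:0. A minimal stdlib sketch (the function names are illustrative, not from DeepStream):

```python
import struct

def jpeg_sampling_factors(data: bytes):
    """Return {component_id: (h, v)} sampling factors from the SOF segment,
    or None if no SOF marker is found."""
    if data[:2] != b"\xff\xd8":                 # must start with SOI
        raise ValueError("not a JPEG file")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF or data[i + 1] == 0xFF:
            i += 1                              # skip fill bytes / stray data
            continue
        marker = data[i + 1]
        if marker == 0x01 or 0xD0 <= marker <= 0xD9:
            i += 2                              # TEM/RSTn/SOI/EOI carry no payload
            continue
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        if marker in (0xC0, 0xC1, 0xC2, 0xC3):  # baseline/extended/progressive SOF
            ncomp = data[i + 9]
            comps = {}
            for c in range(ncomp):
                off = i + 10 + 3 * c            # per component: id, (h<<4)|v, qtable
                comps[data[off]] = (data[off + 1] >> 4, data[off + 1] & 0x0F)
            return comps
        i += 2 + length
    return None

def is_yuv444(data: bytes) -> bool:
    """True when every component is sampled at 1x1, i.e. no chroma subsampling."""
    comps = jpeg_sampling_factors(data)
    return comps is not None and all(hv == (1, 1) for hv in comps.values())
```

An image for which `is_yuv444(open("0_PIL.jpg", "rb").read())` returns True would need to be re-encoded with 4:2:0 subsampling before this DeepStream version's nvvideoconvert can handle it.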

Yes, this bug will be fixed in a following version.


Thank you so much for supporting me.
