I have a .jpeg image and a .jpg image. I can run inference on the .jpeg image but not on the .jpg image. As I understand it, .jpg and .jpeg are the same format. The difference between my images is size; the .jpg is larger.
What kinds of images does DeepStream support? Is it restricted by image size? Does DeepStream support .png images?
Thanks.
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description)
What is the whole media pipeline? How did you decode the jpeg/jpg? Could you share the exact command and the error log?
If you are using nvjpegdec, please refer to the nvjpegdec documentation; there is no image size limitation.
For png, you can use the GStreamer pngdec plugin.
Could you share the output of the following command? It seems decoding the jpg failed.
gst-launch-1.0 filesrc location=/home/jetson/old_repo/DeepStream-Yolo/0_PIL.jpg ! nvjpegdec ! fakesink
jetson@jetson-desktop:~/old_repo/DeepStream-Yolo-old$ gst-launch-1.0 filesrc location=/home/jetson/old_repo/DeepStream-Yolo-old/0.jpg ! nvjpegdec ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.000677763
Setting pipeline to NULL ...
Freeing pipeline ...
Deserialize yoloLayer plugin: yolo
0:00:20.571198456 2283269 0xaaaae1a18b60 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT data 3x640x640
1 OUTPUT kFLOAT num_detections 1
2 OUTPUT kFLOAT detection_boxes 25200x4
3 OUTPUT kFLOAT detection_scores 25200
4 OUTPUT kFLOAT detection_classes 25200
0:00:20.676990821 2283269 0xaaaae1a18b60 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine
0:00:20.859112030 2283269 0xaaaae1a18b60 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/config_infer_primary_yoloV7.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:239>: Pipeline ready
(deepstream-app:2283269): GLib-GObject-WARNING **: 15:40:21.826: g_object_set_is_valid_property: object class 'GstNvJpegDec' has no property named 'DeepStream'
** INFO: <bus_callback:225>: Pipeline running
ERROR from typefind: Internal data stream error.
Debug info: gsttypefindelement.c(1228): gst_type_find_element_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason not-negotiated (-4)
nvstreammux: Successfully handled EOS for source_id=0
** INFO: <bus_callback:262>: Received EOS. Exiting ...
Quitting
App run failed
I don't know the reason. With the original .jpg I cannot run inference, but with some other .jpg files I can. What is the difference between these .jpg images? It happens only on the Jetson Xavier.
jetson@jetson-desktop:~/old_repo/DeepStream-Yolo-old$ gst-launch-1.0 uridecodebin uri=file:///home/jetson/old_repo/DeepStream-Yolo-old/0.jpg ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.000997126
Setting pipeline to NULL ...
Freeing pipeline ...
I attached the .jpg image and config file. I cannot run inference with this image. image_config.zip (159.5 KB)
0:00:08.145733085 33748 0xaaaae7c1a640 DEBUG nvvideoconvert gstnvvideoconvert.c:2537:gst_nvvideoconvert_accept_caps:<nvvidconv_elem> could not transform video/x-raw(memory:NVMM), format=(string)Y444
This is a known issue: that file is a YUV 4:4:4 JPEG, and nvvideoconvert does not support the Y444 format.
So some .jpg images are encoded in YUV 4:4:4 format and I cannot run inference with those, while images encoded with a different chroma format work.
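For anyone hitting the same error: you can check which chroma subsampling a JPEG uses before feeding it to DeepStream, by reading the sampling factors from the SOF (Start Of Frame) marker. Below is a minimal stdlib-only sketch (not part of DeepStream; the filename `0.jpg` in the usage line matches the log above and is an assumption about your setup):

```python
def jpeg_subsampling(data: bytes) -> str:
    """Return the chroma subsampling ('4:4:4', '4:2:2', '4:2:0', ...) of a JPEG byte stream.

    Walks the marker segments until SOF0/SOF1/SOF2 and reads the luma
    component's sampling factors, which determine the subsampling scheme.
    """
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG: missing SOI marker")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            raise ValueError("corrupt JPEG: expected a marker byte")
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded data starts, no SOF found before it
            break
        if marker in (0xC0, 0xC1, 0xC2):  # SOF0/1/2: frame header
            # Layout after the marker: length(2) precision(1) height(2) width(2)
            # ncomp(1), then per component: id(1) sampling(1) qtable(1).
            hv = data[i + 11]             # luma sampling factors, packed H<<4 | V
            h, v = hv >> 4, hv & 0x0F
            return {(1, 1): "4:4:4", (2, 1): "4:2:2",
                    (2, 2): "4:2:0", (1, 2): "4:4:0"}.get((h, v), f"H{h}V{v}")
        seglen = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + seglen                   # skip this segment's payload
    return "unknown"

# Example usage on the file from the log above:
# print(jpeg_subsampling(open("0.jpg", "rb").read()))
```

If it reports `4:4:4`, re-encoding the image to 4:2:0 should make it decodable by the pipeline; for example, Pillow's JPEG writer takes a `subsampling` option (`Image.open("0.jpg").save("0_420.jpg", subsampling=2)` writes 4:2:0), though whether that fits your workflow is up to you.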