Hardware Platform: Jetson
DeepStream Version: 6.3
I am trying to use a JPG image, but it won't work and gives me the following error:
Object class 'GstNvJpegDec' has no property named 'DeepStream'
NvMMLiteBlockCreate : Block : BlockType = 256
[JPEG Decode] BeginSequence Display WidthxHeight 4484x2523
ERROR from typefind: Internal data stream error.
Debug info: gsttypefindelement.c(1228): gst_type_find_element_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason not-negotiated (-4)
nvstreammux: Successfully handled EOS for source_id=0
** INFO: <bus_callback:291>: Pipeline running
** INFO: <bus_callback:328>: Received EOS. Exiting ...
Quitting
[NvMultiObjectTracker] De-initialized
[JPEG Decode] NvMMLiteJPEGDecBlockPrivateClose done
[JPEG Decode] NvMMLiteJPEGDecBlockClose done
App run failed
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing it.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used
Jetson
DeepStream Version: 6.3
JetPack Version: 5.2
But what pipeline did you run?
This looks like config files for nvinfer, not a pipeline description.
This is the pipeline:
embedding_vector.txt (1.3 KB)
uri=file://13.jpg
Try using an absolute path, such as file:///absolute/path/13.jpg
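As a side note, a correct file:// URI (three slashes followed by an absolute path) can be built programmatically from a relative path. A minimal Python sketch; the helper name is my own:

```python
from pathlib import Path

# Hypothetical helper: turn a (possibly relative) image path into the
# file:// URI form that uridecodebin expects (file:// + absolute path).
def to_file_uri(path: str) -> str:
    return Path(path).resolve().as_uri()

print(to_file_uri("13.jpg"))  # e.g. file:///home/user/13.jpg
```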
I tried that as well and it's still not working.
Still the same error:
[JPEG Decode] BeginSequence Display WidthxHeight 4484x2523
ERROR from typefind: Internal data stream error.
Debug info: gsttypefindelement.c(1228): gst_type_find_element_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason not-negotiated (-4)
nvstreammux: Successfully handled EOS for source_id=0
** INFO: <bus_callback:291>: Pipeline running
** INFO: <bus_callback:328>: Received EOS. Exiting ...
Quitting
[NvMultiObjectTracker] De-initialized
[JPEG Decode] NvMMLiteJPEGDecBlockPrivateClose done
[JPEG Decode] NvMMLiteJPEGDecBlockClose done
App run failed
Since I don’t have your test images and models, I modified the configuration items as below.
The deepstream-app then runs normally.
Please check if there is a problem with your model.
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.jpg
config-file=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt
That’s weird! It worked for the sample image:
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.jpg
But it did not work for my image. Both of them are JPEGs.
What might be the problem?
I am not sure; can you share the picture?
For this one, it just shows the image without the bboxes and the detected objects.
At the moment I’m not sure if it’s a bug or a limitation.
You can first try to convert it with the following command:
ffmpeg -i 13.jpg 1.jpg
Your image is a progressive JPEG and needs to be converted to baseline.
In fact, after converting, I can get the bbox.
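The progressive/baseline distinction can be checked programmatically from the JPEG start-of-frame marker: SOF0 (0xFFC0) means baseline DCT, SOF2 (0xFFC2) means progressive DCT. A minimal sketch (the function name is my own, and other, rarer SOF variants are not handled):

```python
def jpeg_is_progressive(data: bytes) -> bool:
    # Walk the JPEG marker segments looking for the start-of-frame marker:
    # SOF0 (0xFFC0) = baseline DCT, SOF2 (0xFFC2) = progressive DCT.
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            raise ValueError("not a valid JPEG marker stream")
        marker = data[i + 1]
        if marker == 0xC0:   # SOF0: baseline
            return False
        if marker == 0xC2:   # SOF2: progressive
            return True
        # other header segments carry a 2-byte big-endian length field
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + seg_len
    raise ValueError("no SOF0/SOF2 marker found")

# usage (assuming the image from the thread is present):
# with open("13.jpg", "rb") as f:
#     print("progressive" if jpeg_is_progressive(f.read()) else "baseline")
```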
junshengy:
ffmpeg -i 13.jpg 1.jpg
Only baseline JPEG is supported; you need to use the above command to transcode.
system closed this topic on May 12, 2024, 9:13am.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.