Cannot run inference on 1 image on Jetson Xavier NX, but I can run it on PC?

I cannot run inference with a .engine model on 1 image on Jetson Xavier NX, but I can on a PC with the same config (using the correct .engine model for each platform). Here are my 2 config files:
deepstream_app_config.txt (963 Bytes)
config_infer_primary_yoloV7.txt (647 Bytes)

I checked that the image path is correct, but I get the following error on Jetson Xavier NX:

Deserialize yoloLayer plugin: yolo
0:00:20.571198456 2283269 0xaaaae1a18b60 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT data            3x640x640       
1   OUTPUT kFLOAT num_detections  1               
2   OUTPUT kFLOAT detection_boxes 25200x4         
3   OUTPUT kFLOAT detection_scores 25200           
4   OUTPUT kFLOAT detection_classes 25200           

0:00:20.676990821 2283269 0xaaaae1a18b60 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine
0:00:20.859112030 2283269 0xaaaae1a18b60 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/config_infer_primary_yoloV7.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)	
**PERF:  0.00 (0.00)	
** INFO: <bus_callback:239>: Pipeline ready


(deepstream-app:2283269): GLib-GObject-WARNING **: 15:40:21.826: g_object_set_is_valid_property: object class 'GstNvJpegDec' has no property named 'DeepStream'
** INFO: <bus_callback:225>: Pipeline running

ERROR from typefind: Internal data stream error.
Debug info: gsttypefindelement.c(1228): gst_type_find_element_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason not-negotiated (-4)
nvstreammux: Successfully handled EOS for source_id=0
** INFO: <bus_callback:262>: Received EOS. Exiting ...

Quitting
App run failed

Thanks


@mchi Please help us with the issue. Thank you.

What’s the JetPack version in use? Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used

@yuweiw
My device: Jetson Xavier NX 8 GB
JetPack: 5.1.1
DeepStream version: 6.2
TensorRT version: 8.5.2.2
CUDA: 11.4.315
cuDNN: 8.6.0.166

Please check this for me. I can run inference on 1 image in a Docker container on the PC, but I cannot on Jetson Xavier NX with the above config files.

I saw that the output contains this part:

INFO: [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT data            3x640x640       
1   OUTPUT kFLOAT num_detections  1               
2   OUTPUT kFLOAT detection_boxes 25200x4         
3   OUTPUT kFLOAT detection_scores 25200           
4   OUTPUT kFLOAT detection_classes 25200 

I think it does run inference, but something goes wrong. In the config I tried to write the output to

gie-kitti-output-dir=/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/output_today

I created the save folder, but there is nothing in it.
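For reference, that key sits in the [application] group of deepstream_app_config.txt; a rough sketch of that group is below (values other than the output directory are illustrative, not copied from my file):

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
# KITTI-format label files are only written here when frames actually get inferred
gie-kitti-output-dir=/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo/output_today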
Thanks

1. Could you try adding GST_DEBUG=3 before the command you used, to get more detailed logs? (An example command is shown after this list.)
2. We have a DeepStream YOLOv7 demo app; you can refer to the link below:
https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/deepstream_yolo
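For example, using the config file from your first post:

GST_DEBUG=3 deepstream-app -c deepstream_app_config.txt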

@yuweiw
Thanks. Here is the more detailed error:

jetson@jetson-desktop:~/losslessAI_OB/DeepStream-Yolo$ GST_DEBUG=3 deepstream-app -c deepstream_app_config.txt
0:00:08.216848769 41519 0xaaab100c9760 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/jetson/losslessAI_OB/DeepStream-Yolo/yolov7.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT output          25200x6         

0:00:08.293340109 41519 0xaaab100c9760 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/jetson/losslessAI_OB/DeepStream-Yolo/yolov7.onnx_b1_gpu0_fp32.engine
0:00:08.307682843 41519 0xaaab100c9760 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/jetson/losslessAI_OB/DeepStream-Yolo/config_infer_primary_yoloV7.txt sucessfully
0:00:08.312735074 41519 0xaaab100c9760 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
0:00:08.314694417 41519 0xaaab100c9760 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)	
**PERF:  0.00 (0.00)	
** INFO: <bus_callback:239>: Pipeline ready


(deepstream-app:41519): GLib-GObject-WARNING **: 10:55:39.805: g_object_set_is_valid_property: object class 'GstNvJpegDec' has no property named 'DeepStream'
0:00:08.346496709 41519 0xaaab10519360 FIXME           videodecoder gstvideodecoder.c:946:gst_video_decoder_drain_out:<nvjpegdec0> Sub-class should implement drain()
0:00:08.347024201 41519 0xaaab10519360 FIXME           videodecoder gstvideodecoder.c:946:gst_video_decoder_drain_out:<nvjpegdec0> Sub-class should implement drain()
0:00:08.347321260 41519 0xaaab10519360 WARN            videodecoder gstvideodecoder.c:2425:gst_video_decoder_chain:<nvjpegdec0> Received buffer without a new-segment. Assuming timestamps start from 0.
0:00:08.359897004 41519 0xaaab10519360 WARN                GST_PADS gstpad.c:4231:gst_pad_peer_query:<nvjpegdec0:src> could not send sticky events
0:00:08.360847188 41519 0xaaab10519360 WARN                GST_PADS gstpad.c:4231:gst_pad_peer_query:<nvjpegdec0:src> could not send sticky events
0:00:08.364145741 41519 0xaaab10519360 WARN                typefind gsttypefindelement.c:1228:gst_type_find_element_loop:<typefind> error: Internal data stream error.
0:00:08.364213645 41519 0xaaab10519360 WARN                typefind gsttypefindelement.c:1228:gst_type_find_element_loop:<typefind> error: streaming stopped, reason not-negotiated (-4)
** INFO: <bus_callback:225>: Pipeline running

ERROR from typefind: Internal data stream error.
Debug info: gsttypefindelement.c(1228): gst_type_find_element_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason not-negotiated (-4)
nvstreammux: Successfully handled EOS for source_id=0
** INFO: <bus_callback:262>: Received EOS. Exiting ...

Quitting
0:00:09.398346551 41519 0xaaab100c9760 ERROR                GST_BUS gstbus.c:1066:gst_bus_remove_watch:<bus1> no bus watch was present
App run failed

Please help me.

Just from the log, nvjpegdec reports an error. Could you try the following two methods? (There is also a standalone decode test sketched after the list.)

  1. Change the source to some other JPEG picture
  2. Change the source to a video
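To check whether the Jetson JPEG decode path itself is at fault, a standalone pipeline like the one below exercises nvjpegdec outside of deepstream-app (a sketch only; the element chain and test file path are assumptions, not taken from this thread):

gst-launch-1.0 filesrc location=/path/to/test.jpg ! jpegparse ! nvjpegdec ! fakesink -v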

@yuweiw I can run successfully with a video; here is the relevant part of the config for the video source:

[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

But when I change to another JPG picture, the error is the same as above. (The image source group is sketched below for comparison.)
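For comparison, the image source group looks roughly like this (a sketch; the file path is a placeholder and the exact values are in the attached deepstream_app_config.txt):

[source0]
enable=1
# type=3: URI-based source (uridecodebin), matching the GstURIDecodeBin/nvjpegdec elements in the error trace
type=3
uri=file:///path/to/image.jpg
num-sources=1
gpu-id=0
cudadec-memtype=0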

OK. There are some problems with JPEG processing on JetPack 5.1.1 with DeepStream 6.2. Could you try JetPack 5.1 first? See the topic below:
https://forums.developer.nvidia.com/t/jpeg-parameter-struct-mismatch-error-in-deepstream-transfer-learning-app-ds-v6-2/250013
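If it helps, you can double-check the installed versions before changing anything; the commands below are a common way to do that (assuming an apt-based JetPack install, so the package name is an assumption):

dpkg -l | grep nvidia-jetpack    # JetPack meta-package version
deepstream-app --version-all     # DeepStream and dependency versions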

