Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU):
Desktop: i7-8700, GTX1080 8G, 32GB RAM, 500GB SSD, Ubuntu 22.04
Orin: Model P3737
• DeepStream Version
Both 6.4
• JetPack Version (valid for Jetson only)
JetPack 6.0 (rev2)
• TensorRT Version
Desktop: Not Found
Orin: 8.6.2
• NVIDIA GPU Driver Version (valid for GPU only)
Desktop: 580.126.09, CUDA Version 13.0
Orin: NVIDIA-SMI 540.3.0 Driver Version: N/A CUDA Version: 12.2
• Issue Type( questions, new requirements, bugs)
test3 works well on my desktop (whether 1, 2, 3, 4, 6, or 8 recorded videos run simultaneously), but does not work on Orin.
test1 and test2 work well on both the desktop and the Jetson.
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
cd /opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-test3
Edit dstest3_pgie_config.txt: replace line 66 with model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_fp32.engine and line 74 with network-mode=0.
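For reference, the two modified lines in dstest3_pgie_config.txt are as follows (line numbers refer to the stock DeepStream 6.4 sample config and may shift between releases):

```
# line 66: point nvinfer at the prebuilt FP32 engine file
model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_fp32.engine
# line 74: network-mode=0 selects FP32 precision
network-mode=0
```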
sudo make CUDA_VER=12.2
./deepstream-test3-app file:///opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.h264
Then a new dark window appears and disappears immediately; the program crashes.
nvidia@ubuntu:/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-test3$ ./deepstream-test3-app file:///opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.h264
Now playing: file:///opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.h264,
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:05.331940074 5584 0xaaab0cd59a10 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60
0:00:05.677782254 5584 0xaaab0cd59a10 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_fp32.engine
0:00:05.688669119 5584 0xaaab0cd59a10 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source
(deepstream-test3-app:5584): GLib-GObject-WARNING **: 10:24:25.413: g_object_set_is_valid_property: object class ‘GstFileSrc’ has no property named ‘drop-on-latency’
Decodebin child added: decodebin0
Running…
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
Frame Number = 0 Number of objects = 16 Vehicle Count = 10 Person Count = 6
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
How can test3 be made compatible with Orin?