DeepStream SDK 5.0 with NVIDIA GTX 960 (4 GB RAM) - deepstream-app throwing Bus error

• Hardware Platform (Jetson / GPU): GTX 960
• DeepStream Version: 5.0
• Docker: nvcr.io/nvidia/deepstream:5.0-dp-20.04-triton

Hi, I tried to run DeepStream 5.0.0 on a GPU, a GTX 960 (4 GB RAM).
I think I installed it successfully, since when I run

./deepstream-app --version-all
I got:
deepstream-app version 5.0.0
DeepStreamSDK 5.0.0
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 7.0
cuDNN Version: 7.6
libNVWarp360 Version: 2.0.1d3
When I run
./deepstream-app -c ../../../../samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt
It throws an error:
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: INVALID_STATE: std::exception
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1452 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt
0:00:00.291331821 1850 0x55d794891180 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1566> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt failed
0:00:00.291356652 1850 0x55d794891180 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1673> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt failed, try rebuild
0:00:00.291364972 1850 0x55d794891180 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:759 FP16 not supported by platform. Using FP32 mode.
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1177 FP16 not supported by platform. Using FP32 mode.

INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:32.033638747  1850 0x55d794891180 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp32.engine successfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 3x34x60         

0:00:32.045423536  1850 0x55d794891180 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** INFO: <bus_callback:181>: Pipeline ready


** INFO: <bus_callback:167>: Pipeline running

KLT Tracker Init
Bus error (core dumped)

My Current Config:
# Copyright © 2020 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
num-sources=1
uri=rtsp://admin:pass12345@192.168.0.116
gpu-id=0

[streammux]
gpu-id=0
batch-size=1
live-source=1
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial

[primary-gie]
enable=1
gpu-id=0
# Modify as necessary
model-engine-file=../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
config-file=config_infer_primary_peoplenet.txt

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[tracker]
enable=1
tracker-width=640
tracker-height=384
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
ll-config-file=../deepstream-app/tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=1

[tests]
file-loop=0

I'm using the PeopleNet pretrained model.
How can I fix it?
Thanks

Could you check the backtrace in the core dump file?
BTW, may I know why you need to use the Triton docker? Do you plan to use the nvinferserver plugin?
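To capture that backtrace, something like the following should work inside the container (a sketch; the config path is the one from this thread, and gdb must be installed in the container):

```shell
# Allow core dumps to be written (run in the same shell before the app)
ulimit -c unlimited

# Run the app under gdb and print backtraces for all threads when it crashes
gdb -batch -ex run -ex "thread apply all bt" --args \
    ./deepstream-app -c ../../../../samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt
```

The `thread apply all bt` output is what identifies which thread and library hit the SIGBUS.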

You may have loaded an incompatible engine file.

The first error message: "Serialization Error in verifyHeader: 0 (Magic tag does not match)"

This engine file may need to be rebuilt; I think engine files are generally tied to the hardware platform they were generated on.
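Concretely, the log above shows DeepStream already rebuilt the engine on this machine and serialized it as `...etlt_b1_gpu0_fp32.engine` (the GTX 960 has no FP16 support, per the warnings), while `[primary-gie]` still points at an `_fp16` engine. A sketch of the corrected line, assuming the paths from the posted config:

```
[primary-gie]
# Point at the engine DeepStream actually serialized on this machine (FP32 on a GTX 960)
model-engine-file=../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp32.engine
```

With a matching `model-engine-file`, the app can deserialize the engine directly instead of rebuilding it on every run.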

Hi @bcao
Here is gdb output:
root@e038dc170dbc:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-app# gdb -args ./deepstream-app -c ../../../../samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./deepstream-app...(no debugging symbols found)...done.
(gdb) run
Starting program: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-app/deepstream-app -c ../../../../samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt
warning: Error disabling address space randomization: Operation not permitted
warning: Probes-based dynamic linker interface failed.
Reverting to original interface.

process 2011 is executing new program: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-app/deepstream-app
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7f7ec6514700 (LWP 2015)]
[New Thread 0x7f7ea3185700 (LWP 2016)]
[New Thread 0x7f7e97fff700 (LWP 2017)]
[New Thread 0x7f7e93ffe700 (LWP 2018)]
[New Thread 0x7f7e87fff700 (LWP 2019)]
[New Thread 0x7f7e7bfff700 (LWP 2020)]
[New Thread 0x7f7e6ffff700 (LWP 2021)]
[New Thread 0x7f7e63fff700 (LWP 2022)]
[New Thread 0x7f7e57fff700 (LWP 2023)]
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
[New Thread 0x7f7e4bfff700 (LWP 2024)]
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: INVALID_STATE: std::exception
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1452 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt
0:00:00.789190573  2011 0x562804591920 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1566> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt failed
0:00:00.789216670  2011 0x562804591920 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1673> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt failed, try rebuild
0:00:00.789226003  2011 0x562804591920 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:759 FP16 not supported by platform. Using FP32 mode.
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1177 FP16 not supported by platform. Using FP32 mode.
[New Thread 0x7f7e3ffff700 (LWP 2025)]



INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:32.230374983  2011 0x562804591920 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp32.engine successfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 3x34x60         

[New Thread 0x7f7e33fff700 (LWP 2026)]
[New Thread 0x7f7e2fffe700 (LWP 2027)]
[New Thread 0x7f7e2bffd700 (LWP 2028)]
0:00:32.239327838  2011 0x562804591920 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt sucessfully
[New Thread 0x7f7e17fff700 (LWP 2029)]
[New Thread 0x7f7e13ffe700 (LWP 2030)]
[New Thread 0x7f7e07fff700 (LWP 2031)]
[New Thread 0x7f7dfbfff700 (LWP 2032)]
[New Thread 0x7f7deffff700 (LWP 2033)]

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** INFO: <bus_callback:181>: Pipeline ready

[New Thread 0x7f7de3fff700 (LWP 2034)]
[New Thread 0x7f7dd7fff700 (LWP 2035)]
[New Thread 0x7f7dcbfff700 (LWP 2036)]

[New Thread 0x7f7dbffff700 (LWP 2038)]
[New Thread 0x7f7db3fff700 (LWP 2039)]
[New Thread 0x7f7dafffe700 (LWP 2040)]
[New Thread 0x7f7d9bfff700 (LWP 2041)]
[New Thread 0x7f7d97ffe700 (LWP 2042)]
**PERF: 0.00 (0.00)	
[New Thread 0x7f7d8bfff700 (LWP 2043)]
[New Thread 0x7f7d7ffff700 (LWP 2044)]
** INFO: <bus_callback:167>: Pipeline running

[New Thread 0x7f7d73fff700 (LWP 2045)]
[New Thread 0x7f7d67fff700 (LWP 2046)]
[New Thread 0x7f7d5bfff700 (LWP 2047)]
KLT Tracker Init

Thread 33 "deepstream-app" received signal SIGBUS, Bus error.
[Switching to Thread 0x7f7d5bfff700 (LWP 2047)]
__memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:423
423	../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such file or directory.
(gdb) run

BTW, may I know why you need to use triton docker, do you plan to use nvinferserver plugin?
Sure, I need to use nvinferserver

Hi @liangjia1989,
I can run DeepStream 5 successfully on another computer (GPU: 1070 Ti) with the same config file, the same docker, and the same error messages.

Do you have any idea?

There is no useful info there. I mean, could you share the backtrace contained in the core dump file generated when the app crashes?

Alternatively, you could try to use the ONNX model. The TensorRT engine file used for deployment is generated for a specific NVIDIA platform, so you may need to regenerate it: an engine file generated on one machine is not usable on other machines and needs to be rebuilt there.
Hope it helps you.
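For an ONNX model, rebuilding the engine on the target machine can be sketched with trtexec, which ships with TensorRT (the model and engine file names below are placeholders):

```shell
# Rebuild a TensorRT engine on the machine where it will actually run.
# model.onnx / model.engine are placeholder names; adjust paths as needed.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.engine
```
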

Hey, have you fixed the error?

Hi @liangjia1989, @bcao,
Actually, I don't think I can fix that error. I tried another PC with a different GPU, and DeepStream 5 works perfectly there.
Thank you guys