Hey folks,
I am following this reference: GitHub - NVIDIA-AI-IOT/deepstream_reference_apps (Samples for TensorRT/Deepstream for Tesla & Jetson), but while running the sample segmentation app I get the error logged below:
$ sudo deepstream-app -c deepstream_app_source1_segmentation.txt
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1484 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/../../models/tao_pretrained_models/peopleSegNet/V2/peoplesegnet_resnet50.etlt_b1_gpu0_int8.engine open error
0:00:01.527215645 12636 0x55d9caaba030 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/../../models/tao_pretrained_models/peopleSegNet/V2/peoplesegnet_resnet50.etlt_b1_gpu0_int8.engine failed
0:00:01.527271983 12636 0x55d9caaba030 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/../../models/tao_pretrained_models/peopleSegNet/V2/peoplesegnet_resnet50.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:01.527285679 12636 0x55d9caaba030 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Validator error: pyramid_crop_and_resize_mask: Unsupported operation _MultilevelCropAndResize_TRT
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:358 Failed to build network, error in model parsing.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:724 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:01.668165417 12636 0x55d9caaba030 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
terminate called after throwing an instance of 'nvinfer1::InternalError'
what(): Assertion mRefCount > 0 failed.
Aborted
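For reference, the relative model paths in the log resolve from the DeepStream sample config directory, i.e. I launch the app roughly like this (working directory inferred from the paths in the error output above):
# Run from the TAO sample config directory so the relative ../../models/... paths resolve
cd /opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models
sudo deepstream-app -c deepstream_app_source1_segmentation.txt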
The config file (deepstream_reference_apps/deepstream_app_source1_segmentation.txt at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub) looks like this:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
[tiled-display]
enable=1
rows=1
columns=1
width=960
height=540
gpu-id=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file://../../streams/sample_qHD.mp4
gpu-id=0
[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=40000
## Set muxer output width and height
width=960
height=540
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
display-mask=1
display-bbox=0
display-text=0
[primary-gie]
enable=1
gpu-id=0
# Modify as necessary
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
# Replace the infer primary config file when you need to
# use other detection models
# model-engine-file=../../models/tao_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine
config-file=config_infer_primary_peopleSegNet.txt
[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0
[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
[tracker]
enable=1
# For NvDCF and DeepSORT trackers, tracker-width and tracker-height must each be a multiple of 32
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../deepstream-app/config_tracker_IOU.yml
ll-config-file=../deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1
[tests]
file-loop=0
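To rule out a missing model download on my side, the directory the error log points at can be listed like this (path copied from the engine path in the log above):
# Check that the PeopleSegNet .etlt model and related files are actually present
ls -l /opt/nvidia/deepstream/deepstream-6.0/samples/models/tao_pretrained_models/peopleSegNet/V2/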
Any help and suggestions would be much appreciated.
Environment:
deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.4
TensorRT Version: 8.0
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3
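For reference, the list above matches the output of DeepStream's own version report:
# Print the DeepStream SDK and dependency versions
deepstream-app --version-all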