Okay, so I've used the DeepStream SDK approach with the deepstream-test1-usbcam-rtsp-out sample (deepstream_test_1_usb.py).
I've appropriately updated the config file, which is shown below:
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
# custom-lib-path,
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.
[property]
gpu-id=0
net-scale-factor=0.00392156862745098
offsets=0.0;0.0;0.0
# model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
# proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
tlt-model-key=tlt_encode
tlt-encoded-model=../../../../samples/models/Primary_Detector/resnet18_detector.etlt
model-engine-file=../../../../samples/models/Primary_Detector/resnet18.etlt_b1_gpu0_fp16.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
infer-dims=3;1088;1920
batch-size=1
network-mode=2
network-type=0
num-detected-classes=3
interval=0
gie-unique-id=1
uff-input-order=0
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
uff-input-blob-name=input_1
model-color-format=0
maintain-aspect-ratio=0
output-tensor-meta=0
#scaling-filter=0
#scaling-compute-hw=0
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
minBoxes=3
group-threshold=1
I've placed the .etlt file and labels.txt in the correct directory and tried to run the program. The error I got is shown below:
Creating Pipeline
Creating Source
Creating Video Converter
Creating H264 Encoder
Creating H264 rtppay
Playing cam /dev/video0
Adding elements to Pipeline
Linking elements in the Pipeline
*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***
Starting pipeline
Opening in BLOCKING MODE
0:00:00.503103662 803 0x7f78005b60 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/dli_apps/deepstream-test1-usbcam-rtsp-out/../../../../samples/models/Primary_Detector/resnet18.etlt_b1_gpu0_fp16.engine open error
0:00:02.534765217 803 0x7f78005b60 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/dli_apps/deepstream-test1-usbcam-rtsp-out/../../../../samples/models/Primary_Detector/resnet18.etlt_b1_gpu0_fp16.engine failed
0:00:02.535991850 803 0x7f78005b60 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/dli_apps/deepstream-test1-usbcam-rtsp-out/../../../../samples/models/Primary_Detector/resnet18.etlt_b1_gpu0_fp16.engine failed, try rebuild
0:00:02.536045966 803 0x7f78005b60 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/dli_apps/deepstream-test1-usbcam-rtsp-out/../../../../samples/models/Primary_Detector/resnet18_detector.etlt
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.022170464 803 0x7f78005b60 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
Not sure what I missed, or why DeepStream doesn't build the engine for me on the Jetson Nano.
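For reference, nvinfer seems to resolve the relative paths in the config against the directory containing the config file (that's the prefix shown in the log above), so here's a minimal Python sketch I can run on the Nano to check where those paths actually end up. The directory is taken from the log; the relative paths are copied from my config:

```python
import os

# Directory containing the nvinfer config file, taken from the error log above.
config_dir = ("/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/"
              "dli_apps/deepstream-test1-usbcam-rtsp-out")

# Relative paths copied from the [property] section of the config.
rel_paths = {
    "tlt-encoded-model": "../../../../samples/models/Primary_Detector/resnet18_detector.etlt",
    "model-engine-file": "../../../../samples/models/Primary_Detector/resnet18.etlt_b1_gpu0_fp16.engine",
    "labelfile-path": "../../../../samples/models/Primary_Detector/labels.txt",
    "int8-calib-file": "../../../../samples/models/Primary_Detector/cal_trt.bin",
}

for key, rel in rel_paths.items():
    resolved = os.path.abspath(os.path.join(config_dir, rel))
    print(f"{key}: {resolved}")
    print(f"  exists={os.path.exists(resolved)} readable={os.access(resolved, os.R_OK)}")
```

If the tlt-encoded-model line prints exists=False, the .etlt simply isn't where the config expects it to be.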
Jetson Nano versions:
DeepStream version: 6.0.1
JetPack version: 4.6.1-b110
TensorRT version: 8.2.1.8-1+cuda10.2
Model: DetectNet_v2 with ResNet-18
PC:
TAO Toolkit version used on the PC for model export: 4.0.0-tf1.15.5
Additionally, since I'm letting the DeepStream SDK generate the engine on the Nano, there shouldn't be any TensorRT or CUDA incompatibility, because it builds against the versions installed on the Nano itself.
Update
So I reflashed my Jetson Nano with DeepStream 5.1, since DetectNet_v2 TLT models are apparently compatible with that version of DeepStream. The DeepStream app runs, but the exact same error as before appears: it fails to open the TLT encoded model file. My question: **How can I let the Jetson Nano see the tlt-model-key?** I have a feeling it's not opening the .etlt file because of the key. Does the Jetson need additional libraries installed? Should I be connecting it to the NGC cloud? etc.
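For what it's worth, here's a quick sanity check I plan to run on the Nano to see whether the .etlt file can even be opened at that path, independent of any key. As far as I understand, tlt-model-key is only used to decrypt the contents after the file has been opened, so if a plain read already fails, the problem would be the path or permissions rather than the key. The path below is the resolved one from the log; it needs adjusting for the DeepStream 5.1 install after reflashing:

```python
# Quick sanity check: can the .etlt file be opened and read at all?
# Path assumed from the resolved path in the log above; adjust for the
# DeepStream 5.1 layout after reflashing.
etlt_path = ("/opt/nvidia/deepstream/deepstream-6.0/samples/models/"
             "Primary_Detector/resnet18_detector.etlt")

try:
    with open(etlt_path, "rb") as f:
        header = f.read(16)
    print(f"Opened {etlt_path}, first bytes: {header!r}")
except OSError as exc:
    print(f"Could not open the file: {exc}")
```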