Please provide the following information when requesting support.
• Hardware: Jetson TX2
• Network Type: ResNet18 + UNet
• TLT Version: (Please run "tlt info --verbose" and share "docker_tag" here)
• Training spec file: unet_train_resnet_unet_isbi.txt (2.8 KB)
• How to reproduce the issue? The problem occurred when moving the trained model to the TX2.
The spec file of my trained model is unet_train_resnet_unet_isbi.txt (2.8 KB),
and the pgie_unet_config_file is pgie_unet_tlt_config(3).txt (2.1 KB).
I was using the above website as a reference.
################################################################################
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#!/usr/bin/env python3
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
import sys
Thank you for your reply, I have followed the readme. But I think the problem is that I generated the engine file with TRT 7.2.3 (TAO 3.0), and the TRT OSS plugin doesn't seem to support that version.
Is there a way to upgrade the TRT version in the plugin, or to downgrade the TRT version in the docker?
It is not related to the TRT version of the docker, because you will run inference on the TX2. So, you need to copy the etlt model to the TX2, and then
generate the TRT engine on the TX2 via tao-converter, or
let DeepStream generate the TRT engine.
We just build the TRT OSS plugin in order to replace libnvinfer_plugin.so.
Please try to run GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream with the official demo etlt models.
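For the tao-converter route on the TX2, the invocation for a UNet etlt might look like the following sketch. The key, the input tensor name (input_1), the 320x320 shapes, and the file names are placeholders I am assuming here; take the real values from your own export command and training spec:

```shell
# Sketch: convert the exported .etlt to a TRT engine directly on the TX2.
# $KEY, input_1, the min/opt/max shapes, and the file names are placeholders
# -- replace them with the values from your own model export and spec file.
./tao-converter unet_resnet18.etlt \
    -k $KEY \
    -p input_1,1x3x320x320,4x3x320x320,16x3x320x320 \
    -t fp16 \
    -e unet_resnet18.fp16.engine
```

Because the engine is generated on the TX2 itself, it matches the device's TensorRT/CUDA versions by construction.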
Refer to the topic below; I rebuilt the TRT OSS plugin and replaced it on an NX.
Can you rebuild libnvinfer_plugin.so again? I can generate the TRT engine successfully on an NX with this official yolo_v4 etlt model.
Step:
$ git clone -b 21.03 https://github.com/nvidia/TensorRT
$ cd TensorRT/
$ git submodule update --init --recursive
$ export TRT_SOURCE=`pwd`
$ cd $TRT_SOURCE
$ mkdir -p build && cd build
$ /usr/local/bin/cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
$ ll /usr/lib/aarch64…
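The steps above end at configuring the build; the remaining piece is building the plugin target and swapping in the resulting library. A sketch, assuming the build produced a TRT 7.x soname (the exact file name, e.g. libnvinfer_plugin.so.7.x.y, depends on your JetPack/TensorRT install, so check the `out/` directory and the backup path yourself):

```shell
# Build only the plugin library, then replace the system copy.
# "7.x.y" is a placeholder soname -- verify the real version first.
make nvinfer_plugin -j$(nproc)
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y \
        /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y.bak
sudo cp out/libnvinfer_plugin.so.7.x.y /usr/lib/aarch64-linux-gnu/
sudo ldconfig
```

Keeping the `.bak` copy lets you restore the stock plugin if anything misbehaves.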
For more reference, see YOLOv4 - NVIDIA Docs.
So I can't just run through this
and use the engine file it generated?
Yes,
Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the inference environment's TensorRT or CUDA libraries are updated (including minor version updates), or if a new model is generated, new engines need to be generated. Running an engine that was generated with a different version of TensorRT and CUDA is not supported: it will cause unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.
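As a concrete reading of that rule, a deployment script could refuse to reuse an engine unless the build and runtime versions match exactly. A minimal sketch; the version numbers shown are just example values for a TAO 3.0 container and a TX2:

```python
def engine_compatible(build_env, runtime_env):
    """Return True only when the TensorRT and CUDA versions used to build
    a TRT engine exactly match the runtime that will load it."""
    return (build_env["tensorrt"] == runtime_env["tensorrt"]
            and build_env["cuda"] == runtime_env["cuda"])

# Example values: engine built inside a TAO 3.0 container vs. a TX2 runtime.
build = {"tensorrt": "7.2.3", "cuda": "11.1"}
tx2 = {"tensorrt": "7.1.3", "cuda": "10.2"}
print(engine_compatible(build, tx2))  # False: regenerate the engine on the TX2
print(engine_compatible(tx2, tx2))    # True: an engine generated on-device is fine
```

This is exactly why copying the etlt model and converting on the TX2 works, while copying a pre-built engine does not.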
Thanks for the reply. I have followed your advice and run the tao-converter on the TX2, but now this problem
has occurred.
I can't understand why it's related to ASCII?
I solved it by fixing the key value; I wasn't careful enough, my bad. Thank you for your reply, appreciated!
system closed this topic on October 29, 2021, 1:23pm (#11).
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.