Convert ONNX to engine model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only) 535.216.01
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

When trying to build a model (.onnx to .engine), the following error occurs:
AttributeError: module 'tensorrt' has no attribute 'Logger'

The code used to build the model follows:
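To narrow down which TensorRT Python bindings are actually visible to the interpreter, a quick check with importlib can probe the three candidate module names without crashing if one is missing (this snippet is a diagnostic sketch, not part of the original script):

```python
import importlib.util

# Probe which TensorRT Python modules are importable. The lean/dispatch
# apt packages provide tensorrt_lean / tensorrt_dispatch, while the full
# bindings provide the plain "tensorrt" module that exposes trt.Logger.
for mod in ("tensorrt", "tensorrt_lean", "tensorrt_dispatch"):
    spec = importlib.util.find_spec(mod)
    print(f"{mod}: {'found at ' + str(spec.origin) if spec else 'not installed'}")
```

If "tensorrt" is reported as not installed (or resolves to an unexpected location), that would explain the AttributeError above.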
import os
import math
import argparse

from loguru import logger
import tensorrt as trt

TRT_LOGGER = trt.Logger()


def build_engine(onnx_file_path, inputname, engine_file_path="", set_input_shape=None):
    """Takes an ONNX file and creates a TensorRT engine to run inference with."""
    network_creation_flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

    with trt.Builder(TRT_LOGGER) as builder, builder.create_network(
        network_creation_flag
    ) as network, builder.create_builder_config() as config, trt.OnnxParser(
        network, TRT_LOGGER
    ) as parser, trt.Runtime(
        TRT_LOGGER
    ) as runtime:
        config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 28)  # 256 MiB
        # Parse the model file
        if not os.path.exists(onnx_file_path):
            print(
                "ONNX file {} not found, please run yolov3_to_onnx.py first to generate it.".format(
                    onnx_file_path
                )
            )
            exit(0)
        print("Loading ONNX file from path {}...".format(onnx_file_path))
        with open(onnx_file_path, "rb") as model:
            print("Beginning ONNX file parsing")
            if not parser.parse(model.read()):
                print("ERROR: Failed to parse the ONNX file.")
                for error in range(parser.num_errors):
                    print(parser.get_error(error))
                return None
        # Set the input shape via an optimization profile
        profile = builder.create_optimization_profile()
        logger.debug("total input layers: {}".format(network.num_inputs))
        logger.debug(network.num_outputs)
        output = network.get_output(0)
        logger.debug(output.shape)
        for i in range(network.num_inputs):
            input = network.get_input(i)
            # assert input.shape[0] == -1
            logger.debug("input layer-{}: {}".format(i, input.name))
        profile.set_shape(inputname, set_input_shape[0], set_input_shape[1], set_input_shape[2])
        config.add_optimization_profile(profile)
        logger.debug("build, may take a while...")

        plan = builder.build_serialized_network(network, config)
        engine = runtime.deserialize_cuda_engine(plan)
        print("Completed creating Engine")
        with open(engine_file_path, "wb") as f:
            f.write(plan)
        return engine


def main(args):
    trt_file_name = args.onnx.replace('.onnx', '_bs{}.trt'.format(args.batch))
    input_shape = [
        (1, 3, args.size, args.size),
        (math.ceil(args.batch / 2), 3, args.size, args.size),
        (args.batch, 3, args.size, args.size),
    ]
    logger.debug("set input shape: {}".format(input_shape))
    build_engine(args.onnx, args.name, trt_file_name, input_shape)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--onnx', type=str, default='/home/eduardo/Devel/Models/Arcface_DS_7.0/w600k_r50.onnx', help='onnx path')
    parser.add_argument('-s', '--size', type=int, default=112, help='input shape')
    parser.add_argument('-b', '--batch', type=int, default=1, help='max batch size')
    parser.add_argument('-n', '--name', type=str, default='input.1', help='input name')
    args = parser.parse_args()
    main(args=args)

Notes:

  • The DeepStream Python apps are running well.

  • TensorRT was installed according to the DeepStream 7.0 installation guide:
    sudo apt-get install --no-install-recommends libnvinfer-lean8=8.6.1.6-1+cuda12.0 libnvinfer-vc-plugin8=8.6.1.6-1+cuda12.0
    libnvinfer-headers-dev=8.6.1.6-1+cuda12.0 libnvinfer-dev=8.6.1.6-1+cuda12.0 libnvinfer-headers-plugin-dev=8.6.1.6-1+cuda12.0
    libnvinfer-plugin-dev=8.6.1.6-1+cuda12.0 libnvonnxparsers-dev=8.6.1.6-1+cuda12.0 libnvinfer-lean-dev=8.6.1.6-1+cuda12.0
    libnvparsers-dev=8.6.1.6-1+cuda12.0 python3-libnvinfer-lean=8.6.1.6-1+cuda12.0 python3-libnvinfer-dispatch=8.6.1.6-1+cuda12.0
    uff-converter-tf=8.6.1.6-1+cuda12.0 onnx-graphsurgeon=8.6.1.6-1+cuda12.0 libnvinfer-bin=8.6.1.6-1+cuda12.0
    libnvinfer-dispatch-dev=8.6.1.6-1+cuda12.0 libnvinfer-dispatch8=8.6.1.6-1+cuda12.0 libnvonnxparsers-dev=8.6.1.6-1+cuda12.0
    libnvonnxparsers8=8.6.1.6-1+cuda12.0 libnvinfer-vc-plugin-dev=8.6.1.6-1+cuda12.0 libnvinfer-samples=8.6.1.6-1+cuda12.0

  • Checking the TensorRT installation with dpkg -l | grep nvinfer:
    ii libnvinfer-bin 8.6.1.6-1+cuda12.0 amd64 TensorRT binaries
    ii libnvinfer-dev 8.6.1.6-1+cuda12.0 amd64 TensorRT development libraries
    ii libnvinfer-dispatch-dev 8.6.1.6-1+cuda12.0 amd64 TensorRT development dispatch runtime libraries
    ii libnvinfer-dispatch8 8.6.1.6-1+cuda12.0 amd64 TensorRT dispatch runtime library
    ii libnvinfer-headers-dev 8.6.1.6-1+cuda12.0 amd64 TensorRT development headers
    ii libnvinfer-headers-plugin-dev 8.6.1.6-1+cuda12.0 amd64 TensorRT plugin headers
    ii libnvinfer-lean-dev 8.6.1.6-1+cuda12.0 amd64 TensorRT lean runtime libraries
    ii libnvinfer-lean8 8.6.1.6-1+cuda12.0 amd64 TensorRT lean runtime library
    ii libnvinfer-plugin-dev 8.6.1.6-1+cuda12.0 amd64 TensorRT plugin libraries
    ii libnvinfer-plugin8 8.6.1.6-1+cuda12.0 amd64 TensorRT plugin libraries
    ii libnvinfer-samples 8.6.1.6-1+cuda12.0 all TensorRT samples
    ii libnvinfer-vc-plugin-dev 8.6.1.6-1+cuda12.0 amd64 TensorRT vc-plugin library
    ii libnvinfer-vc-plugin8 8.6.1.6-1+cuda12.0 amd64 TensorRT vc-plugin library
    ii libnvinfer8 8.6.1.6-1+cuda12.0 amd64 TensorRT runtime libraries
    ii python3-libnvinfer-dispatch 8.6.1.6-1+cuda12.0 amd64 Python 3 bindings for TensorRT dispatch runtime
    ii python3-libnvinfer-lean 8.6.1.6-1+cuda12.0 amd64 Python 3 bindings for TensorRT lean runtime

Moving to the TensorRT forum for better support, thanks.

Any tips? I'd really appreciate your help.

Solution: install the full TensorRT Python bindings with pip install tensorrt==8.6.1.post1

The likely cause is that the apt list above only installs the lean and dispatch Python bindings (python3-libnvinfer-lean, python3-libnvinfer-dispatch), not the full python3-libnvinfer package, so the tensorrt module that provides trt.Logger was missing.
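As a sanity check that is independent of the Python bindings, the trtexec binary shipped with libnvinfer-bin can build the same engine directly. The install path below is the usual location for the Debian TensorRT packages, and the model path and input name are taken from the script's defaults, so adjust as needed:

```shell
# Build a static batch-1 engine with trtexec (path may differ on your system).
/usr/src/tensorrt/bin/trtexec \
  --onnx=/home/eduardo/Devel/Models/Arcface_DS_7.0/w600k_r50.onnx \
  --minShapes=input.1:1x3x112x112 \
  --optShapes=input.1:1x3x112x112 \
  --maxShapes=input.1:1x3x112x112 \
  --saveEngine=w600k_r50_bs1.engine
```

If trtexec succeeds where the Python script fails, the problem is confined to the Python bindings rather than the TensorRT libraries themselves.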

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.