Convert the ONNX model to the TensorRT engine

In the open-source project lightweight-human-pose-estimation.pytorch (GitHub: Daniil-Osokin/lightweight-human-pose-estimation.pytorch, a PyTorch implementation of the paper "Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose"), there is a pre-trained PyTorch model, checkpoint_iter_370000.pth, which I successfully converted to ONNX and then moved on to building the TensorRT engine.
During the conversion to the TensorRT engine, the terminal keeps printing the same warning:
[07/17/2024-16:01:21] [TRT] [W] Unknown embedded device detected. Using 59660MiB as the allocation cap for memory on embedded devices.
(the identical warning is repeated several more times)
What do I need to do to eliminate these warnings?

The code I used to convert the ONNX model to the TensorRT engine is as follows:

import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # initializes the CUDA context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_file_path, engine_file_path):
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    ) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:

        config = builder.create_builder_config()
        config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB

        # Parse the ONNX file and report any parser errors
        with open(onnx_file_path, 'rb') as model:
            if not parser.parse(model.read()):
                for error in range(parser.num_errors):
                    print(parser.get_error(error))
                return None

        serialized_engine = builder.build_serialized_network(network, config)
        if serialized_engine is None:
            print("Failed to build the TensorRT engine")
            return None
        with open(engine_file_path, "wb") as f:
            f.write(serialized_engine)

def main():
    onnx_file_path = "human-pose-estimation.onnx"
    engine_file_path = "human-pose-estimation.trt"
    build_engine(onnx_file_path, engine_file_path)
    print("TensorRT engine has been built and saved to", engine_file_path)

if __name__ == "__main__":
    main()

Hi,

This is a known issue and the message is a harmless warning.
In our latest JetPack 6.0 GA, trtexec no longer shows this message.
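If upgrading JetPack is not an option and the warning is just noise, one workaround (an assumption on my part, not something I have verified on this device) is to raise the logger severity when building, e.g. TRT_LOGGER = trt.Logger(trt.Logger.ERROR), so that warnings are filtered out while errors still appear. The filtering idea can be sketched standalone, without TensorRT installed:

```python
# Hypothetical sketch of severity-based log filtering, mirroring how a
# trt.ILogger subclass could suppress warnings. FilteredLogger and its
# severity table are illustrative names, not TensorRT APIs.
class FilteredLogger:
    # Lower number = more severe, following TensorRT's ordering
    SEVERITIES = {"INTERNAL_ERROR": 0, "ERROR": 1, "WARNING": 2,
                  "INFO": 3, "VERBOSE": 4}

    def __init__(self, min_severity="ERROR"):
        # Only messages at or above this severity are kept
        self.threshold = self.SEVERITIES[min_severity]
        self.messages = []

    def log(self, severity, msg):
        if self.SEVERITIES[severity] <= self.threshold:
            self.messages.append((severity, msg))

logger = FilteredLogger("ERROR")
logger.log("WARNING", "Unknown embedded device detected.")  # filtered out
logger.log("ERROR", "a real problem")                       # kept
```

Note that filtering only hides the message; the memory-allocation cap it reports still applies.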

Thanks.
