TensorRT Model building issues. Can TensorRT be updated?

Description

Both the Jetson Nano 2 GB and 4 GB fail to build my custom model. I'm posting my question here because my initial research suggests the issue is with the TensorRT version.

I am currently running YOLOv8/v5 and MMPose on my Jetson with no issues in either the build or inference steps, but my own custom pose classifier fails when trying to build the engine, with the following error:

10: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/Constant_1_output_0...(Unnamed Layer* 105) [Shuffle]]}.)
2: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )

This does not occur on my Windows machine, which has the latest TensorRT and CUDA.

I’ve been trying to fix this issue, and results online have led me to believe that the problem is with the TensorRT version, and that I can’t update it: there are no newer JetPack releases for the Jetson Nano that ship a newer TensorRT.

I would honestly like to avoid ripping parts out of my network to try to fix this; I assume the issue is with one of the preprocessing nodes in my network.

Is there nothing I can do other than remove nodes from my network? No way to update TensorRT or fix this issue?

Environment

TensorRT Version: 8.2
GPU Type: Jetson Nano 2 GB (also tried on the 4 GB)
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System + Version: JetPack 4.6.1
Python Version (if applicable): N/A (Trained with Python 3.9.9)
TensorFlow Version (if applicable): N/A
PyTorch Version (if applicable): Trained with 2.2.1
Baremetal or Container (if container which image + tag): Baremetal

Relevant Files

Export Model

For reproduction, I’ve exported my model to ONNX without training (random weights):

    import torch

    # pose_classifier is the custom pose-classification model (an nn.Module);
    # filename is the output path for the ONNX file.
    # Note: torch.onnx.export returns None; the model is written directly to `filename`.
    torch.onnx.export(
        pose_classifier,
        torch.randn(1, 1, 17, 2, dtype=torch.float32),  # dummy input: 1 x 1 x 17 keypoints x 2 coords
        filename,
        input_names=['input'],
        output_names=['output'],
        export_params=True,
        do_constant_folding=True,
    )
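As a sanity check, here is a minimal sketch (assuming the file was saved as pose_classifier.onnx, a hypothetical name) that validates the exported graph and prints its opset; PyTorch 2.2 exports opset 17 by default, which the TensorRT 8.2 ONNX parser may not fully support:

    # Minimal sanity check for the exported model (the filename is an assumption).
    import onnx

    model = onnx.load("pose_classifier.onnx")
    onnx.checker.check_model(model)  # raises if the graph is malformed
    print([(imp.domain, imp.version) for imp in model.opset_import])  # e.g. [('', 17)]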

Steps To Reproduce

I’m building with the TensorRT C++ API, but the error should occur whenever you try to convert this ONNX file to a TensorRT engine, so you can use trtexec to try building it yourself on a Jetson Nano (see the command below). As stated above, I had no issues building on a Windows machine.
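A minimal trtexec invocation, assuming the exported file is named pose_classifier.onnx and trtexec is in its default JetPack location:

    /usr/src/tensorrt/bin/trtexec --onnx=pose_classifier.onnx --saveEngine=pose_classifier.trt --verbose

The --verbose flag prints the layer-by-layer build log, which shows exactly which node the optimizer fails on.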

Hello @stanleyhonour, please start a new topic for your issue. This will help the original poster get help on his topic.

Thanks,
Tom

Having been a moderator myself for many years, I understand about off-topic comments. Being completely new to LLM design issues, what is the correct topic header? Where should I post my newbie issues? Two problems: 1) in general, a lack of documentation oriented toward new users, i.e., step-by-step getting-started guides; 2) specifically, documentation for creating new dataset training documents: syntax, punctuation, control code words, tags, and details about document formatting style. The trouble is that web sources are inconsistent about the syntax style, and different LLMs are written somewhat differently. I have not looked at the Discord site yet, but I will do that soon.

Just to let you know, I have spent the last 5 months learning JSON coding. In my recent experiments, it appears that the ChatRTX dataset parser can read JSON files converted to txt files (but not in Unicode?); that is, I do not see any error flags. Thanks, all, for adding Llama 13B last May. I am just starting dataset tests with my 3,080-line JSON file.