Hi,
I’m using a Jetson AGX Orin 64 GB with JetPack 6.0 to generate a TensorRT engine to target a DLA core.
My model has dynamic output shapes that are decided at runtime (for example, they depend on the number of people in the image).
I am able to execute the quantized (INT8) model on the GPU, but when targeting the DLA I get the error below:
[valueCloner.cpp::replace::31] Error Code 1: Internal Error (Tensor is not known to this ValueCloner)
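For reference, the DLA build is attempted with a trtexec command roughly like the one below (the model path, calibration cache, and engine name are placeholders, not our actual files):
# INT8 build targeting DLA core 0 (placeholder file names)
$ trtexec --onnx=model.onnx --int8 --calib=calibration.cache --useDLACore=0 --saveEngine=model_dla.engine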
Is there a possible way to fix this issue, or has it been addressed and updated in the latest Jetson package or TensorRT?
Hi,
Have you checked whether the model works with the GPU backend?
If the GPU works fine, could you share the ONNX file with us for checking?
Thanks.
Hello @AastaLLL ,
Yes, the model is running successfully on the hardware using the GPU backend.
Since it is a proprietary model, we are not authorized to share it; however, I can provide output logs or other details as needed.
Let me know if you need any further information.
Hi,
This is a known issue, please check the below topic for more info:
Description
I tried launching the pre-built trtexec tool to generate a TRT engine for my ONNX model (derived from torchvision Faster RCNN). My computing device is a Jetson Orin NX with two DL accelerators. Engine generation and inference were successful at FP16 precision without DLA. However, when DLA is enabled, trtexec outputs the following error during engine generation:
[09/08/2022-12:15:55] [E] Error[1]: [valueCloner.cpp::replace::31] Error Code 1: Internal Error (Tensor is not known to this …
However, due to limited resources, this issue hasn’t been fixed yet.
We have asked our internal team to look into it further and will update you later.
Thanks.
Hi ,
I have uploaded a dummy ONNX model to replicate the error on your side:
onnx_model.zip (36.0 MB)
We are encountering this error during both FP16 and INT8 conversion while targeting DLA.
Please check and provide a solution.
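For reference, a minimal trtexec invocation for the DLA builds we are attempting looks roughly like this (the ONNX file name is a placeholder for the model in the attached zip):
# FP16 build targeting DLA core 0; swap --fp16 for --int8 for the INT8 case
$ trtexec --onnx=model.onnx --fp16 --useDLACore=0 --saveEngine=model_dla_fp16.engine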
Hi,
Thanks for sharing the model.
We tested it with TensorRT 10.7 and it runs on the DLA without error.
Please follow the comment below to upgrade:
Hi,
Could you try it again?
We downloaded TensorRT 10.7 and tested “test_module.onnx”.
It can work successfully on our Orin + JetPack 6.2 environment.
Here are the detailed steps for your reference:
Download the “TensorRT 10.7 GA for L4T and CUDA 12.6 TAR Package” from here.
Install
$ tar -xzvf TensorRT-10.7.0.23.l4t.aarch64-gnu.cuda-12.6.tar.gz
$ export LD_LIBRARY_PATH=${PWD}/TensorRT-10.7.0.23/lib:$LD_LIBRARY_PATH
Verify. Please make sure the TensorRT version is upgraded to t…
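One quick sanity check, assuming the trtexec binary you run is on PATH, is to confirm which libnvinfer the dynamic loader resolves after exporting LD_LIBRARY_PATH (the exact paths in the output will depend on your setup):
# Shows which libnvinfer the trtexec binary will load at runtime
$ ldd $(which trtexec) | grep nvinfer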
Thanks.
Hello @AastaLLL
Thank you for the solution.
We’d like to confirm one thing from your side:
You mentioned you were able to run inference with our model on the DLA, but could you please clarify whether it ran only on the DLA or in hybrid (DLA+GPU) mode?
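For context, by “only on the DLA” we mean a build without GPU fallback; in trtexec terms the distinction is roughly the following (file names are placeholders):
# DLA only: the build fails if any layer cannot run on the DLA
$ trtexec --onnx=model.onnx --fp16 --useDLACore=0
# Hybrid: layers the DLA cannot handle fall back to the GPU
$ trtexec --onnx=model.onnx --fp16 --useDLACore=0 --allowGPUFallback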