Skipping tactic 0x0000000000000000 due to Myelin error: Platform (Cuda) error


The CLRNet ONNX file converts normally on Ubuntu 20.04 with an RTX 3090, but an error occurs when converting it to TensorRT with trtexec on Jetson. Is this a version issue?


Jetson: Orin, L4T r35.1 (Linux)
RTX 3090: x86-64, Ubuntu 20.04 docker
TensorRT Version: Jetson: 8.5.0, RTX 3090: 8.5.1
NVIDIA Driver Version: RTX 3090: 470.103.01
CUDA Version: Jetson: cuda-11.8, RTX 3090: cuda-11.3
cuDNN Version: Jetson: 8.5.0, RTX 3090: 8.6.0
Operating System + Version: Ubuntu 20.04.5
Python Version (if applicable): 3.8
PyTorch Version (if applicable): Jetson: torch 1.12.0+cu114 (built from source), RTX 3090: torch 1.12.0+cu113 (installed from wheel)
Baremetal or Container (if container, which image + tag):
Jetson:
RTX 3090: nvidia/cuda:11.4.2-cudnn8-runtime-ubuntu20.04
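When comparing two environments like this, it helps to dump the installed Python package versions on each machine the same way. Below is a small stdlib-only sketch (the package names checked are my own guesses, e.g. that TensorRT's Python bindings register as `tensorrt`); run it on both the Jetson and the x86 container and diff the output:

```python
from importlib import metadata

# Packages to compare between the Jetson and x86 environments.
# These names are assumptions; adjust to whatever is installed.
PACKAGES = ["tensorrt", "onnx", "torch", "polygraphy"]

def report(packages):
    """Return {name: version-or-'not installed'} for each package."""
    out = {}
    for name in packages:
        try:
            out[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            out[name] = "not installed"
    return out

if __name__ == "__main__":
    for name, ver in report(PACKAGES).items():
        print(f"{name}: {ver}")
```

`importlib.metadata` is available from Python 3.8, which matches the environments above.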

Relevant Files

reference github :

netron svg :

onnx file :

Steps To Reproduce

polygraphy surgeon sanitize agri_r18.onnx --fold-constants --output output/agri_r18.onnx.poly --no-onnxruntime-shape-inference
/usr/src/tensorrt/bin/trtexec --onnx=output/agri_r18.onnx.poly.jetson --saveEngine=output/agri_r18.engine.jetson --verbose > ./trtexec_log.txt

Hi,
We recommend checking the supported features at the link below.

You can refer to the link below for the full list of supported operators.
For unsupported operators, you need to create a custom plugin to support the operation.



It doesn’t seem to be an ONNX-TensorRT operator support issue, since the same model converts fine on the RTX 3090 server.

jetson trtexec log :
[11/22/2022-11:04:10] [V] [TRT] *************** Autotuning format combination: Float(12288,64,1,1), Float(12288,64,1) → Float(14976,78,1), Float(13824,72,1) ***************
[11/22/2022-11:04:10] [V] [TRT] --------------- Timing Runner: {ForeignNode[onnx::Slice_336…Add_659]} (Myelin)
[11/22/2022-11:04:11] [W] [TRT] Skipping tactic 0x0000000000000000 due to Myelin error: Platform (Cuda) error
[11/22/2022-11:04:11] [V] [TRT] Fastest Tactic: 0xd15ea5edd15ea5ed Time: inf
[11/22/2022-11:04:11] [V] [TRT] Deleting timing cache: 516 entries, served 967 hits since creation.
[11/22/2022-11:04:11] [E] Error[10]: [optimizer.cpp::computeCosts::3679] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[onnx::Slice_336…Add_659]}.)
[11/22/2022-11:04:11] [E] Error[2]: [builder.cpp::buildSerializedNetwork::675] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[11/22/2022-11:04:11] [E] Engine could not be created from network
[11/22/2022-11:04:11] [E] Building engine failed
[11/22/2022-11:04:12] [E] Failed to create engine from model or file.
[11/22/2022-11:04:12] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8500] # trtexec --onnx=output/agri_r18.onnx.poly.jetson --saveEngine=output/agri_r18.engine.jetson --verbose
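Not part of the original thread, but to isolate which fused subgraph fails in a long verbose trtexec log, a small stdlib Python sketch like the one below can help. The file name and regexes are my own assumptions based on the log format shown above:

```python
import re

# Sample lines copied from the verbose trtexec log in this thread.
LOG = """\
[11/22/2022-11:04:10] [V] [TRT] --------------- Timing Runner: {ForeignNode[onnx::Slice_336…Add_659]} (Myelin)
[11/22/2022-11:04:11] [W] [TRT] Skipping tactic 0x0000000000000000 due to Myelin error: Platform (Cuda) error
[11/22/2022-11:04:11] [E] Error[10]: Internal Error (Could not find any implementation for node {ForeignNode[onnx::Slice_336…Add_659]}.)
"""

# Regexes are assumptions inferred from the log lines above.
NODE_RE = re.compile(r"ForeignNode\[([^\]]+)\]")
MYELIN_RE = re.compile(r"Skipping tactic (0x[0-9a-fA-F]+) due to Myelin error: (.+)")

def summarize(log: str):
    """Collect failing node spans and Myelin tactic errors from a trtexec log."""
    nodes = sorted({m.group(1) for line in log.splitlines()
                    for m in NODE_RE.finditer(line)})
    errors = [m.groups() for line in log.splitlines()
              if (m := MYELIN_RE.search(line))]
    return nodes, errors

if __name__ == "__main__":
    nodes, errors = summarize(LOG)
    print("failing node spans:", nodes)
    print("myelin errors:", errors)
```

Running it against the full `trtexec_log.txt` (read the file into `LOG`) narrows the failure down to the `onnx::Slice_336…Add_659` Myelin subgraph before filing a report.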

onnx::Slice is a supported operator.


Could you please provide access to the ONNX model so we can try it on our side and debug further?

Thank you.

OK, I will share the ONNX file.


We could not reproduce the error on the latest TensorRT version, 8.5.1.

[12/06/2022-06:16:31] [I]
&&&& PASSED TensorRT.trtexec [TensorRT v8501] # trtexec --onnx=agri_r18.onnx --verbose --workspace=20000

Please try the latest TensorRT version. It is also available as an NGC container.

Thank you.

As described in the test environment in the first post, conversion runs normally without errors in the NGC container on x86; the error only occurs on the Jetson Orin.
But on Jetson Orin, the latest available TensorRT version is 8.5.0.
Which L4T image in the NGC catalog supports 8.5.1?

How can I get TensorRT 8.5.1 for Jetson Orin?
Should I build TensorRT-OSS, or which container should I use?


We are moving this post to the Jetson Orin NX forum to get better help.

Thank you.


We don’t have a GA TensorRT 8.5 package for Orin right now.
If you just want to verify the model with TensorRT, the DP JetPack release below can be used.
(Native setup only; a container for TensorRT 8.5 is not available either.)

For production, please wait for the upcoming JetPack 5.1 release to get TensorRT 8.5.


I already tested with the 22.08 Jetson CUDA-X AI Developer Preview.
That preview had the same problem because its TensorRT version was 8.5.0.
Will the next JetPack release include TensorRT 8.5.1?
Or how else can I get TensorRT 8.5.1 for Jetson Orin?

When is the release date for the new JetPack version?


Please wait for our next JetPack release.
You can check the release roadmap below:



Thanks for the reply.
I guess I’ll have to wait until at least the end of January.
I hope the server and Jetson library versions will match up soon.


Could you share the model with us as well?
We can first check whether your model works on the next JetPack.


Here is the link to my CLRNet weight file:


Do you have the model in ONNX format (e.g., agri_r18.onnx)?


Here is the link to my CLRNet ONNX file. I created it from the last_49.pth file on the RTX 3090 server.