I'm stuck when using jetson-inference on Xavier NX 16GB

Hello, @dusty_nv

./detectnet-console.py images/airplane_0.jpg output_2.jpg

[TRT]    Local timing cache in use. Profiling results in this builder pass will not be stored.
[TRT]    Constructing optimization profile number 0 [1/1].
[TRT]    Reserving memory for activation tensors. Host: 0 bytes Device: 1082804 bytes
[TRT]    =============== Computing reformatting costs
[TRT]    *************** Autotuning Reformat: Float(270000,90000,300,1) -> Float(270000,1,900,3) ***************
[TRT]    --------------- Timing Runner: Optimizer Reformat(Input -> <out>) (Reformat)
[TRT]    2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12660},)
[TRT]    device GPU, failed to build CUDA engine
[TRT]    device GPU, failed to load networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT]    detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
  File "./detectnet-console.py", line 54, in <module>
    net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)
Exception: jetson.inference -- detectNet failed to load network

manager@manager-desktop:~/coding/jetson-inference/build/aarch64/bin$ ./depthnet "images/room_*.jpg" images/test/_depth_room_%i.jpg

[TRT]    Local timing cache in use. Profiling results in this builder pass will not be stored.
[TRT]    Constructing optimization profile number 0 [1/1].
[TRT]    Reserving memory for activation tensors. Host: 0 bytes Device: 802816 bytes
[TRT]    =============== Computing reformatting costs
[TRT]    *************** Autotuning Reformat: Float(150528,50176,224,1) -> Float(150528,1,672,3) ***************
[TRT]    --------------- Timing Runner: Optimizer Reformat(input_0 -> <out>) (Reformat)
[TRT]    2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12660},)
[TRT]    device GPU, failed to build CUDA engine
[TRT]    device GPU, failed to load networks/MonoDepth-FCN-Mobilenet/monodepth_fcn_mobilenet.onnx
depthNet -- failed to initialize.
depthnet:   failed to initialize depthNet
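
For reference, this is how I check which release the board is running, in case that matters for this error (a minimal sketch: it assumes the stock L4T image, where /etc/nv_tegra_release exists, and the TensorRT Python bindings that ship with JetPack):

# Print the L4T release string and the TensorRT version on the board.
# Assumes /etc/nv_tegra_release is present (standard on L4T) and that
# the JetPack-provided tensorrt Python module is importable.
import tensorrt as trt

with open("/etc/nv_tegra_release") as f:
    print(f.read().strip())          # L4T release/revision line
print("TensorRT:", trt.__version__)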

Could you tell me how to resolve these errors?

Thank you.

Hi,

There is an issue when using TensorRT on the Xavier NX 16GB platform.
We are actively working on this internally.

We will share more information with you later.
Thanks.

Hi,

Thanks for your patience.

The fix for TensorRT on the Xavier NX 16GB board is available in the latest JetPack 5.0.1.
Please upgrade your environment and try again.
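
After upgrading, you can confirm the engine builds with a short test like the sketch below (it assumes the default jetson-inference install, so the jetson.inference / jetson.utils Python modules and the sample image path are available):

# Minimal post-upgrade check: build/load SSD-Mobilenet-v2 and run one detection.
# Module names and the image path assume the standard jetson-inference setup.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
img = jetson.utils.loadImage("images/airplane_0.jpg")
detections = net.Detect(img)
print("detected {:d} objects".format(len(detections)))

If the engine builds without the checkMemLimit assertion, the depthnet sample should load its network the same way.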

Thanks.
