Hello, I'm working on a Xavier NX module.
I get an error when I run the DeepStream sample code.
The system was freshly installed with SDK Manager.
Every sample in DeepStream fails with the same error.
I also tried reinstalling the system with JetPack 4.5.1, but that fails during the install step in SDK Manager.
I can't make any progress because of this error.
Please advise.
Platform
HW: Xavier NX module & custom carrier board
JetPack 4.6.1
DeepStream 6.0.1
Command
deepstream-app -c source1_csi_dec_infer_resnet_int8.txt
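A quick sanity check along these lines can confirm that the sample model files are present and that there is free disk space for the engine build (paths taken from the error log below; adjust them if your install differs):
$ ls -lh /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/   # caffemodel, prototxt, labels, INT8 calibration file
$ df -h /opt/nvidia/deepstream                                                    # serializing the engine also needs some free disk space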
The error output is below:
(gst-plugin-scanner:29367): GStreamer-WARNING **: 17:27:30.572: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:29367): GStreamer-WARNING **: 17:27:30.716: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine open error
0:00:06.027952602 29366 0x20e16870 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed
0:00:06.057529459 29366 0x20e16870 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed, try rebuild
0:00:06.057635727 29366 0x20e16870 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: 2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12653},)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:09.948913595 29366 0x20e16870 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:09.974432385 29366 0x20e16870 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:09.974562716 29366 0x20e16870 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:09.974657624 29366 0x20e16870 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:09.974729814 29366 0x20e16870 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:707>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
Amycao
April 29, 2022, 2:11am
ERROR: [TRT]: 2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12653},)
HW : xavier NX module & custom carrier board
I see you are using a custom carrier board, so this looks like a platform issue. Do you have an NVIDIA Xavier NX carrier board to try? Also, when you hit this issue, how much memory is available?
sudo tegrastats
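To capture readings while the app is running, tegrastats can also log to a file; roughly like this, assuming your tegrastats build supports the --interval, --logfile, and --stop options (they are present on recent JetPack releases):
$ sudo tegrastats --interval 1000 --logfile /tmp/tegrastats.log   # sample once per second into a log file
$ sudo tegrastats --stop                                          # stop logging after the run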
I use this carrier board: FWS101-BB(NEW) > Products | FlexWATCH
Since I don't have another carrier board, I can't try anything else.
If the memory you mentioned means RAM, there is enough to run the app.
The eMMC storage is very small, though; only about 300 MB is left.
However, I get the same result on a device that has enough space because it has an SD card mounted.
I hope the information below helps you solve the problem.
Thanks.
The tegrastats output at idle is below:
RAM 1919/15817MB (lfb 3139x4MB) SWAP 0/7909MB (cached 0MB) CPU [1%@1190,0%@1190,0%@1190,0%@1190,0%@1190,0%@1190] EMC_FREQ 0%@1866 GR3D_FREQ 0%@114 APE 150 MTS fg 0% bg 0% AO@44.5C GPU@45.5C PMIC@50C AUX@44.5C CPU@47.5C thermal@45.7C VDD_IN 3995/3995 VDD_CPU_GPU_CV 313/313 VDD_SOC 1488/1488
The tegrastats output while the app is running is below:
RAM 3382/15817MB (lfb 2732x4MB) SWAP 0/7909MB (cached 0MB) CPU [14%@1420,25%@1225,0%@1359,0%@1420,0%@1419,0%@1420] EMC_FREQ 2%@1866 GR3D_FREQ 0%@204 APE 150 MTS fg 0% bg 7% AO@40.5C GPU@41.5C PMIC@50C AUX@41.5C CPU@44C thermal@42.05C VDD_IN 4739/5052 VDD_CPU_GPU_CV 783/986 VDD_SOC 1681/1700
Hello, @Amycao
I found the following while searching for the message 'Unknown embedded device detected'.
The GitHub issue below reports a TensorRT problem, and it says the fix will land in the next release.
Could you please take a look at it?
Issue opened 09:28AM, 18 Mar 22 UTC (labels: bug, Platform: Jetson, triaged)
## Description
Hi, I tried to convert ONNX to TRT on a Jetson NX (JetPack 4.6, … TRT 8.2.1, CUDA 10.2) but got an Internal Error. I googled but cannot find any clue about this error message.
FYI, this ONNX model can be successfully converted to TRT on my Jetson Nano (JetPack 4.5, TRT 7.1.3, CUDA 10.2) and on a Windows PC (TRT 8.2.1, CUDA 11.0).
```
trt version 8.2.1.8
[03/18/2022-16:54:16] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[03/18/2022-16:54:16] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[03/18/2022-16:54:18] [TRT] [E] 2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12653},)
Traceback (most recent call last):
File "tools/export_trt.py", line 77, in <module>
f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'
```
## Environment
**TensorRT Version**: 8.2.1.8
**NVIDIA GPU**: Jetson NX (jetpack 4.6)
**NVIDIA Driver Version**:
**CUDA Version**: 10.2
**CUDNN Version**: 8.2.1
**Operating System**:
**Python Version (if applicable)**:
**Tensorflow Version (if applicable)**:
**PyTorch Version (if applicable)**:
**Baremetal or Container (if so, version)**:
Thank you.
Hello,
Do you have any advice now that you have checked the available memory?
Thank you.
It's a TRT issue.
But we have never run into it on the NVIDIA NX board (NVIDIA module and NVIDIA carrier board), which has this memory configuration:
RAM 1553/6856MB
Your NX memory configuration is not in the TRT memory-check list for Jetson. As noted in the GitHub issue you posted, the fix will be available in the next JetPack release.
Hello, @Amycao
How can I set the memory configuration to exactly match the devkit carrier board?
Thank you.
Hello, @Amycao @WayneWWW
RAM 1919/15817MB (lfb 3139x4MB) SWAP 0/7909MB (cached 0MB) CPU [1%@1190,0%@1190,0%@1190,0%@1190,0%@1190,0%@1190] EMC_FREQ 0%@1866 GR3D_FREQ 0%@114 APE 150 MTS fg 0% bg 0% AO@44.5C GPU@45.5C PMIC@50C AUX@44.5C CPU@47.5C thermal@45.7C VDD_IN 3995/3995 VDD_CPU_GPU_CV 313/313 VDD_SOC 1488/1488
This is a Jetson Xavier NX 16GB module. Shouldn't it be compared against the same 16GB model in the memory comparison?
The memory configuration you mentioned is: RAM 1553/6856MB.
If it is still a memory configuration problem,
how can I set the memory configuration to exactly match the devkit carrier board?
Thank you.
Amycao
May 18, 2022, 7:53am
Sorry for the confusion: there is nothing for you to set, it is the module's memory configuration.
The NX 16GB memory configuration is not in checkMemLimit in the TRT source code.
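Since the device lookup happens inside libnvinfer, it can help to record exactly which TensorRT build is installed and how much memory the module reports; a rough check with standard commands (nothing specific to this bug) would be:
$ dpkg -l | grep -E 'nvinfer|tensorrt'            # installed TensorRT packages and version
$ ls -l /usr/lib/aarch64-linux-gnu/libnvinfer.so*  # which libnvinfer the loader will pick up
$ free -m                                          # total RAM on the module (about 16 GB here)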
Hello, @Amycao
So it is a TensorRT problem, not a custom carrier board problem, and it should be fixed in the next release?
Thank you.
Hello, @Amycao
Is there any temporary workaround so I can continue development with the DeepStream SDK until the next release?
Thank you.
Amycao
May 23, 2022, 10:03am
Please try the libraries at https://pan.baidu.com/s/1cYmI25wi_C69dQbBasJnaA
access code: rtxw
$ sudo cp </path/to/libnvinfer.so.8.2.1> /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.2.1
$ sudo cp </path/to/libnvinfer_builder_resource.so.8.2.1> /usr/lib/aarch64-linux-gnu/libnvinfer_builder_resource.so.8.2.1
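After replacing the libraries, a reasonable follow-up is to refresh the loader cache, confirm the copied files are the ones in place, and rerun the sample so nvinfer rebuilds the engine; roughly:
$ sudo ldconfig                                               # refresh the shared-library cache after replacing libraries
$ ls -l /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.2.1        # verify the replacement library is in place
$ deepstream-app -c source1_csi_dec_infer_resnet_int8.txt     # the INT8 engine should now build and be cached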
Hello, @Amycao
When using a Xavier NX 16GB module on the NVIDIA Xavier NX 8GB devkit carrier board, is there any problem running the DeepStream examples?
When you said there were no problems with the NVIDIA NX board, the memory showed 6856 MB, so I don't think you tested the 16GB model; that is why I am asking again.
Thank you.
Hello, @Amycao
Amycao: "next jetpack release"
Is the next JetPack release the 5.0.1 Developer Preview, or do I have to wait for a later version?
If this is a known issue, is there an official note in the documentation provided by NVIDIA? If so, can you give me the link?
Thank you.
Amycao
May 30, 2022, 2:56am
The library in the link is built from the TensorRT source code for the NX 16GB, as a temporary workaround.
Amycao
May 30, 2022, 3:23am
JetPack 5.0.1 DP includes this change; you can upgrade to that version.
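To confirm which release is actually installed after upgrading, standard checks on any Jetson are:
$ cat /etc/nv_tegra_release        # L4T release and revision
$ dpkg -l | grep nvidia-jetpack    # JetPack meta-package version, if it was installed via apt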
system
Closed
June 13, 2022, 3:24am
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.