I can't run deepstream-lidar-inference-app on Jetson Orin. It reports an error!

I can't run deepstream-lidar-inference-app on my Jetson Orin; it reports an error. However, inference works when I run the image_client that comes with tritonserver.
tritonserver runs locally on the Jetson Orin.
My environment is:

deepstream-app version 6.2.0
DeepStreamSDK 6.2.0
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.4
TensorRT Version: 8.4
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3

The model I use is the one generated by build_engine.sh in deepstream-lidar-inference-app.
The error reported is:

I0922 01:46:54.894148 13666 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x204c80000' with size 268435456
I0922 01:46:54.894532 13666 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0922 01:46:54.921109 13666 model_lifecycle.cc:459] loading: pointpillars:1
I0922 01:46:55.000913 13666 tensorrt.cc:64] TRITONBACKEND_Initialize: tensorrt
I0922 01:46:55.000978 13666 tensorrt.cc:74] Triton TRITONBACKEND API version: 1.11
I0922 01:46:55.001000 13666 tensorrt.cc:80] 'tensorrt' TRITONBACKEND API version: 1.11
I0922 01:46:55.001011 13666 tensorrt.cc:104] backend configuration:
{"cmdline":{"auto-complete-config":"true","min-compute-capability":"5.300000","backend-directory":"/opt/tritonserver/backends","default-max-batch-size":"4"}}
I0922 01:46:55.002111 13666 tensorrt.cc:211] TRITONBACKEND_ModelInitialize: pointpillars (version 1)
I0922 01:46:55.712098 13666 logging.cc:49] [MemUsageChange] Init CUDA: CPU +213, GPU +0, now: CPU 242, GPU 5795 (MiB)
I0922 01:46:55.918412 13666 logging.cc:49] Loaded engine size: 5 MiB
W0922 01:46:55.923472 13666 logging.cc:46] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
I0922 01:46:57.800932 13666 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +534, GPU +822, now: CPU 808, GPU 6652 (MiB)
I0922 01:46:58.062737 13666 logging.cc:49] [MemUsageChange] Init cuDNN: CPU +86, GPU +143, now: CPU 894, GPU 6795 (MiB)
I0922 01:46:58.067462 13666 logging.cc:49] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +5, now: CPU 0, GPU 5 (MiB)
W0922 01:46:58.067570 13666 model_state.cc:520] The specified dimensions in model config for pointpillars hints that batching is unavailable
I0922 01:46:58.070762 13666 tensorrt.cc:260] TRITONBACKEND_ModelInstanceInitialize: pointpillars_0 (GPU device 0)
I0922 01:46:58.073305 13666 logging.cc:49] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 881, GPU 6795 (MiB)
I0922 01:46:58.076809 13666 logging.cc:49] Loaded engine size: 5 MiB
W0922 01:46:58.077075 13666 logging.cc:46] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
I0922 01:46:58.084250 13666 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 894, GPU 6795 (MiB)
I0922 01:46:58.085541 13666 logging.cc:49] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 894, GPU 6795 (MiB)
I0922 01:46:58.088111 13666 logging.cc:49] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +5, now: CPU 0, GPU 5 (MiB)
I0922 01:46:58.091079 13666 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +1, GPU +4, now: CPU 883, GPU 6799 (MiB)
I0922 01:46:58.092560 13666 logging.cc:49] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 883, GPU 6799 (MiB)
I0922 01:46:58.436340 13666 logging.cc:49] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +403, now: CPU 0, GPU 408 (MiB)
Segmentation fault (core dumped)

Could you use the gdb tool to do a preliminary analysis of the crash's stack information?

How should I use the gdb tool? I haven't used it before; it seems a bit difficult.
I noticed that the lidar app's help documentation mentions: "Jetson does not support to start Tritonserver locally, please start Tritonserver on DGPU." But I seem to be able to run Tritonserver locally for other models.

Did you modify the code or configuration files? Can the app run successfully without any modification? What is your start command line?

The configuration files that come with the SDK have not been modified. When I use Triton for a classification task it works, but the lidar task reports the above error.
Does Triton server support lidar detection when running locally on Jetson?

Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.

Are you using engine files generated on the running machine?
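If not, that would explain the warning above. You can rebuild the engine on the Orin itself, for example with TensorRT's trtexec, as an alternative to the sample's tao-converter flow, if you have an ONNX version of the model (a minimal sketch; the file names here are assumptions, substitute the ones from your model repository):

# Rebuild the TensorRT engine on the target device itself
/usr/src/tensorrt/bin/trtexec --onnx=pointpillars.onnx --saveEngine=pointpillars.engine --fp16

An engine plan file is only valid for the TensorRT version and GPU it was serialized on, so it should always be regenerated on the machine that will run it.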

Yes, I did. I ran build_engine.sh, which calls tao-converter to generate the engine model automatically. I start tritonserver with:

./tritonserver --model-repository=/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-lidar-inference-app/tritonserver/models

Then I run:
sudo ./deepstream-lidar-inference-app -c configs/config_lidar_source_triton_render.yaml
It reports:

/opt/nvidia/deepstream/deepstream-6.2/sources/libs/ds3d/gst/custom_lib_factory.h:58, INFO: Library Opened Successfully
/opt/nvidia/deepstream/deepstream-6.2/sources/libs/ds3d/gst/custom_lib_factory.h:68, INFO: Custom Context created from createLidarFileLoader
INFO: LidarFileSource dataloader is starting
/opt/nvidia/deepstream/deepstream-6.2/sources/libs/ds3d/gst/custom_lib_factory.h:58, INFO: Library Opened Successfully
/opt/nvidia/deepstream/deepstream-6.2/sources/libs/ds3d/gst/custom_lib_factory.h:68, INFO: Custom Context created from createLidarDataRender
INFO: gl3d pointcloud datarender is starting
INFO: gl3d datarender is starting
INFO: Library Opened Successfully
INFO: Custom Context created from createLidarInferenceFilter
INFO: lidarinference datafilter is starting
INFO: modelInputs name:points, dataType:FP32, ndataType:0, numDims:3, numElements:819200
INFO: modelInputs name:num_points, dataType:INT32, ndataType:3, numDims:1, numElements:1
INFO: customPreprocessLibPath:/opt/nvidia/deepstream/deepstream/lib/libnvds_lidar_custom_preprocess_impl.so
INFO: memPoolSize: 2
INFO: gpuid: 0
INFO: filterInputDatamapKey: DS3D::LidarXYZI
INFO: inputTensorMemType: 1
INFO: customPreprocessLibHandle: 0xaaaad9369420
INFO: customPreprocessFuncName: CreateInferServerCustomPreprocess
INFO: nvinferserverCfg: triton_mode_CAPI.txt
INFO: configPath: configs/config_lidar_source_triton_render.yaml
INFO: new nvinferserverCfg: configs/triton_mode_CAPI.txt
INFO: get preprocess callback suc
I0926 08:56:51.074733 191787 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x204c80000' with size 67108864
I0926 08:56:51.075033 191787 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0926 08:56:51.079534 191787 server.cc:563]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0926 08:56:51.079590 191787 server.cc:590]
+---------+------+--------+
| Backend | Path | Config |
+---------+------+--------+
+---------+------+--------+

I0926 08:56:51.079616 191787 server.cc:633]
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+

I0926 08:56:51.079748 191787 tritonserver.cc:2264]
+----------------------------------+------------------------------------------+
| Option                           | Value                                    |
+----------------------------------+------------------------------------------+
| server_id                        | triton                                   |
| server_version                   | 2.30.0                                   |
| server_extensions                | classification sequence model_repository |
|                                  |  model_repository(unload_dependents) sch |
|                                  | edule_policy model_configuration system_ |
|                                  | shared_memory cuda_shared_memory binary_ |
|                                  | tensor_data statistics trace logging     |
| model_repository_path[0]         | /opt/nvidia/deepstream/deepstream-6.2/so |
|                                  | urces/apps/sample_apps/deepstream-lidar- |
|                                  | inference-app/tritonserver/models        |
| model_control_mode               | MODE_EXPLICIT                            |
| strict_model_config              | 1                                        |
| rate_limit                       | OFF                                      |
| pinned_memory_pool_byte_size     | 67108864                                 |
| cuda_memory_pool_byte_size{0}    | 67108864                                 |
| response_cache_byte_size         | 0                                        |
| min_supported_compute_capability | 5.3                                      |
| strict_readiness                 | 1                                        |
| exit_timeout                     | 30                                       |
+----------------------------------+------------------------------------------+

I0926 08:56:51.081677 191787 model_lifecycle.cc:459] loading: pointpillars:1
I0926 08:56:51.139639 191787 tensorrt.cc:64] TRITONBACKEND_Initialize: tensorrt
I0926 08:56:51.139721 191787 tensorrt.cc:74] Triton TRITONBACKEND API version: 1.11
I0926 08:56:51.139740 191787 tensorrt.cc:80] 'tensorrt' TRITONBACKEND API version: 1.11
I0926 08:56:51.139753 191787 tensorrt.cc:104] backend configuration:
{"cmdline":{"auto-complete-config":"false","min-compute-capability":"5.300000","backend-directory":"/opt/nvidia/deepstream/deepstream-6.2/lib/triton_backends","default-max-batch-size":"4"}}
I0926 08:56:51.140407 191787 tensorrt.cc:211] TRITONBACKEND_ModelInitialize: pointpillars (version 1)
I0926 08:56:51.141890 191787 tensorrt.cc:260] TRITONBACKEND_ModelInstanceInitialize: pointpillars_0 (GPU device 0)
I0926 08:56:51.596598 191787 logging.cc:49] [MemUsageChange] Init CUDA: CPU +214, GPU +0, now: CPU 318, GPU 8566 (MiB)
I0926 08:56:51.730129 191787 logging.cc:49] Loaded engine size: 5 MiB
W0926 08:56:51.732242 191787 logging.cc:46] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
I0926 08:56:52.864609 191787 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +534, GPU +504, now: CPU 884, GPU 9100 (MiB)
I0926 08:56:53.055426 191787 logging.cc:49] [MemUsageChange] Init cuDNN: CPU +86, GPU +83, now: CPU 970, GPU 9183 (MiB)
I0926 08:56:53.058223 191787 logging.cc:49] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +5, now: CPU 0, GPU 5 (MiB)
I0926 08:56:53.059397 191787 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 958, GPU 9183 (MiB)
I0926 08:56:53.061334 191787 logging.cc:49] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 958, GPU 9183 (MiB)
I0926 08:56:53.157798 191787 logging.cc:49] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +403, now: CPU 0, GPU 408 (MiB)

This log does not show any crash. Does the app run OK?

A black window pops up and then disappears. No detected lidar targets are displayed.
The last line of the output is: Segmentation fault (core dumped)

There is no update from you for a period, so we are assuming this is not an issue anymore. Hence we are closing this topic. If you need further support, please open a new one. Thanks

You can refer to the linked gdb topic for how to print the stack of the crash.
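For example (a minimal sketch; adjust the config path to match your setup):

sudo gdb --args ./deepstream-lidar-inference-app -c configs/config_lidar_source_triton_render.yaml
(gdb) run
# ... wait for the segmentation fault, then:
(gdb) bt

The bt (backtrace) output shows the call stack at the point of the crash, which helps identify which library the segmentation fault occurred in.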

Could you share your whole configuration step by step for this demo?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.