RTX LiDAR: CUDA Memory Error when configuring many ranges

Hi everyone,

I observe undesired behavior when trying to define separate ranges in the RTX LiDAR config JSON file.
The syntax in the config is like this:

"rangeCount": 6,
"ranges": [
            {
                "min": 0.5,
                "max": 40.0
            },
            more ranges
        ]

I map these ranges to the scanning directions of my solid-state LiDAR.
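
For reference, this is a sketch of how the complete ranges section looks in my minimal config (the values for the entries beyond the first are illustrative; in my minimal example all six ranges use the same min/max pair, one per scanning direction):

```json
{
    "rangeCount": 6,
    "ranges": [
        { "min": 0.5, "max": 40.0 },
        { "min": 0.5, "max": 40.0 },
        { "min": 0.5, "max": 40.0 },
        { "min": 0.5, "max": 40.0 },
        { "min": 0.5, "max": 40.0 },
        { "min": 0.5, "max": 40.0 }
    ]
}
```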

The minimal (not-)working example consists of a plane mesh at a distance of 100 m from the RTX LiDAR, the latter placed at the /World origin. The minimal sensor has exactly one scanning direction for each defined range.

Increasing rangeCount and adding more entries to the ranges list leads to different behaviors in IsaacSim on my system:

  1. "rangeCount": 6 → No problem; the LiDAR detects surfaces, and they can be visualized in rviz2.
  2. "rangeCount": 9 → The LiDAR no longer detects surfaces; at least, the point clouds published by the ROS2 bridge do not contain any points.
  3. "rangeCount": 12 → IsaacSim crashes instantly upon clicking the "play" button to start the simulation.

The crash comes with the error:

2024-04-12 12:57:29 [44,632ms] [Error] [omni.sensors.nv.lidar.lidar_core.plugin] CUDA error 2 in ../../../include/omni/sensors/cuda/CudaHelperMem.h:51:out of memory (cudaMallocHost(ptr, numElems * sizeof(T)))

I have attached the minimal-example files: one USDA file per case (1, 2, and 3), each with a matching config JSON file.

My system spec:

  • Ubuntu 20.04 LTS
  • IsaacSim 2023.1.1-rc.8+2023.1.688.573e0291.tc.linux-x86_64.release
  • ROS2 Foxy
  • Virtual Machine on an Nvidia OVX
Fri Apr 12 15:31:17 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.154.05             Driver Version: 535.154.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A40-48Q                 On  | 00000000:02:00.0 Off |                  Off |
| N/A   N/A    P8              N/A /  N/A |      0MiB / 49152MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+


The expected and desired behavior is that the user can specify an arbitrary number of ranges, so that a min/max range pair can be assigned to each scanning direction of the scanning pattern.

Thanks for your help,

Markus
minimum_working_example.zip (10.3 KB)