Jetson unable to handle multiple RealSense camera streams

Hello everyone,

I am using the Jetson Nano developer kit to run computer vision on an autonomous robotic platform. I have multiple RealSense cameras (one on each side of the platform) and am running a segmentation model. The project runs successfully on my laptop (though it doesn’t have a GPU), but fails on the Jetson. It works with a single camera, but when I send a UDP request for an image from a second camera attached to the Jetson, it fails with a memory access error.

I have taken screenshots of the GPU stats (which show high memory usage), and the error is pasted below. It would be great if I could get suggestions on hardware/memory optimizations that would let this run smoothly in a multi-camera setup on the Jetson.

Error:

Receiving images from camera 11
2
2025-08-21 11:03:37.602821438 [E:onnxruntime:Default, cuda_call.cc:123 CudaCall] CUDA failure 700: an illegal memory access was encountered ; GPU=0 ; hostname=metazet-desktop ; file=/opt/onnxruntime/onnxruntime/core/providers/cuda/gpu_data_transfer.cc ; line=65 ; expr=cudaMemcpyAsync(dst_data, src_data, bytes, cudaMemcpyHostToDevice, static_cast<cudaStream_t>(stream.GetHandle()));
2025-08-21 11:03:37.603099485 [E:onnxruntime:Default, cuda_call.cc:123 CudaCall] CUDA failure 700: an illegal memory access was encountered ; GPU=0 ; hostname=metazet-desktop ; file=/opt/onnxruntime/onnxruntime/core/providers/cuda/cuda_execution_provider.cc ; line=446 ; expr=cudaStreamSynchronize(static_cast<cudaStream_t>(stream_));
Traceback (most recent call last):
File "/home/metazet/path_detection_jetson/software4metazet/run_segmentation_BART.py", line 1592, in <module>
main()
File "/home/metazet/path_detection_jetson/software4metazet/run_segmentation_BART.py", line 1584, in main
model.run_deploy()
File "/home/metazet/path_detection_jetson/software4metazet/run_segmentation_BART.py", line 1196, in run_deploy
interface.handle_udp_requests()
File "/home/metazet/path_detection_jetson/software4metazet/communication.py", line 173, in handle_udp_requests
result = self.run_row_detection_callback(actual_cam_id)
File "/home/metazet/path_detection_jetson/software4metazet/run_segmentation_BART.py", line 209, in run_row_detection
result = self.run_inference(bgr_img, depth_img, key)
File "/home/metazet/path_detection_jetson/software4metazet/run_segmentation_BART.py", line 313, in run_inference
self.run_model()
File "/home/metazet/path_detection_jetson/software4metazet/run_segmentation_BART.py", line 373, in run_model
output = self.inferencer.process_image(self.bgr_img)
File "/home/metazet/path_detection_jetson/software4metazet/smp_default_model_run.py", line 66, in process_image
result = self.ort_session.run(
File "/home/metazet/.local/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 273, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : CUDA failure 700: an illegal memory access was encountered ; GPU=0 ; hostname=metazet-desktop ; file=/opt/onnxruntime/onnxruntime/core/providers/cuda/gpu_data_transfer.cc ; line=65 ; expr=cudaMemcpyAsync(dst_data, src_data, bytes, cudaMemcpyHostToDevice, static_cast<cudaStream_t>(stream.GetHandle()));
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what(): /opt/onnxruntime/onnxruntime/core/providers/cuda/cuda_call.cc:129 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; SUCCTYPE = cudaError; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /opt/onnxruntime/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; SUCCTYPE = cudaError; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDA failure 700: an illegal memory access was encountered ; GPU=0 ; hostname=metazet-desktop ; file=/opt/onnxruntime/onnxruntime/core/providers/cuda/cuda_allocator.cc ; line=98 ; expr=cudaFreeHost(p);

Aborted (core dumped)

What’s the BSP version? Is it a USB or CSI camera?

It is a USB camera, running Jetson Linux BSP 32.7.1.

head -n 1 /etc/nv_tegra_release

R32 (release), REVISION: 7.1, GCID: 29818004, BOARD: t210ref, EABI: aarch64, DATE: Sat Feb 19 17:05:08 UTC 2022

You might bump up to the latest 36.4.4; it is much more stable than previous releases. So far we have been able to use it out of the box for development.

You’re posting in the Orin Nano forum, but R32.7.1 is incompatible with Orin. I’m guessing that this is actually an original Nano, and not an Orin (nor a Xavier). You will probably need to clarify which Nano model you are using. If this is actually an older Nano (which uses the TX1 module in a small form factor), then someone could move this to the Nano forum here:
https://forums.developer.nvidia.com/c/robotics-edge-computing/jetson-embedded-systems/jetson-nano/76

Based on the screenshot I posted, the model is an NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super - L4T 36.2.2. I don’t understand why the Tegra release does not match the JetPack, because I can print it on the command line: head -n 1 /etc/nv_tegra_release.

It looks like this:

Maybe update the firmware of Realsense to verify.

Everything works when I connect it to the laptop. So does that mean the RealSense firmware is OK?

Just to add some context, I’ll describe what firmware means in a few situations.

Firmware tends to be uploaded into a device. A device tree is firmware, and it loads into the Jetson during boot mainly to describe the carrier board layout. That’s just one example.

Firmware for an external device also loads into that device, and it typically must be reloaded each time the device boots. Firmware can load into the camera based on what the driver is told to load, whether from the Jetson or from the PC, and this changes the actual camera behavior. If the firmware content is available on the PC but not on the Jetson, then the missing firmware would cause a failure only on the Jetson.

The most common and useful understanding is from Wi-Fi (unrelated, but quite good at illustration). Wi-Fi generates radio signals which are regulated differently in different parts of the world. One could manufacture a Wi-Fi adapter specifically for some region of the world, and then manufacture different hardware for another part of the world. Or, it would be easier to manufacture just one kind of hardware and then load it with the software for the part of the world it is shipping to.

There is a similar comparison if one is updating behavior over time: One could build new hardware to replace old hardware if a bug is found. Or, if the bug is controlled in software (firmware in this case), then there could just be a software update.

There is possibly firmware that needs to be uploaded into the camera each time the driver loads. If so, then it might be somewhere in “/lib/firmware/”. This software is normally agnostic of the platform the camera runs on: firmware loading into the camera or other device does not care whether the host computer is one particular operating system or architecture versus another; what has to match is the firmware to the camera or device it loads into, so that the device conforms to some standard. If it is a Wi-Fi device, then it is almost guaranteed there will be such firmware. A complicated camera might also require this.

In your second post you said:

It is a USB camera, running Jetson Linux BSP 32.7.1.

head -n 1 /etc/nv_tegra_release

R32 (release), REVISION: 7.1, GCID: 29818004, BOARD: t210ref, EABI: aarch64, DATE: Sat Feb 19 17:05:08 UTC 2022

It is more or less impossible for this to be an Orin. Orin cannot run R32.x releases. This is most likely the original Nano and not an Orin. It could possibly be a Xavier. However, if firmware is loaded into the camera, then it is unlikely the firmware itself cares whether this is R32.x or R36.x; the driver itself will definitely care. The form factor and external look are quite similar for the original Nano, the Xavier NX, and the Orin Nano.

For your Realsense camera you would need to check the downloads from Realsense for that model and see if it uses external firmware. This would be installed to the Jetson and loaded into the camera each time the camera driver loads. It is very possible for your camera to work on one platform, but not another, if the firmware being loaded by the driver is missing or different.


My segmentation code runs when I send UDP requests to a single RealSense camera. But the moment I add another camera, it shows the illegal memory access error. ChatGPT says that a race condition is causing it, but I have not been able to solve the issue by implementing locks. Has anyone else faced such an issue using multiple cameras?
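One way to test the race-condition theory is to force every camera's request through a single lock around the shared ONNX Runtime session, so no two threads can ever be inside run() at the same time. This is only a sketch: SerializedInferencer and the session/input names are placeholders I made up, not the code from run_segmentation_BART.py.

```python
import threading

class SerializedInferencer:
    """Wrap a shared ONNX Runtime session so inference calls never overlap.

    `session` is assumed to be anything with a .run(output_names, feed)
    method, e.g. an onnxruntime.InferenceSession.
    """

    def __init__(self, session):
        self.session = session
        # One lock shared by every camera/UDP handler thread.
        self._lock = threading.Lock()

    def run(self, input_name, tensor):
        # Only one thread may touch the session (and its GPU buffers)
        # at a time; the rest block here until it finishes.
        with self._lock:
            return self.session.run(None, {input_name: tensor})
```

If the crash disappears with this coarse lock in place, the problem is concurrent access to the session or its input buffers; the lock has to be the same object for all cameras, which is an easy thing to get wrong when each camera handler builds its own.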

I am curious about whether or not the host logs information when the camera is plugged in. On the host PC, would you monitor “dmesg --follow”, and only then plug in the camera? Share whatever log lines are added as a result of plugging in the camera. Then do the same thing on the Jetson (just make sure we know which log goes with which platform).

Also, if you run “lsmod”, what modules in that list are for this camera? If the camera has operated (successfully or not), then a module has probably loaded for it. Are they the same modules for this camera on both the Jetson and the other computer? Whatever module(s) load for the camera, on both platforms, try providing the module arguments like this:

sudo -s
cd /sys/module/<module name>/parameters
egrep -i '.*' *
exit

Note: You can log the output of the egrep command by appending " 2>&1 | tee /home/<username>/Downloads/log_module.txt". If your user login name is “name”, then this would log at:
/home/name/Downloads/log_module.txt
(it is “sudo” so it would be owned by root; you could also chown that log to your user while still in sudo)

The purpose here is to see not only if the same modules load, but if the arguments are also the same. If they differ, then it might be due to firmware present. The arguments passed might or might not provide that clue.
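Once both logs are captured, they can be compared mechanically rather than by eye. This is just a stdlib sketch; the function name and file names are placeholders for whatever you saved with the tee command above.

```python
import difflib

def diff_module_params(jetson_log: str, pc_log: str) -> str:
    """Return a unified diff of two module-parameter dumps.

    Each argument is the path to a log captured on one machine with
    the egrep/tee commands above. An empty result means the module
    parameters are identical on both platforms.
    """
    with open(jetson_log) as a, open(pc_log) as b:
        return "".join(
            difflib.unified_diff(
                a.readlines(),
                b.readlines(),
                fromfile=jetson_log,
                tofile=pc_log,
            )
        )
```

Lines prefixed with “-” exist only on the first machine and lines with “+” only on the second, which makes any differing firmware or driver argument stand out immediately.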

I am able to successfully run multiple cameras now. I found that the pipelines for the different cameras were not being closed properly in the segmentation code. I have now fixed that.
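For anyone who hits the same wall: with pyrealsense2, every rs.pipeline that is start()ed must also be stop()ped, or its buffers stay allocated between requests. A context manager makes the cleanup automatic even when inference raises. This is only a sketch; start()/stop() match pyrealsense2's rs.pipeline API, but DummyPipeline is a stand-in I added so the snippet runs without a camera.

```python
from contextlib import contextmanager

@contextmanager
def open_pipeline(pipeline, config=None):
    """Start a RealSense-style pipeline and guarantee stop() on exit.

    Works with any object exposing start()/stop(), such as a
    pyrealsense2 rs.pipeline.
    """
    if config is not None:
        pipeline.start(config)
    else:
        pipeline.start()
    try:
        yield pipeline
    finally:
        pipeline.stop()  # runs even if the body raises

class DummyPipeline:
    """Stand-in for rs.pipeline so the sketch runs without hardware."""
    def __init__(self):
        self.running = False
    def start(self, config=None):
        self.running = True
    def stop(self):
        self.running = False
```

With real hardware you would pass an rs.pipeline() plus an rs.config() on which enable_device(serial) selects the camera for that side of the platform, one pipeline per camera.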