RealSense D435 with Jetson TX2: depth stops working/freezes after a while

Hi guys!

I'm using a RealSense D435 with my Jetson TX2. After working perfectly for a while, the depth stream always stops working/freezes; the RGB stream is unaffected.

My code runs at around 13 fps while the pipeline is set to 60 fps, and I thought that mismatch was the problem. So I tried setting the pipeline to 6 fps, and my code did indeed run at 6 fps, but the problem remains; it just appears later (after 5 to 10 minutes instead of 1 to 2 minutes).
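(Aside, as a sketch rather than something from the original setup: a timeout guard around wait_for_frames at least turns the freeze into a catchable exception instead of a silent hang. `grab_with_retry` is a made-up helper name; pyrealsense2's `wait_for_frames` accepts a timeout in milliseconds and raises `RuntimeError` when it expires.)

```python
def grab_with_retry(wait_fn, retries=3):
    """Call wait_fn(); librealsense raises RuntimeError on a frame
    timeout, so retry a few times before giving up."""
    last_err = None
    for _ in range(retries):
        try:
            return wait_fn()
        except RuntimeError as err:  # frame timeout (or pipeline error)
            last_err = err
    raise last_err

# With a pipeline it could be used as (5000 ms timeout):
# frames = grab_with_retry(lambda: pipeline.wait_for_frames(5000))
```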

Whoever solves this is brilliant! I'm lost…

Here is my code:

# Libraries imports
import pyrealsense2 as rs  # for realsense camera
import numpy as np
import cv2 as cv

import time  # needed to calculate FPS

class Camera:
    '''
    Class that contains basic Camera management functions without using threading
    '''

    def __init__(self, src=0):
        # Create a pipeline
        self.pipeline = rs.pipeline()

        # Create a config and configure the pipeline to stream
        #  different resolutions of color and depth streams
        self.config = rs.config()
        self.config.enable_stream(rs.stream.depth, 640, 360, rs.format.z16,
                                  60)  # other possible values for the fps are 6, 15, ...
        self.config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8,
                                  60)  # other possible values for the fps are 6, 15, ...

        # Start streaming
        self.profile = self.pipeline.start(self.config)

        # Getting the depth sensor's depth scale:
        self.depth_sensor = self.profile.get_device().first_depth_sensor()
        self.depth_scale = self.depth_sensor.get_depth_scale()
        print("Depth Scale is: ", self.depth_scale)

        # Create an align object
        # rs.align allows us to perform alignment of depth frames to others frames
        # The "align_to" is the stream type to which we plan to align depth frames.
        self.align_to = rs.stream.color
        self.align = rs.align(self.align_to)

    def grab(self):
        # Get frameset of color and depth
        frames = self.pipeline.wait_for_frames()

        # frames.get_depth_frame() is a 640x360 depth image
        # Align the depth frame to color frame
        aligned_frames = self.align.process(frames)

        # Get aligned frames
        aligned_depth_frame = aligned_frames.get_depth_frame()  # aligned_depth_frame is a 640x480 depth image
        color_frame = aligned_frames.get_color_frame()
        if not aligned_depth_frame or not color_frame:
            raise RuntimeError("Incomplete frameset: missing depth or color frame")

        # Create the array that will contain the frame.
        # Channels will be: BGRD
        image = np.ones((480, 640, 4))
        image[:, :, :3] = np.asanyarray(color_frame.get_data())  # Store the BGR image in the array
        image[:, :, 3] = np.asanyarray(
            aligned_depth_frame.get_data()) * self.depth_scale  # Store the depth (in meters) in the fourth channel

        return image

    def read(self):  # used for an easy conversion from the CameraThreading class to this one
        return self.grab()

    def readRGB(self, image):  # Isolate the BGR channels and convert to the format OpenCV expects
        return image[:, :, :3].astype(np.uint8)

    def showDepthGreen(self, image):  # Visualize depth in the green channel (1 m -> intensity 100)
        depth = np.ones((480, 640, 3))
        depth[:, :, 1] = image[:, :, 3] * 100

        return depth.astype(np.uint8)

    def stop(self):  # Close pipeline
        self.pipeline.stop()

# If the file is run directly, test frame grabbing and measure performance
if __name__ == "__main__":
    camera = Camera(0)

    time.sleep(1)

    while (cv.waitKey(1) & 0xFF) != ord('q'):  # Loop until q is pressed
        start = time.time()  # Start the clock
        image = camera.grab()  # Grab an image
        cv.imshow("frames", camera.readRGB(image))  # Show the RGB image
        cv.imshow("depth", camera.showDepthGreen(image))  # Show the depth image

        # Write performances:
        print("Took: " + str(time.time() - start) + " s")
        print('FPS: ' + str(1 / (time.time() - start)))

    # Once q is pressed, close the camera and destroy the windows:
    camera.stop()
    cv.destroyAllWindows()
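(Editorial aside: the per-frame `1 / (time.time() - start)` print above is very noisy from frame to frame. A sketch of a smoothed counter, averaging over the last few frames; `FpsCounter` is a made-up helper, and the optional `now` argument exists only to make it testable:)

```python
import time
from collections import deque

class FpsCounter:
    """Moving-average FPS over the last `window` frames."""

    def __init__(self, window=30):
        self.stamps = deque(maxlen=window)

    def tick(self, now=None):
        """Record a frame timestamp; return the smoothed FPS
        (0.0 until at least two frames have been seen)."""
        self.stamps.append(time.time() if now is None else now)
        if len(self.stamps) < 2:
            return 0.0
        elapsed = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / elapsed if elapsed > 0 else 0.0

# In the main loop above, one would call fps = counter.tick()
# once per grabbed frame and print fps instead of 1/dt.
```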

Thank you!!
Quentin

Hi,
We have a patch for the RealSense D435. Please apply it and try again.
https://elinux.org/L4T_Jetson/r32.2.1_patch

Hi!

Thank you for your response!

I was previously on JetPack 3.2; I updated to JetPack 4.2.2 and installed RealSense this way:

  1. sudo apt-key adv --keyserver keys.gnupg.net --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:8$

  2. sudo add-apt-repository "deb http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo bionic main" -u

  3. sudo apt-get install librealsense2-utils

  4. sudo apt-get install librealsense2-dev

  5. sudo apt-get install librealsense2-dbg

  6. git clone https://github.com/IntelRealSense/librealsense.git

  7. cd librealsense

  8. mkdir build

  9. cd build

  10. cmake ../ -DBUILD_PYTHON_BINDINGS:bool=true

  11. make -j4

  12. sudo make install

  13. cd /usr/local/lib

  14. cp librealsense2.so pyrealsense2.cpython-36m-aarch64-linux-gnu.so /home/nvidia/python_venv/venv1/lib/python3.6/site-packages/

It works now! I don't have the problem anymore! But performance is not as good as before: the program runs at 7-8 fps instead of 13-15 fps.

Do you know the cause? Is that what you meant when you told me to apply the patch?

Thank you!

Hi,
Please check
[Xavier]Patch for RealSense D435
https://devtalk.nvidia.com/default/topic/1048804/jetson-agx-xavier/realsense-camera-unstable/post/5364499/#5364499

Hi,

So I tried the patch, even though I run on a TX2, not a Xavier.

The differences I found are:
- there is no 3610000.xhci in the /sys/class/tegra-firmware directory, but there is a 3530000.xhci file
- the highest rate I can set is 204000000

I don't see any difference in my program's speed; it still runs at 7-8 fps…

Notes:
- I didn't change the Python environment, nor did I reinstall librealsense. Should I?
- I didn't use this patch: [url]https://devtalk.nvidia.com/default/topic/1048804/jetson-agx-xavier/realsense-camera-unstable/post/5327802/#5327802[/url] because I run r32.2.1

Thanks for your time!

I just tried this command: sudo nvpmodel -m 0

Performance is better; I can reach 11-12 fps, but still not 14-15.

Can you also please check that I installed librealsense and pyrealsense2 the right way? I'm not sure at all… (see my comment here: [url]https://devtalk.nvidia.com/default/topic/1065522/jetson-tx2/realsense-d435-with-jetson-tx2-depth-stop-working-freezes-after-time/post/5396531/#5396531[/url])

Thank you!

Hi,
Please confirm the falcon clock frequency is configured to 408 MHz, and also execute 'sudo jetson_clocks':

3. Increase falcon clock freq
    1. sudo su
    2. cd /sys/kernel/debug/bpmp/debug/clk/xusb_falcon
    3. echo 1 > state
    4. echo 408000000 > rate
    5. cat rate    ---------> To make sure the rate is 408000000

Hi,

Thank you for your message @DaneLLL.

The maximum value I can set the rate to is 204000000. Is that normal?

Otherwise, these steps don't improve performance, unfortunately…

Any clue?

Thank you!

Hi quentin.boulanger,
After confirming with the core teams: 408 MHz is for Xavier only. On TX2 there is a limitation, and 204 MHz is the stable rate. So maximum performance on TX2 is 'sudo nvpmodel -m 0' + 'sudo jetson_clocks'.

By reflashing JetPack 4.2.2, applying your patch, and then installing RealSense as in my comment above, I fixed the problem.

I now run my code at 40 fps.

Thank you @DaneLLL!