Seeking Advice: Best Approach for Indoor Navigation with Jetson TK1 and ZED Camera

Hello,

I have a Jetson TK1, a ZED 1 stereo camera, and a 1/10 scale RC crawler.

I’ve installed and tested a Pixhawk on the RC crawler and plan to mount the Jetson TK1 and ZED camera onto it.

My goal is to have the crawler navigate autonomously indoors within a known environment. It will only operate in familiar areas, will not explore new locations, and I will not use QR codes.

It will navigate around obstacles to a yellow door. If the door is open, it will pass through, continue straight down the aisle, and then turn left; the elevator will be on the left side.
Driving from the starting point to the elevator is the goal of my toy project.
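
To make the plan concrete, here is a rough sketch of the behavior sequence I have in mind, written as a simple Python script. Every helper in it is a placeholder I would still have to implement (vision for the door, MAVLink commands for motion), so it only shows the intended order of steps.

```python
# Rough outline of the mission. Every helper below is a stub/placeholder;
# the real versions would use the camera (door detection) and MAVLink (motion).

def detect_yellow_door():    return True   # stub: color/shape detection on camera frames
def avoid_obstacles_step():  pass          # stub: one step of obstacle avoidance
def door_is_open():          return True   # stub: free-space check in the doorway
def drive_forward(meters):   pass          # stub: MAVLink throttle/steering commands
def turn_left(degrees):      pass          # stub: steering command
def stop():                  pass          # stub: neutral throttle

def run_mission():
    # 1. Navigate around obstacles until the yellow door is found.
    while not detect_yellow_door():
        avoid_obstacles_step()
    # 2. Only continue if the door is open.
    if door_is_open():
        drive_forward(2.0)     # pass through the doorway
        drive_forward(10.0)    # continue straight down the aisle
        turn_left(90)          # the elevator is on the left side
    stop()

if __name__ == "__main__":
    run_mission()
```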

As a beginner, what is the quickest and easiest way to achieve this? I’m familiar with programmatically controlling the RC crawler using MAVLink, but I’m unsure about indoor navigation.
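
For reference, this is roughly how I command the crawler today. It is a minimal sketch assuming pymavlink over a serial link to the Pixhawk; the port, baud rate, and channel mapping (ch1 = steering, ch3 = throttle) are specific to my setup and may differ on other vehicles.

```python
# Minimal sketch: driving the crawler via MAVLink RC overrides with pymavlink.
# Serial port, baud rate, and channel mapping are assumptions for my setup.
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyTHS1', baud=57600)  # Jetson serial port to Pixhawk
master.wait_heartbeat()
print("Heartbeat from system %u component %u" % (master.target_system, master.target_component))

def drive(steering_pwm, throttle_pwm, duration_s):
    """Send RC overrides for a fixed duration (ch1 = steering, ch3 = throttle)."""
    end = time.time() + duration_s
    while time.time() < end:
        master.mav.rc_channels_override_send(
            master.target_system, master.target_component,
            steering_pwm, 0, throttle_pwm, 0, 0, 0, 0, 0)  # 0 releases that channel to the RC radio
        time.sleep(0.1)

drive(1500, 1600, 2.0)   # gentle forward for 2 seconds
drive(1500, 1500, 0.5)   # back to neutral
```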

Should I use deep learning with CUDA and a CNN, or should I use ROS with SLAM?

I tried using a simple tracking example with the ZED camera, but it was inaccurate due to my old-generation hardware. Therefore, I will not pursue that approach.
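
For context, the test looked roughly like the snippet below: the ZED positional tracking API polled in a loop. I have written it against the current ZED SDK Python API (pyzed); the older SDK releases that still support the TK1 use different names, so treat it as a sketch rather than something that runs as-is on my board.

```python
# Sketch of the simple positional-tracking test, using the current ZED SDK Python API.
# The older SDK that still supports the TK1 names these calls differently.
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.VGA   # keep the load low on weak hardware
init_params.coordinate_units = sl.UNIT.METER

if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the ZED camera")

zed.enable_positional_tracking(sl.PositionalTrackingParameters())

runtime = sl.RuntimeParameters()
pose = sl.Pose()
for _ in range(100):
    if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
        zed.get_position(pose, sl.REFERENCE_FRAME.WORLD)
        t = pose.get_translation(sl.Translation()).get()
        print("x=%.2f  y=%.2f  z=%.2f" % (t[0], t[1], t[2]))

zed.disable_positional_tracking()
zed.close()
```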

I am considering starting with deep learning because it seems that ROS might present compatibility issues and could offer limited learning opportunities (mainly connecting libraries rather than in-depth learning, and it would take a very long time). If this is incorrect, please let me know. Does anyone have any recommendations?

You can check GitHub - dusty-nv/jetson-inference (Hello AI World: a guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson) to see if you can find some reference, but the TK1 is a bit too old and many of the samples won't work on it.
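
For reference, the detection samples in that repo look roughly like this in Python (newer jetson-inference releases expose the `jetson_inference` / `jetson_utils` modules). They depend on TensorRT, so as noted they will most likely not run on a TK1; this is only to show what the API provides.

```python
# Rough example of a jetson-inference detection loop (Python API).
# Requires TensorRT, so it will most likely not run on a TK1.
import jetson_inference
import jetson_utils

net = jetson_inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson_utils.videoSource("/dev/video0")     # V4L2 camera; adjust for your source
display = jetson_utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    if img is None:
        continue
    detections = net.Detect(img)
    for det in detections:
        print(net.GetClassDesc(det.ClassID), det.Confidence)
    display.Render(img)
    display.SetStatus("Detection | {:.0f} FPS".format(net.GetNetworkFPS()))
```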

Oh, thank you.

I am looking for advice for this old platform…
