Welcome to the Jetson Projects forum!

Candidate cameras below. Thanks!

1st candidate: Lynred Atom 640, https://www.lynred-usa.com/media/products/atom-640/atom640-lynred-usa.pdf

2nd candidate: FLIR Tau 640, https://www.flir.com/products/tau-2/

3rd candidate: Baumer TXG04c, listed at Phase 1 Technology ("Baumer TXG04 Camera, Machine Vision")

Remember that I am not a “camera guy”, so I am hoping others will comment, but here are some random notes…

Lynred Atom 640:

  • Says there is a “graphical user interface”; not sure what that means, but there is no mention of drivers. Control is over USB serial or “Camera Link”, and information about the video output itself is missing.
  • They list a telephone number, so I would suggest calling them and asking about Linux drivers. The relevant details: a 5.x series Linux kernel (for Orin or Xavier) on Ubuntu 20.04. They also mention an SDK, which is nice to have, but if it is designed for Windows (and I think it is), it won’t help much on Linux (you could port it, but that might not be trivial). Any driver or binary (if closed source or not adapted to Linux) must run on 64-bit ARM (arm64/aarch64).
  • I don’t think this includes a Linux driver. You’d have to ask them.

Tau 2:

  • Does not mention drivers. Same thing: you will have to call and ask. Check whether there are open source video drivers for 5.x series Linux kernels; if not, they must provide binary drivers built for 64-bit ARM (arm64/aarch64).
  • It mentions analog or digital video; the digital output uses LVDS, which almost always requires custom hardware and a custom device tree. This might also apply to the previous camera. Note that control is separate from camera video data in pretty much every case, so using USB for control usually means simple serial data, not necessarily any sort of standard software. Control over serial data usually requires no driver for the USB or UART itself, but it does require software that knows the camera’s protocol. If you are a programmer, serial is the easiest thing to implement: you configure the port, then read and write the device special file like an ordinary file (a minimal sketch follows this list). This is also true for the previous camera. You will need to call them and ask about 64-bit ARM and Linux.
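For illustration, here is a minimal sketch of that serial pattern on Linux using the pyserial library. The device path, baud rate, and command bytes are placeholders I made up; the real values would come from the camera’s serial protocol documentation.

```python
# Minimal sketch of serial camera control on Linux with pyserial
# (pip install pyserial). The device path, baud rate, and command
# bytes below are illustrative placeholders, not a real protocol.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=57600, timeout=1.0) as port:
    port.write(b"\x6e\x00\x08\x00")  # hypothetical "get status" frame
    reply = port.read(32)            # read up to 32 reply bytes
    print(reply.hex())
```

The same approach works over a plain UART; only the device path changes (e.g., /dev/ttyTHS1 on a Jetson).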

Baumer TXG04c:

  • Uses gigabit Ethernet, so there will be no trouble with the physical connection (versus LVDS, which requires a driver and custom work). It still needs driver software, likely in user space. If the protocol is in some way “standard” (possibly GigE Vision), it might work directly with something like GStreamer. Gigabit can easily keep up with the small resolution at the advertised 56 frames per second: even at 8 bits per pixel, a VGA-class frame at 56 fps works out to roughly 140 Mbit/s, a small fraction of gigabit capacity.
  • Note that drivers are usually only for the raw data acquisition; at some point a user space application consumes that data. Ethernet, as a data pipe, runs out of the box on nearly every Linux, including Jetsons. If the data itself is in some way “standardized”, there are likely libraries and other software already on Linux that can handle it (see the sketch after this list). Otherwise, you’d need to develop custom video applications on your own, or at least an adapter to convert to some standard format.
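As an example of the “standardized” path: if this camera speaks GigE Vision (I have not verified that it does), the open source Aravis project provides a user space driver plus a GStreamer source element, so viewing the stream could be as simple as the sketch below. It assumes Aravis, its aravissrc plugin, and the GStreamer Python bindings are installed.

```python
# Sketch: display a GigE Vision stream via the Aravis project's
# aravissrc GStreamer element. Assumes Aravis and the GStreamer
# Python bindings are installed; not verified for the TXG04c.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# aravissrc picks the first GigE Vision camera found on the network.
pipeline = Gst.parse_launch("aravissrc ! videoconvert ! autovideosink")
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```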

Of all of these, I’m guessing the Baumer TXG04c is the best choice, but in reality I think all of them will require talking to the manufacturer/distributor about any issues with Linux on 64-bit ARM. The first two cameras need the manufacturer to answer this; the latter at least removes the need for a custom driver for the physical connection. Beyond that it is all a question of using the camera data and setting up camera control, so be sure to ask each vendor separately about (A) the video data, and (B) the control software on Linux.

Dear Nvidia Team,

I wanted to express my gratitude for your prompt response. The link you provided has been very helpful.

In addition to the projects you’ve shared, I would greatly appreciate references to projects specifically related to Deep Reinforcement Learning.

Thank you once again for your assistance.

Best regards, Samer I.

JetCar, the mini self-driving car project

The JetCar is a 3D-printed car built around the NVIDIA Jetson Nano development board, with minimal additional electronics for driving. Using only the camera stream as input, it can not only follow street markings but also automatically turn left or right at intersections where allowed. Through machine learning it recognizes direction arrows, stop text, and stop lines on the street. The model architecture is a U-Net; it produces class images that are processed by the firmware, which is written in Python and controlled through a Jupyter notebook. The user connects to the car from a host computer via WiFi, and the operator simply requests direction changes for the next intersection. The car only turns if the direction is not restricted by a direction arrow on the street.
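To make the “class image” idea concrete, here is a toy sketch of how per-pixel segmentation output might drive decisions. The class IDs, thresholds, and decision rule are invented for illustration; the actual logic is in the JetCar repository.

```python
# Toy illustration of the "class image -> driving decision" idea.
# Class IDs, thresholds, and rules are invented for this sketch;
# the real JetCar firmware is in the GitHub repository.
import numpy as np

LANE, STOP_LINE, NO_TURN_ARROW = 1, 2, 3   # hypothetical class IDs

def choose_action(class_image, requested_turn=""):
    """Pick an action from an H x W map of per-pixel class IDs."""
    near = class_image[class_image.shape[0] // 2:]  # bottom half = near road
    if (near == STOP_LINE).mean() > 0.05:           # stop line close ahead
        return "stop"
    if requested_turn and not (near == NO_TURN_ARROW).any():
        return "turn_" + requested_turn             # turn only if not restricted
    return "follow_lane"
```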

The project includes the mechanical design, electronics design, firmware, and tools for data preparation, model training, and street map generation. The documentation describes all parts in detail. All source code and binaries are available on GitHub: https://github.com/StefansAI/JetCar

The documentation is meant to help anyone build this car at home, try it out, and tinker with it.

Videos:
JetCar Part 1 - Introduction: https://youtu.be/Dyagu1U4WaQ
JetCar Part 2 - Assembly: https://youtu.be/SRJtpAVrnXM
JetCar Part 3 - Firmware Setup: https://youtu.be/EU15R7dB3os
JetCar Part 4 - Data Preparation: https://youtu.be/G4PKwO-Vvck
JetCar Part 5 - Model Training: https://youtu.be/1_ItpaHLQUw
JetCar Part 6 - StreetMaker: https://youtu.be/k-taHFnwKoY
JetCar Part 7 - Firmware: https://youtu.be/uBuOR0Mm2eY
JetCar Part 8 - Demonstration: https://youtu.be/s89fhRwG_2A

Greetings, Jetson Community,

We are senior students at the University of California, Davis, and we would like to share our machine learning senior design project, NavNScan, with the Jetson community. As the name suggests, the project is about an RC car navigating and interacting with its surroundings. To be more specific, the RC car is a Traxxas chassis paired with a Jetson TX2, an Intel RealSense D435i depth camera, and a generic GPS receiver. On the software side, we are running YOLOv8 for object detection, the Google Maps API for navigation, and the Adafruit ServoKit library for RC controls (a rough sketch of how these pieces can fit together follows below). The project aims to enable the RC car to navigate to given GPS coordinates, avoid obstacles, and count objects along the way. For more information, please check out our GitHub. We would also like to thank our professor Chuah and TA Kartik for helping make this project successful. Thank you for your time.
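As an illustration of how those pieces might connect (this is not the team’s actual code; see their GitHub for the real implementation), a minimal detection-plus-steering loop using the ultralytics and adafruit-servokit libraries could look like this. The camera index, servo channel, and steering angles are placeholders.

```python
# Minimal sketch of a YOLOv8 detection + servo steering loop.
# Not NavNScan's actual code; camera index, servo channel, and
# steering angles are illustrative placeholders.
import cv2
from ultralytics import YOLO               # pip install ultralytics
from adafruit_servokit import ServoKit     # pip install adafruit-circuitpython-servokit

model = YOLO("yolov8n.pt")                 # small pretrained YOLOv8 model
kit = ServoKit(channels=16)                # PCA9685-style servo board
STEERING_CH = 0                            # hypothetical steering channel

cap = cv2.VideoCapture(0)                  # RGB stream (device index assumed)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    angle = 90                             # neutral steering
    for box in result.boxes.xyxy:          # each box is (x1, y1, x2, y2)
        cx = float(box[0] + box[2]) / 2.0
        if frame.shape[1] * 0.4 < cx < frame.shape[1] * 0.6:
            angle = 60                     # obstacle ahead: simple evasive turn
    kit.servo[STEERING_CH].angle = angle
cap.release()
```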

Sincerely, Team NavNScan