Training a Custom Model on an Ubuntu Desktop and Later Transferring the Model to an NVIDIA Jetson Nano/NX for Inference

OVERVIEW: Hello. To start, I have successfully performed object detection on the NVIDIA Jetson Nano and NX. Both training and model deployment were done on the Jetson device, by following the tutorials of dusty-nv from GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson. That said, training takes a long time, so I want to first train a model on a desktop (Ubuntu 22.04.1) and then, once training is done on the Ubuntu desktop, copy the resulting model onto an NVIDIA Jetson Nano/NX device for inference.

ISSUE: My issue has to do with accessing my webcam, specifically when using video-viewer /dev/video0 from within the Docker container. I am using the HD Pro Webcam C920. A similar issue was reported here: detectnet works in headless only · Issue #1489 · dusty-nv/jetson-inference · GitHub. Thus far (in broad strokes), I have done the following:

  1. Installed the necessary NVIDIA Drivers (GeForce RTX 3060)
  2. Installed PyTorch via the website (Start Locally | PyTorch) and verified that it works.
  3. Installed Docker and executed the sudo docker/run.sh script, which appears to run without issue (I have done this on the NVIDIA Jetson Nano and NX before, so I had an idea of what to expect).
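
For anyone replicating this, each of those steps can be quickly sanity-checked from a terminal; a minimal set of checks (assuming the jetson-inference repo is cloned under the home directory) would be:

    # confirm the NVIDIA driver sees the GPU (should list the RTX 3060)
    nvidia-smi

    # confirm PyTorch can use the GPU (prints True if CUDA is available)
    python3 -c "import torch; print(torch.cuda.is_available())"

    # launch the jetson-inference container from the repo root
    cd ~/jetson-inference
    sudo docker/run.sh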

The next step is to test that the webcam is working. This is what I have tried:

  1. When not in the Docker container, running 'cheese' to verify my camera works, the video feed from my HD Pro Webcam C920 opens without issue.
  2. However, when I run sudo docker/run.sh and then run video-viewer /dev/video0, I can see my v4l2 device (HD Pro Webcam C920) is found, and the webcam glows blue, signifying it is being activated. But within a few seconds, I get a 'Segmentation fault (core dumped)' and the webcam turns off.
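
In case it helps with debugging, the formats and resolutions the camera exposes over V4L2 can be checked from inside the container (assuming v4l-utils is available there) with:

    # list the pixel formats, resolutions, and frame rates the C920 advertises
    v4l2-ctl --device=/dev/video0 --list-formats-ext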

Still within the Docker container (after sudo docker/run.sh), I tried a few things:

  1. When I run the below command, the camera feed pops up without hesitation (in Docker): gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=640, height=480, framerate=30/1' ! autovideosink

  2. When I run the below command (just autovideosink added on to the original command), I see from the terminal output that frames are being generated (in docker): video-viewer /dev/video0 autovideosink

Is there some step I am missing to get video-viewer /dev/video0 to work within the Docker container when running on an Ubuntu 22.04.1 desktop?

Hi,

Just want to confirm first.
Do you need the camera when training in a dGPU environment?
In general, the training data is saved as images rather than taken from runtime camera input.

Thanks.

Hi,

Also, could you please test whether the command works without using the container?

Thanks.

Hi @Alma11, running the jetson-inference container on x86 is a beta feature right now, and as you have seen from that GitHub issue there appear to be some kinks still to iron out on the desktop OpenGL side of things for x86 (although, strangely, it displays fine here with a C920 webcam on my laptop using that container). My code uses CUDA<->OpenGL interoperability, which cheese does not, and that is probably why cheese still works okay for you while video-viewer crashes.

Regardless, you should still be able to run train_ssd.py on your x86 system from within the container using your dataset (that also works fine here). Then you can export it to ONNX and copy the ONNX over to your Jetson. If you are still annotating data, you might want to use a tool like CVAT to do that instead.
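
As a rough sketch (the dataset path, model name, and Jetson user/hostname below are placeholders, and the flags follow the Hello AI World tutorial), the flow inside the x86 container is along these lines:

    # train an SSD-Mobilenet model on a Pascal VOC-style dataset
    cd jetson-inference/python/training/detection/ssd
    python3 train_ssd.py --dataset-type=voc --data=data/my-dataset --model-dir=models/my-model --batch-size=4 --epochs=30

    # export the trained checkpoint to ONNX
    python3 onnx_export.py --model-dir=models/my-model

    # copy the ONNX model and labels over to the Jetson (hypothetical user/host)
    scp models/my-model/ssd-mobilenet.onnx models/my-model/labels.txt nano@jetson:~/jetson-inference/python/training/detection/ssd/models/my-model/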

You're right. Technically, no, I do not need the camera for training, as you say. There are just a lot of steps to get right, and I wanted to make sure I completed each one before advancing, to narrow down any issues that came up. Further, if the day came that I did need a webcam for running real-time detection/inference from within Docker on Ubuntu 22.04.1 (instead of on the Jetson device), I would need the camera to collect live frames to run against the trained model. Thanks for the suggestion though.

I went ahead and tried the video-viewer /dev/video0 command outside the Docker container, but it was not found (I executed the command from within ~/jetson-inference/docker). I was thinking this would be the case, because I thought video-viewer was only available from within the container (meaning after the docker/run.sh script had been executed).

Dusty, thank you for the write-up (and also for all the incredible help through your tutorials and the online posts you have answered, including mine once or twice).

I can understand that. Thanks for being explicit on the issue so I can 'move forward' with increased confidence (by the way, I tried both Ubuntu 20.04.5 LTS and 22.04.1 LTS, multiple times, and both produce the same error).

That is good to hear it works on your laptop. I wonder if it is a component in my desktop (it's a custom-built PC, so perhaps a compatibility issue with the motherboard USB port the camera is plugged into... but then why would it work with cheese?). I don't have a second camera at the moment, but when I do I will look into trying it (different brands, etc.).

Knowing that the ‘camera failure’ most likely won’t affect training (train_ssd.py) is great (and in the end, would be a complete win in my book).

As for annotation data, thanks for the suggestion. My thought was to take new images with a phone, Raspberry Pi camera, etc., and create new annotation data for training on Ubuntu. In the past I have tried LabelImg, but it sounds like you suggest CVAT; I will look into that. Would both work? I assume that once the images and associated annotation data are generated, they need to be in the same folder structure as on the Jetson Nano/NX (including, I believe, a .txt file that lists all the image filenames and their associated annotation files).

I will look into all of this and reply back in time, in the event it helps someone else down the road. Thank you again for the direction.

Hi @Alma11, you are correct - on x86, jetson-inference only supports the Docker container, so you aren’t going to find/build the binaries outside of the container. The reasoning for this is that on x86, there can be many different host OS’s with different package managers and so supporting all those different system configurations can be problematic (whereas within Docker it’s a controlled environment). The x86 container also works on Windows under WSL2.

One thing I forgot to mention is to use the dev branch of jetson-inference if you want to try GUI/graphics on x86. There are some fixes in there that have not yet been merged into master. Also, you can try running video-viewer in headless mode (--headless) and saving it to a video file just to confirm that you’re able to capture frames from your PC.
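
For example, a minimal way to try both suggestions (assuming the repo is cloned under the home directory; the output filename is arbitrary):

    # switch the repo to the dev branch before starting the container
    cd ~/jetson-inference
    git checkout dev
    sudo docker/run.sh

    # inside the container: capture from the webcam without the OpenGL display
    video-viewer /dev/video0 my_video.mp4 --headless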

Yes, LabelImg will work too (it was the offline annotation tool that I used before I discovered CVAT). The only thing is that LabelImg doesn't organize the files for you into the Pascal VOC folder structure - it just gives you a bunch of XML files. You will also need to make the ImageSets files and labels.txt yourself when using LabelImg. But other than that, it works too.
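
For reference, the Pascal VOC layout that train_ssd.py expects looks roughly like this (my-dataset is a placeholder name; the subdirectory names are the standard VOC ones):

    data/my-dataset/
        Annotations/        # one VOC-format .xml file per image (what LabelImg produces)
        ImageSets/Main/     # train.txt, val.txt, trainval.txt, test.txt listing image IDs, one per line
        JPEGImages/         # the image files themselves
        labels.txt          # one class name per line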

Also, if you happen to have a laptop with Optimus (integrated GPU + NVIDIA discrete GPU), see this recent post about making sure that your NVIDIA adapter is selected first: https://github.com/dusty-nv/jetson-inference/issues/1218#issuecomment-1299210029
