Links to Jetson Nano Resources & Wiki

I have been using a USB camera with my Nano for the past 6 months. Recently I tried an e-CAM30A_CUMI0330 camera with more pixels, hoping to improve my recognition work. However, they sent a microSD card to use with the camera, but I want to keep my work and the card I’m already using. Does NVIDIA have a link to camera recommendations that includes requirements/setup? I can’t get any info from the vendor on how to run the camera without using the kernel provided on their card.

Hi pellico, the IMX219 MIPI CSI sensor has the driver already built into JetPack-L4T, so it works out of the box with the Raspberry Pi Camera Module v2. IMX219-based cameras support video resolutions up to 1920x1080 @ 30FPS. Is that sufficient for your needs?

Otherwise, you might want to see https://elinux.org/Jetson_Nano#Cameras and check with the camera vendors about modular driver support.
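If it helps to see what "works out of the box" looks like in code, CSI cameras on the Nano are usually opened through a GStreamer pipeline built around NVIDIA's nvarguscamerasrc element. Below is a minimal sketch of a helper that builds such a pipeline string; the default resolution, framerate, and flip values are illustrative and should be matched to the modes your sensor actually supports.

```python
def csi_pipeline(width=1280, height=720, fps=30, flip=0):
    """Build a GStreamer pipeline string for a CSI camera on Jetson.

    Uses NVIDIA's nvarguscamerasrc element; the caps below follow the
    pattern common in Jetson examples and may need adjusting for your
    sensor's supported modes.
    """
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink"
    )

# On the Nano, you would open it with an OpenCV build that has
# GStreamer support enabled:
# import cv2
# cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
```

This only constructs the pipeline string; actually streaming still requires the sensor driver to be present (which is why /dev/video0 and nvgstcapture are the first things to check).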

Hello,
I’m looking for documentation, guides, and examples (preferably Python) to understand how to create a UFF file from TensorFlow networks. I have already read the Sample Support Guide in the NVIDIA Deep Learning TensorRT documentation.
But I’d like to know how to correctly write the config.py file for object detection and classification networks. Are there rules to follow, or guides and tutorials that explain how to do it?

Thanks in advance,
Chiara

Hi Chiara, check out these GitHub repos, which show converting a TensorFlow model to UFF and loading it with TensorRT:
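For orientation, here is a minimal sketch of what a config.py for the UFF converter typically contains, loosely following the pattern from NVIDIA's SSD sample. The namespace name, plugin op, and parameters below are placeholders that depend entirely on your network, so treat this as an illustration rather than a working config:

```python
# config.py -- preprocessing hooks read by the convert-to-uff tool.
# NOTE: the namespace and plugin names below are illustrative placeholders.
import graphsurgeon as gs

# Map an unsupported TensorFlow subgraph to a TensorRT plugin node.
# "GridAnchor_TRT" is the plugin used in NVIDIA's SSD sample; your
# network may need a different (possibly custom) plugin.
PriorBox = gs.create_plugin_node(
    name="GridAnchor",
    op="GridAnchor_TRT",
    numLayers=6,                      # example plugin parameter
)

# Collapse the whole TF namespace into the single plugin node above.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
}

def preprocess(dynamic_graph):
    # Called by convert-to-uff before the UFF graph is written out.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
```

You would then pass it to the converter with something like `convert-to-uff frozen_inference_graph.pb -p config.py -o model.uff` (exact flags depend on your TensorRT version's uff package).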

I thought that was the case, that the “IMX219 MIPI CSI sensor has the driver already built into JetPack-L4T”, but I’m having issues. When I run detectnet-camera, it successfully initializes the video device (width 1280, height 720, depth 24 (bpp)). However, at the end of the info stream it says: gstreamer transitioning pipeline to GST_STATE_PLAYING, gstreamer failed to set pipeline state to PLAYING (error 0), failed to open camera for streaming.
When I run ls /dev/video0, I get “cannot access ‘/dev/video0’”.
Doesn’t this indicate that I don’t have the drivers installed?

Are you able to run nvgstcapture-1.0? Can you post the output of this v4l2-ctl command below?

$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext

Also, which IMX219 camera are you using? The version from Leopard Imaging might need their driver. If you use the Raspberry Pi Camera Module v2, that should work out of the box. If you can’t connect to it, the ribbon cable may be reversed in the camera connector on the devkit, so try flipping it around or making sure it is seated the whole way. You can also check kernel dmesg output to look for errors.

So when I run nvgstcapture - I get the following message block at the end:
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 100, Level = 40
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:521 No cameras available

I have installed v4l-utils successfully - no updates needed.
When I run v4l2-ctl - I get:
Failed to open /dev/video0: No such file or directory
The camera I’m trying to use is the e-CAM30_CUNANO 3.4MP. This camera is supposed to be for use with the Nano.
I probably should have bought the Pi camera!

Ah, ok - you would probably need to check with e-con about what driver is needed for that camera.

Hi,

It’s a pretty foolish question, but I have to ask it.

When the Jetson Nano was launched, I had seen, most probably in a webinar, a nice introductory video. The video went from unpacking the box to programming the Nano for a collision-avoidance system. I had saved the link, but now the link shows that the webinar is no longer available.

Can somebody help me find the video? I am buying a Jetson Nano, and that seemed at the time the best video to start with.

Hi bumeshrai, you can view the recordings of the Hello AI World and JetBot webinars through the links found here:

https://devtalk.nvidia.com/default/topic/1049835/jetson-nano/nvidia-webinars-mdash-hello-ai-world-and-learn-with-jetbot/

Hi Pellico,

Yes, it is possible to retain the work you have on your microSD card and upgrade the e-con binaries alone to get the e-CAM30_CUNANO 3.4MP camera working. The detailed steps to achieve this are provided in the e-con documentation.

The microSD card provided by e-con contains both the Release package and the documents. Plug the microSD card into a Linux PC and navigate to the "home/nvidia/Release" folder on the mounted SD card. Then extract Documents_R01.zip and follow the instructions in the "Upgrade Procedure" section of the e-CAM30_CUNANO_Developer_Guide_<REV>.pdf document.

[quote=“dusty_nv”]

Hi bumeshrai, you can view the recordings of the Hello AI World and JetBot webinars through the links found here:

[/quote]

I had tried that link, but I couldn’t get the particular webinar.

Hi bumeshrai, I am able to resolve both the links in that post - do you mean that once you get to the webinar page and register to view it on-demand, you aren’t able to view it? Which webinar are you trying to view?

Hi dusty, thanks for all the help. I am able to register to the webinar, unfortunately I am not able to locate the video there!!

Hi bumeshrai, which of the webinars are you having trouble viewing? The Hello AI World webinar, or the JetBot webinar? Thanks.

I am able to view all webinars. No problem in that! Problem is in locating/finding the webinar which I remember seeing last year.

Aha I see, gotcha! If this was last year, it was one of our previous webinars - they are all archived here:

https://developer.nvidia.com/embedded/learn/tutorials#webinars

Thanks got it!!!

It was “AI for Makers — Learn with JetBot”

Hi,

Is there any tutorial out there for flashing and then deploying the Jetson Nano module on its own?

Is there a recommendation for a good embedded systems book that pairs well with developing on the Jetson Nano? Maybe a recommended embedded systems book for ARM?