Jetson Orin Nano: issue when creating the camera object

I am able to SSH into jetson nano orin board. Also completed the Headless setup and was able to log into the JupyterLab server.

Running into issue when trying to execute the following block of code in “csi_camera.ipynb”:

from jetcam.csi_camera import CSICamera
camera = CSICamera(width=224, height=224, capture_device=0) # confirm the capture_device number


RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
     23             if not re:
---> 24                 raise RuntimeError('Could not read image from camera.')
     25         except:

RuntimeError: Could not read image from camera.

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
in <module>
      2
      3 camera = CSICamera(width=224, height=224, capture_device=0) # confirm the capture_device number
----> 4 camera = CSICamera(width=3200, height=2464, capture_device=0) # confirm the capture_device number

/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
     25         except:
     26             raise RuntimeError(
---> 27                 'Could not initialize camera. Please see error trace.')
     28
     29         atexit.register(self.cap.release)

RuntimeError: Could not initialize camera. Please see error trace.


I tried this tutorial https://www.youtube.com/watch?v=EuRXAUU61yM and was able to get the camera working using the JetsonHacks code from their repository.

I have a CSI Raspberry Pi v2 camera.

Appreciate any help you can give me with this. Thanks

And this is what I used to launch the Docker container:

# create a reusable script
echo "sudo docker run --runtime nvidia -it --rm --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    --volume /tmp/argus_socket:/tmp/argus_socket \
    --device /dev/video0 \
    nvcr.io/nvidia/dli/dli-nano-ai:v2.0.2-r32.7.1" > docker_dli_run.sh

# make the script executable
chmod +x docker_dli_run.sh

# run the script
./docker_dli_run.sh

Did you follow the link below?

No, I didn’t. I can give it a try tomorrow morning at work. Does it mean the jetcam package had to be installed and I missed it somehow? Thanks

I went through the instructions and was able to install jetcam via its setup.py,

but I am still getting the same error when running the command.

This line importing the library works:

from jetcam.csi_camera import CSICamera

The second line gives me the same error I copied at the beginning of the thread:

camera = CSICamera(width=224, height=224, capture_device=0) # confirm the capture_device number
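One way to narrow down whether the failure is in jetcam itself or in the underlying camera pipeline is to hand a GStreamer pipeline of the same kind jetcam builds straight to OpenCV. A sketch (the pipeline layout and default values below are my approximation of what jetcam does, not copied from its source):

```python
def csi_pipeline(sensor_id=0, capture_width=3280, capture_height=2464,
                 width=224, height=224, fps=21):
    """Build a GStreamer pipeline string for a CSI sensor such as the
    Raspberry Pi v2 (imx219). Mirrors the kind of pipeline jetcam
    constructs internally; the defaults here are illustrative."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={capture_width}, height={capture_height}, "
        f"format=(string)NV12, framerate=(fraction){fps}/1 ! "
        f"nvvidconv ! video/x-raw, width={width}, height={height}, "
        f"format=(string)BGRx ! videoconvert ! appsink"
    )

# To test outside jetcam (requires OpenCV built with GStreamer support):
#   import cv2
#   cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()   # ok is False if the pipeline failed to open
```

If this direct capture also fails inside the container, the problem is below jetcam (GStreamer/Argus access from Docker) rather than in the Python package.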

Did you try the csi_camera.ipynb?

Yes, getting the same failure. I am pretty sure nothing hardware-wise is broken because, as I mentioned in my first post, the camera works when following the YouTube tutorial. So this has to be something to do with the right package being installed, the way I am calling the function, or …

I am sorry, I made a mistake. I was able to run this Jupyter notebook directly in the Jetson environment (with a monitor, mouse, and keyboard attached to it) and it works.

The issue is that I get the runtime errors when running “hello_camera/csi_camera.ipynb” from the Jupyter notebook with Docker running.

Could you check whether the camera is able to run in Docker?

Unfortunately it didn’t work

!ls -ltrh /dev/video*

gives me:

crw-rw---- 1 root video 81, 0 Aug 23 15:22 /dev/video0

and I still get the runtime error I posted earlier.

Also, I connected a USB camera and am getting the same error there. So it doesn’t seem to be just a CSI camera issue.

Also, here is the output of “v4l2-ctl --list-devices”:

NVIDIA Tegra Video Input Device (platform:tegra-camrtc-ca):
/dev/media0

vi-output, imx219 9-0010 (platform:tegra-capture-vi:1):
/dev/video0

A4tech FHD 1080P PC Camera: A4t (usb-3610000.xhci-2.4):
/dev/video1
/dev/video2
/dev/media1
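For what it’s worth, the capture_device index passed to CSICamera corresponds to these /dev/video* nodes, and one physical camera can expose several of them (the A4tech USB camera above appears as both /dev/video1 and /dev/video2). A quick way to enumerate the nodes from Python inside the container (a helper of my own, not part of jetcam):

```python
import glob

def video_devices():
    """List V4L2 device nodes, sorted by index (e.g. /dev/video0, /dev/video1).

    The lowest-numbered node per physical device is usually the capture
    node to try first; the others may be metadata-only.
    """
    return sorted(glob.glob("/dev/video*"))

print(video_devices())
```

On the setup above this would print ['/dev/video0', '/dev/video1', '/dev/video2']; an empty list inside the container means the --device flags in the docker run command didn’t expose the nodes at all.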

@navidkhajouei sorry, I did not realize this earlier since this topic was in the Jetson Nano forum, but your title indicates you are using a Jetson Orin Nano (I moved this topic to the Orin Nano forum). That DLI course and its container are built for the original Jetson Nano (and JetPack 4), not the Orin Nano and JetPack 5. Containers built for JetPack 4 are not compatible with JetPack 5.

It sounds like you already have those DLI notebooks running OK outside of the container, and that explains why. If you really want them in a container, you can use nvcr.io/nvidia/l4t-ml:r35.2.1-py3 and install jetcam in it like here:

Alternatively, the Hello AI World tutorial has been updated for Orin and JetPack 5:

Seems like I still have no luck with this setup.

I got on the path of “Running the Docker Container” from https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md

I SSH’d into the Orin Nano and, when I launched the container using the commands below, it took a while to install/load a bunch of things, but the model-download dialog box didn’t show up for me (as shown in https://www.youtube.com/watch?v=QXIwdsyK7Rw&list=PL5B692fm6--uQRRDTPsJDp4o0xbzkoyf8&index=10)
$ git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ docker/run.sh

Everything seemed to be normal, and then I tried running one of the examples:
$ cd build/aarch64/bin
$ ./video-viewer /dev/video0

And this is the error I am getting:

./video-viewer: error while loading shared libraries: /usr/lib/aarch64-linux-gnu/tegra/libgstreamer-1.0.so.0: file too short

Any suggestion?

Hi @navidkhajouei, I updated jetson-inference earlier this year (after that video was made) to automatically perform on-demand model downloading, so that downloader tool isn’t needed anymore and it’s normal for it not to show up now.

Thanks for letting me know about this - are you on JetPack 5.1.2 / L4T R35.4.1 by chance? (you can check this with cat /etc/nv_tegra_release)

When reproducing your issue, I found that the binaries installed under /usr/local/bin in the container load without issue (e.g. running video-viewer from any directory in the container, as opposed to ./video-viewer from /jetson-inference/build/aarch64/bin)

I tracked it down to a GStreamer incompatibility that was fixed by rebuilding the container specifically for L4T R35.4.1 - sorry about that. I’ve updated jetson-inference with commit 44c7661 - can you try doing a git pull and running docker/run.sh again? It should then pull/run the updated dustynv/jetson-inference:r35.4.1 container image instead (and you shouldn’t get the error anymore)

Thanks.

I ran cat /etc/nv_tegra_release and below is what I got:

# R35 (release), REVISION: 4.1, GCID: 33958178, BOARD: t186ref, EABI: aarch64, DATE: Tue Aug 1 19:57:35 UTC 2023

Then I did a “git pull” and ran the Docker container. Below you can see a screenshot of what I got.

So unfortunately, no luck yet!
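As an aside, the container tag mentioned above (dustynv/jetson-inference:r35.4.1) can be derived from that /etc/nv_tegra_release line programmatically. A sketch (the regex is my own assumption about the line format shown above):

```python
import re

def l4t_tag(release_line):
    """Derive an L4T tag like 'r35.4.1' from the first line of
    /etc/nv_tegra_release, e.g. '# R35 (release), REVISION: 4.1, ...'."""
    m = re.search(r"R(\d+)\s*\(release\),\s*REVISION:\s*([\d.]+)", release_line)
    if not m:
        raise ValueError("unrecognized /etc/nv_tegra_release format")
    return f"r{m.group(1)}.{m.group(2)}"

line = "# R35 (release), REVISION: 4.1, GCID: 33958178, BOARD: t186ref, EABI: aarch64, DATE: Tue Aug 1 19:57:35 UTC 2023"
print(l4t_tag(line))   # → r35.4.1
```

Matching the container tag to this L4T version is exactly what docker/run.sh is doing for you, which is why the version mismatch caused the shared-library error earlier.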

OK, progress! 👍 From your screenshot, it looks like your camera is actually at /dev/video1 - can you try running video-viewer with that instead?

Thanks so much. I think it’s working now, on both the MIPI and USB cameras.

Two questions:

  • How do I stream the video on the Jetson board’s display?

  • If I want to stream the video to the host with the headless setup, do I have any option other than GStreamer? I tried, but I am not allowed to install GStreamer on my corporate laptop. And VLC somehow didn’t work for me either.

Thanks!

Try setting up an RTSP service on the Jetson and getting the preview with VLC on the host.
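If jetson-inference’s streaming docs apply to your build, video-viewer itself can act as the RTSP server when given an output URI of the form rtsp://@:&lt;port&gt;/&lt;path&gt;, so no separate GStreamer setup is needed on either side. A sketch composing such an invocation (the device, port, and path values are examples, not taken from this thread):

```python
def rtsp_viewer_cmd(device="/dev/video1", port=8554, path="my_output"):
    """Compose a video-viewer invocation that serves the camera as an
    RTSP stream (output-URI form per jetson-inference's streaming docs;
    device/port/path here are example values)."""
    return ["video-viewer", device, f"rtsp://@:{port}/{path}"]

cmd = rtsp_viewer_cmd()
print(" ".join(cmd))   # → video-viewer /dev/video1 rtsp://@:8554/my_output
# On the host, open rtsp://<jetson-ip>:8554/my_output in VLC.
```

Running the composed command inside the jetson-inference container (e.g. via subprocess.run(cmd)) should then let VLC on the host connect without installing anything extra there.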
