RPi v2 CSI camera freezes with Jetson Nano 2GB

Hi there,
I have a freshly flashed Nano 2GB with a genuine RPi Camera V2. Swap is looking good (5085 MB). I'm going through the training notebooks and I've noticed issues with the camera whenever I train a model. The hello camera notebook works fine for me, nothing to say there. Then I work through most of the classification notebook, and when training is done, the camera freezes, and no amount of kernel killing and restarting will help. Sometimes exiting the container and running it again solves the issue, but most often it doesn't. If I don't reboot, every single time I try to start the camera stream again I get:
RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
     23             if not re:
---> 24                 raise RuntimeError('Could not read image from camera.')
     25         except:

RuntimeError: Could not read image from camera.

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
<ipython-input-...> in <module>
      6
      7 # for CSI Camera (Raspberry Pi Camera Module V2), uncomment the following line
----> 8 camera = CSICamera(width=224, height=224, capture_device=0) # confirm the capture_device number
      9
     10 camera.running = True

/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
     25         except:
     26             raise RuntimeError(
---> 27                 'Could not initialize camera. Please see error trace.')
     28
     29         atexit.register(self.cap.release)

RuntimeError: Could not initialize camera. Please see error trace.

Saving the model doesn't seem to work either, and I can only ever get as far as the training step. Same for regression and other classification models. In fact, with the regression model training, the image freezes right at the beginning, though I'm still able to click on the image to add body-part locations to the model.

I have a webcam, but it's on the older side, and plugging it in kills the (Ethernet) internet connection, so it's not an option right now. I've re-flashed the SD card quite a few times now and it hasn't solved the issue.

Thank you

Julie


hello juliefaurelacroix,

could you please refer to the developer guide:
please check Approaches for Validating and Testing the V4L2 Driver to rule out other processes and verify the camera's basic functionality.
thanks

Right, so I've seen this generic answer on other posts, and the first section mentions a link that points to an unresponsive page: on Android it crashes Chrome, and on PC it just loads forever. Within what I can see of that page, I'm not finding any "Multimedia User Guide" as suggested.

Then there’s the API section, which also contains links to pages that are either unresponsive or simply don’t even try to load.

Not sure what more I can do at this point. Again, there is a stream: I can see it, and the hello camera notebook works well. Then, when training begins, after I've captured 60 images with the CSI camera, everything basically hangs and I have to reboot the Jetson.

Hi @juliefaurelacroix, we will investigate this issue further, but in the meantime could you try launching the DLI container with these additional options on your docker run command line:

--memory=500m --memory-swap=3G

You can try experimenting with these values to see if it improves the behavior. It would be helpful for us to know how it acts on your setup. Thanks!
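
For context: Docker's --memory flag caps the container's RAM, while --memory-swap caps RAM plus swap combined, so --memory=500m --memory-swap=3G leaves roughly 2.5G of swap headroom on top of 500MB of RAM. A standard way to watch those limits in action (not specific to the DLI container) is to run this from a second terminal:

sudo docker stats --no-stream   # one-shot snapshot of each container's memory usage against its limit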

Thank you for your reply. Setting the memory and swap values does seem like it could help, but so far I'm not getting results. The stream lasts a little longer at the beginning of the training step but doesn't come back afterwards. I set it to --memory=1G --memory-swap=4G just to see what would happen, and now that training is done, free -m gives me roughly 1688 used mem and 2050 used swap, and the stream hasn't come back. Live prediction gave a couple of predictions right after training despite the camera feed being frozen, but it's dead now.

Edit: uhhhhhh so... it magically fixed itself?
OK, so I've flashed a bigger SD card, set the swap to 8G, and... it's weird.
Setting mem and swap doesn't seem to have any effect on the actual memory and swap used. On my first attempt at this new setup I didn't set anything. Didn't work. Second attempt, I did 500M + 8G. Didn't work. Every time, free -h -s 3 showed me on average 1.7G mem and 1.7G swap used, so hardly a need for 8G unless it peaks and crashes within the span of 3 s. Which, I mean, could happen.

Then... then I thought, you know what, it's 2020, it's 7 pm, I've been inside for over six months now, so the next logical thing is to try the same thing and see if I get a different result. And I... did?

It works. I don't know what kind of sorcery this is, but I rebooted, did the exact same steps, and now it works. I even deleted the whole borking dataset and rebooted again, because I told myself you can't let this possessed Jetson win. Still works. I have no idea. What.


Alright, so here's the solution I found.

tl;dr

  1. Flash a large SD card
  2. Make sure your camera works
  3. Set your swap to anything 8GB or larger
  4. Set the resources for the container when you run it

The long version

  1. Flash a large SD card

Because we're going to use at least 8GB for swapping, and potentially more in the future, I went for a 128GB SD card. The larger the better, is my guess.
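
Once the Nano has booted from the new card, a quick sanity check I like (my own habit, not an official tutorial step) is confirming that the root filesystem actually expanded to use the whole card:

df -h /   # the root filesystem size should roughly match the card's capacity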

  2. Make sure your camera works

Yeah, that one sort of puzzled me because of the whole unresponsive-page situation, but I figured out that I could tell whether the camera was working just by running this:

gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink

If the output you're getting looks like this:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
   Camera index = 0
   Camera mode  = 2
   Output Stream W = 1920 H = 1080
   seconds to Run    = 0
   Frame Rate = 29.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.

you're good. If you're getting any sort of error, try reseating your camera and make sure the ribbon cable is facing the right way. If, along the way, your camera stops responding and you get an error when running this command, the only solution I can offer is to reboot.
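
As an extra sanity check before blaming the notebooks (again my own habit, not an official step), you can confirm the sensor actually registered a video node; v4l2-ctl comes from the v4l-utils package:

ls /dev/video*                     # the CSI sensor should show up as /dev/video0
sudo apt-get install -y v4l-utils  # only if v4l2-ctl isn't installed yet
v4l2-ctl --list-devices            # lists each video node with its driver name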

  3. Set your swap to anything 8GB or larger

Now, the instructions differ somewhat from my experience, so here's what I did. I set up the Jetson with a screen and a keyboard as normal, and I selected the default swap size. I'm assuming you could choose not to create a swap file and then do the next steps, but using the default options worked for me. I ended up with a swap of 4GB, give or take, which can be verified by running:

free -h 

the -h stands for "human readable" and it's just to make it easier to figure out how much we've got without having to convert from kibibi or mebibi or whatever bibi option they chose (don't come at me, I find it hilarious to call them bibis and there's nothing you can do to change my mind).

Then I basically followed the steps in the tutorial.

sudo systemctl disable nvzramconfig
sudo fallocate -l 4G /mnt/4GB.swap
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap

I went for 4GB here because (1) I already had 4GB and figured 8GB total would be enough, and (2) I'm lazy. Then:

mkdir -p ~/nvdli-data

sudo vim /etc/fstab

then press "i" to enter insert mode, navigate to the end of the file with the arrow keys, and add this line:

/mnt/4GB.swap swap swap defaults 0 0

then press Esc and type ":wq" to write and quit. That appends the line to /etc/fstab, the file that tells the system which filesystems and swap areas to mount at boot.
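
If you'd rather not deal with vim at all, the same line can be appended non-interactively (an equivalent alternative, not from the tutorial):

echo '/mnt/4GB.swap swap swap defaults 0 0' | sudo tee -a /etc/fstab   # append the swap entry to fstab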

We're now at the 4GB defined when we first set up the Jetson plus the 4GB we just added. I haven't checked whether it makes a difference to create just one big swap file instead, but since it's fairly small and we're not looking at stellar performance anyway, it might not be worth the hassle. I found that 8GB total seems to be the minimum here.
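
If you don't want to wait for a reboot to pick up the fstab entry, you can enable the new swap file right away and confirm the total by hand (standard util-linux commands, not part of the tutorial):

sudo swapon /mnt/4GB.swap   # activate the new swap file immediately
swapon --show               # list active swap areas and their sizes
free -h                     # total swap should now read about 8G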

  4. Set the resources for the container when you run it

For some weird reason that I can't explain right now, the container runs better if you cap what it can use when running it, even though I haven't seen a difference in actual usage with or without the arguments. I'm assuming some allocation can't be pushed out to swap because swap is slower, but note that setting --memory-swap to anything less than 8G didn't work for me, even though only about 1.7GB of swap gets used. Very strange.

Now, if you've done everything using the GUI, with a screen and a keyboard plugged directly into the Jetson, you'll want to unplug all that and SSH into your machine. Running a window manager (what you see on the screen) hogs your precious 2GB of RAM and isn't required. So the final command to run is:

sudo docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-nano/data --volume /tmp/argus_socket:/tmp/argus_socket --memory=500M --memory-swap=8G --device /dev/video0 nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4

I pinned the version to v2.0.1-r32.4.4 so you can just copy-paste it as of November 2020, but be careful with the version, people of the future! How did 2020 end, by the way? Hope it didn't get any worse, because wow, what a roller coaster.
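
If you get tired of retyping that monster, you could drop it into a small script. This is just a convenience sketch, and the file name docker_dli_run.sh is my own invention:

#!/usr/bin/env bash
# docker_dli_run.sh - hypothetical wrapper; update the image tag to match your L4T version
sudo docker run --runtime nvidia -it --rm --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    --volume /tmp/argus_socket:/tmp/argus_socket \
    --memory=500M --memory-swap=8G \
    --device /dev/video0 \
    nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4

Make it executable with chmod +x docker_dli_run.sh and launch it with ./docker_dli_run.sh.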

And there you have it! You can play around with the tutorials, but remember to select the CSI camera in the notebooks or it won't work.

Phew!


Increasing the memory does not work for me.
A friend had the same issue; he bought a Logitech camera and it worked.

Hi there, what camera are you using right now?

I am "in between" cameras right now.
The CSI Raspberry Pi v2 camera is not working for me (it freezes after the Thumbs training).
Just bought a Logitech C270 camera as recommended. Will try it over the holidays.

Tried increasing memory to 10GB (8GB swap + 2GB on board), to no avail.

Hm hm, and just so we cover all the bases, you did add
--memory=500M --memory-swap=8G
to the docker run command?
I'm also getting a Logitech camera; it looks like it might be the simplest solution...
I'm curious to know whether it fixes it for you!


Good suggestion. I will try it tonight. :)

Thanks, Julie. It works.

You are awesome. Thanks again.

Happy to help!


Save yourself two days of debugging: following the standard instructions will lead you straight into the CSI camera issue due to lack of memory. A 4-5GB swap won't cut it either. I created a 10GB swap, and the problem was solved with this command.

Jetson Nano - Use More Memory! - YouTube ← add swap

sudo docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-nano/data --volume /tmp/argus_socket:/tmp/argus_socket --memory=500M --memory-swap=8G --device /dev/video0 nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4


Glad it worked for you too! I think something needs to be fixed in the container, because it crashes even though the RAM and swap aren't full, but since it's a tutorial it might not be worth it for users to try to fix it themselves. @dusty_nv might want to let us know in this thread whenever it is fixed, so I can mention it in the solution and people will be able to pull the updated version.


Thanks for posting this fix.

It worked for me, but only after I switched to an SD card with fast read/write speeds (a SanDisk Extreme Pro).


Oh yeah! As with the Raspberry Pi, you'll want a fast SD card. Good catch!

UHS-1 should be the minimum. I'll edit the solution accordingly so people don't make that mistake!

Weird, looks like I can't edit it anymore. Oh well, here's a link to a page that lists some SD cards that have been tested, in case anyone wonders: Best MicroSD Card for NVIDIA Jetson Nano - Accessories Tested


Hi Dusty, did you get a chance to look into this? I am having the same issue with the RPi V2 camera.
I've increased my swap file to 12G.
I've added "--memory=500m --memory-swap=8G" to the shell command that runs Docker.
I've followed Dana's video tutorials to the letter.
I'm using the exact same SD card as in the tutorial.
I'm positive the camera ribbon cable is well seated and secure, as I've manipulated it while the camera was live and it caused no issues.
Everything works perfectly until the training phase, when the camera starts to glitch and then locks up.
The Jupyter notebook throws two errors:

22:49:23

Exception in thread Thread-4:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/camera.py", line 34, in _capture_frames
    self.value = self._read()
  File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py", line 40, in _read
    raise RuntimeError('Could not read image from camera')
RuntimeError: Could not read image from camera

22:49:38

/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:59: UserWarning: This overload of nonzero is deprecated:
nonzero(Tensor input, *, Tensor out)
Consider using one of the following signatures instead:
nonzero(Tensor input, *, bool as_tuple) (Triggered internally at …/torch/csrc/utils/python_arg_parser.cpp:766.)

The first appears partway through training, the second at the end.

Testing the camera after it has locked gives:

mix@mix-desktop:~$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:656 Failed to create CaptureSession

(gst-launch-1.0:9406): GStreamer-CRITICAL **: 01:40:15.999: gst_mini_object_set_qdata: assertion 'object != NULL' failed
Got EOS from element "pipeline0".
Execution ended after 0:00:00.058950719
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Rebooting the Nano is the only way to get the camera working again.

Apologies for the long message but I’m quite frustrated that the demo tutorial doesn’t work on the kit as designed.

Any advice would be appreciated.

Thanks Gerard Spillane.

Hi @mickeyslightworks, can you also try running this command and rebooting:

$ sudo systemctl set-default multi-user.target

This will prevent the X-server from starting, which in my experience saves a nice chunk of memory. Even if you have the display disconnected, the X-server can still start unless you shut it down like this.
Here are the commands to disable/re-enable it: xorg - How to disable GUI on boot in 18.04 (Bionic Beaver)? - Ask Ubuntu
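
For reference, the counterpart command to bring the desktop back later uses the standard systemd graphical target (same approach as in the link above):

sudo systemctl set-default graphical.target   # restore the GUI on the next boot
sudo reboot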
