How can I set the frame rate and frame size in GStreamer for segnet-camera?

CSI: Raspberry Pi v2.1 camera.
64 GB SD card.
Power supply: 5 V 4 A barrel jack.

I can run ./imagenet-console and ./imagenet-camera

I waited long enough for TensorRT to generate the cache file for segnet-camera.

When I go to run ./segnet-camera it initially drops frames and the entire system comes to a crawl. No other processes are open. I’m assuming this is due to the large frame size and frame rate.

Is there a way to speed up segnet-camera? I'm assuming I could set the camera's GStreamer settings in gstCamera.cpp and recompile so it uses a smaller frame size and a lower rate. Has anyone successfully run the segnet-camera demo?

I have not tried adding a swap file, should this be done for running inference?

Hi,

Would you mind checking this tutorial for the Raspberry Pi v2.1 camera first?
https://www.jetsonhacks.com/2019/04/02/jetson-nano-raspberry-pi-camera/

If the camera behavior is acceptable, please update the corresponding GStreamer command here:
jetson-utils/gstCamera.cpp at 2fb2b9dfd8323f99c22d3e2755b88345abd2f3a8 · dusty-nv/jetson-utils · GitHub

If not, please share your experiment results with us.

Thanks.

I was able to modify the file successfully, but the change did not affect the frame rate of the rendering.

Unfortunately, the Raspberry Pi v2.1 camera has a minimum frame size of 1280 x 720, with an associated frame rate of 120 fps. I was unable to affect the frame rate from the RPi. It appears that segnet has an output of only 1280 x 720? But I was unable to find the display utility that segnet uses for the rendering.

What I found was that adjusting the GStreamer command in jetson-utils/gstCamera.cpp at 2fb2b9dfd8323f99c22d3e2755b88345abd2f3a8 · dusty-nv/jetson-utils · GitHub

did not affect the frame rate; the segnet/gstCamera code compensated and selected the camera's default raw video frame rate. Maybe I didn't edit the correct lines, because even though I chose a smaller frame width and height, gstCamera would still select "mode 4" from the RPi settings.

All in all, I was unable to speed up the rendered frames, but after initializing the TensorRT cache a few more times it did seem to respond much faster, starting with the third initialization.

@AastaLLL, update: I was able to use a Logitech USB camera, which supports a lot more frame sizes and frame rates.

I was able to use a smaller frame with a lower frame rate, but I can't seem to find the correct display file to adjust the rendered on-screen display.

I noticed segNet seems to output a 1280x720 frame size.

Here's where I'm at: when I run segNet with an input frame size of 620 x 360 at 15 fps, the render still shows the same ~1.1 fps.

But it also renders two frames. Is this a result of the segNet output, or of the display render output? Is jetson-utils/glDisplay.cpp at master · dusty-nv/jetson-utils · GitHub the correct file to adjust if I want to modify the rendered display in the segnet-camera example?

I was finally able to make progress on some changes. The first change was using a Logitech USB camera (Tessar 2.0/3.7); it worked on boot and has extensive frame-size and frame-rate options.

I modified the gstCamera.cpp argument string with the video size I wanted from the camera (the network still says it started the camera at 1280 x 720; I need to investigate that).

I modified glDisplay.cpp to not query the screen size and instead set the width and height of the visual window with hard-coded values (I'm going to modify it to take argv[1] and argv[2] and supply the width and height when launching the segnet command).

While I was reading through the files, I noticed that segNet.cpp checks for an argv[3], and if it names a valid fcn-alexnet model from the /networks folder, it will initialize that network.

I checked out "fcn-alexnet-pascal-voc"; I was able to get 7.8-8.0 fps running that network at a 640x480 frame size.

Unfortunately, changing the camera or the display when running fcn-alexnet-cityscape-hd still only puts out 1.2-1.5 fps.

Hi,

Sorry for the late update.

fcn-alexnet-cityscape-hd is a segmentation-type model, which may be slower due to its complexity.
It looks like you already fixed the camera issue. Is there anything else we can help with?

Thanks.

Hello AastaLLL,

I am facing a similar issue with the Raspberry Pi v2.1 camera attached to the Jetson Nano. I am trying to run the imagenet-camera.py program at 30 fps, but it is currently running at around 65 fps. I looked at gstCamera.cpp, and the fps is set to 30/1.

ss << "nvcamerasrc fpsRange=\"30.0 30.0\" ! video/x-raw(memory:NVMM), width=(int)" << mWidth << ", height=(int)" << mHeight << ", format=(string)NV12 ! nvvidconv flip-method=" << flipMethod << " ! "; //'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! ";

While running imagenet-camera.py the GST_ARGUS outputs this:
Running with following settings:
camera index = 0
camera mode = 4
output stream w = 1280 h = 720
seconds to run = 0
frame rate = 120.000005

Why is it running at 120 fps? How do I limit this to 30 fps?

Hi anadig,
You may have checked the wrong string. The correct string should start with nvarguscamerasrc, not nvcamerasrc.

I am facing the same issue. As you specified, the string should start with nvarguscamerasrc. I am using that, but I am still not able to set the framerate. As the code starts, it prints:

Running with following settings:
camera index = 0
camera mode = 4
output stream w = 1280 h = 720
seconds to run = 0
frame rate = 120.000005

How do I set a lower fps?

Hi,
That is the operating sensor mode. You should check the real framerate through fpsdisplaysink:

$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=60/1' ! fpsdisplaysink text-overlay=false videosink=nvoverlaysink -v
$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! fpsdisplaysink text-overlay=false videosink=nvoverlaysink -v