I am aiming to develop a platform for a personal sensor. As a first step, I am building a prototype, which is causing me some trouble.
My prototype is composed of an evaluation board for the Lattice CrossLink FPGA (the Master Link Board) and a Jetson Nano. The FPGA generates an image which is sent over MIPI CSI-2 to the Jetson Nano.
The stream characteristics are:
→ The image is sent in RAW8 format (GRAYSCALE8)
→ The resolution is 320x240
→ The data are sent over 2 MIPI lanes
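For reference, a quick sanity check of the expected frame size under these settings (plain arithmetic, nothing Jetson-specific):

```python
# One RAW8 pixel is exactly one byte, so the frame size is just W x H.
WIDTH, HEIGHT = 320, 240
BYTES_PER_PIXEL = 1  # RAW8 / GREY: no Bayer mosaic, no packing

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(frame_bytes)  # 76800 bytes per frame
# The 2 MIPI lanes only split this payload for transport; the total is unchanged.
```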
The main difference is that I started from the imx219 driver and dtsi (I modified it to remove I2C). I also added the GRAYSCALE8 format on the Jetson Nano.
Currently, I am trying to receive the data from the FPGA.
I tried to display the video stream using GStreamer, in particular with this command:
you may try saving the stream locally and checking its content,
for example, $ v4l2-ctl -d /dev/video0 --set-fmt-video=width=320,height=240,pixelformat=GREY --set-ctrl bypass_mode=0 --stream-count=1 --stream-to=test.raw
is there any failure reported?
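to check the saved file, something like this small sketch may help (`inspect_raw_frame` is a hypothetical helper, not part of any Jetson tool; it assumes the capture above wrote a single 320x240 GREY frame to `test.raw`):

```python
import os

def inspect_raw_frame(path, width=320, height=240):
    """Check that a RAW8/GREY capture has the expected size and report
    basic pixel statistics. An all-zero (or all-0xFF) frame usually
    means the CSI receiver did not capture meaningful data."""
    expected = width * height  # one byte per pixel in RAW8
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        frame = f.read(expected)
    return size, expected, min(frame), max(frame)

# Example after the v4l2-ctl capture above:
# size, expected, lo, hi = inspect_raw_frame("test.raw")
```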
please execute the following to verify the sensor stream.
for example, $ v4l2-ctl -d /dev/video0 --set-fmt-video=width=320,height=240,pixelformat=GREY --set-ctrl bypass_mode=0 --stream-count=300
please have a try; it’ll show < for each successfully captured frame, and report the average frame-rate each second below the output.
thanks
hello @JerryChang,
Oh, OK!
I added the --stream-mmap option.
I get the < for each frame successfully, but I don’t get the average frame rate, as you can see:
Here is an update: I tried with a high number for --stream-count.
We get a frame rate after 15,585 <, which I do not really understand…
Here is the output of the command (I only kept it up to the point where the first frame rate appears)
Hi @JerryChang,
I thought about it over the weekend and I have a few questions…
As input to our Jetson, we have 240 lines of 320 pixels, each pixel represented by a value ranging from 0 to 255.
We don’t have a CCCC Bayer format, but only one value (not four) per pixel.
So how does it work on the Nano: does it need a special format like CCCC as input?
And is it possible to “fool” the Jetson by configuring it for 160x120 as if the input were a CCCC or even RGGB format?
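To make the byte counts concrete (assuming 8-bit samples throughout; this is just arithmetic, not a claim about what the VI hardware accepts):

```python
def raw8_frame_bytes(width, height):
    # In RAW8, every pixel site is one byte whether the data is grayscale
    # or a Bayer mosaic; the pattern (CCCC, RGGB, ...) changes how each
    # byte is *interpreted*, not how many bytes a frame carries.
    return width * height

print(raw8_frame_bytes(320, 240))  # 76800 bytes: GREY or RGGB at 320x240
print(raw8_frame_bytes(160, 120))  # 19200 bytes: only a quarter of the payload
```

So a 160x120 configuration would expect far fewer bytes per frame than the FPGA actually sends.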
this looks incorrect. you may examine the signaling between start-of-frame and end-of-frame.
or,
on the software driver side, it’s using a sync-point to wait for the hardware signal.
please add some debug messages to check nvhost_syncpt_wait_timeout_ext for the frame start.
for example, $L4T_Sources/r32.5/Linux_for_Tegra/source/public/kernel/nvidia/drivers/media/platform/tegra/camera/vi/vi2_fops.c
static int tegra_channel_capture_frame_single_thread(
...
	chan->capture_state = CAPTURE_GOOD;
	for (index = 0; index < valid_ports; index++) {
		err = nvhost_syncpt_wait_timeout_ext(chan->vi->ndev,
			chan->syncpt[index][0], thresh[index],
			chan->timeout, NULL, &ts);