Hello!
We are trying to build an IP camera on a Jetson Nano board.
I downloaded the Tegra_Multimedia_API_R32.2.0_aarch64.tbz2 file and unpacked it; it contains several examples in the samples directory. Very good examples, but …
Unfortunately, all the examples are based on the Argus API, which is based on GStreamer, and we can’t use GStreamer because it has very poor performance. So I am trying to use the V4L2 API to capture from the sensor and live555 to serve the RTSP stream.
According to the 10_camera_recording sample I should set up the output and capture planes via setupPlane(). According to NvV4l2ElementPlane.h, this call ‘encapsulates all the method calls required to set up the plane for streaming. Calls reqbuf internally. Then, for each of the buffers, calls #queryBuffer, #exportBuffer and maps the buffer/allocates the buffer memory depending on the memory type.’ But I can’t pass a file descriptor to this call, because the fd is a private field and the class has no method to set it. So I don’t understand how to capture frames from the camera via the native V4L2 API, pass them to NvVideoEncoder, and then get the encoded frames back.
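For context, this is roughly how I capture raw frames with plain V4L2 today (a trimmed sketch, error handling omitted; the device path, resolution, and buffer count are just assumptions on my side):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main()
{
    int fd = open("/dev/video0", O_RDWR);            // device path is an assumption

    struct v4l2_format fmt = {};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 1920;
    fmt.fmt.pix.height = 1080;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB10; // 'RG10' raw Bayer
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    struct v4l2_requestbuffers req = {};
    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    void *bufs[4];
    for (unsigned i = 0; i < req.count; ++i) {
        struct v4l2_buffer buf = {};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);
        bufs[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, buf.m.offset);
        ioctl(fd, VIDIOC_QBUF, &buf);
    }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);

    for (;;) {
        struct v4l2_buffer buf = {};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_DQBUF, &buf);  // raw Bayer frame now in bufs[buf.index]
        // ...this is the frame I want to get into NvVideoEncoder...
        ioctl(fd, VIDIOC_QBUF, &buf);
    }
}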
Can you explain this? How can I link NvVideoEncoder with V4L2 and live555?
Thank you, and excuse my bad English.
Hi,
Please share information about your camera:
$ v4l2-ctl -d /dev/video1 --list-formats-ext
I am not sure, but an IP camera is usually an RTSP source, so you would need to decode the stream, not encode it.
Hello, DaneLLL, thank you for your reply!
Unfortunately I can’t access the system at the moment, so I am developing on my host machine and cross-compiling.
As far as I know, we use the Raspberry Pi CMOS camera module, and my colleague checked it earlier with the samples from the tegra_multimedia package.
Hi brilliantov,
Any progress? Is this still an issue?
$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'RG10'
    Name        : 10-bit Bayer RGRG/GBGB
        Size: Discrete 3280x2464
            Interval: Discrete 0.048s (21.000 fps)
        Size: Discrete 3280x1848
            Interval: Discrete 0.036s (28.000 fps)
        Size: Discrete 1920x1080
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 1280x720
            Interval: Discrete 0.017s (60.000 fps)
        Size: Discrete 1280x720
            Interval: Discrete 0.017s (60.000 fps)
Hi,
Your camera is a Bayer sensor. It should be connected to the CSI ports of the Nano. For your use case, you may refer to 10_camera_recording and integrate it with live555.
Going through V4L2 directly, you will get raw Bayer buffers and will need to implement de-bayering yourself to get YUV420.
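For illustration, a naive CPU de-bayer could look like the sketch below. This is a rough example of my own, not code from our samples: it assumes the RGGB layout and 16-bit containers of RG10 shown above, assumes the row stride equals the width (real buffers may be padded; check bytesperline), skips white balance and colour correction, and halves the resolution by collapsing each 2x2 cell. A real pipeline would do this on the ISP or GPU.

#include <cstdint>

// Naive de-bayer sketch: collapse each 2x2 RGGB cell into one pixel and
// emit I420 (BT.601 fixed-point). Output is half the sensor resolution.
// 'bayer' holds 10-bit samples in 16-bit words, as V4L2 RG10 does.
void debayer_rg10_to_i420(const uint16_t *bayer, int bw, int bh,
                          uint8_t *yp, uint8_t *up, uint8_t *vp)
{
    const int ow = bw / 2, oh = bh / 2;            // output size
    for (int oy = 0; oy < oh; ++oy) {
        for (int ox = 0; ox < ow; ++ox) {
            const uint16_t *cell = bayer + (2 * oy) * bw + 2 * ox;
            int r = cell[0] >> 2;                  // R at (0,0), 10-bit -> 8-bit
            int g = (cell[1] + cell[bw]) >> 3;     // mean of the two greens
            int b = cell[bw + 1] >> 2;             // B at (1,1)
            yp[oy * ow + ox] =
                (uint8_t)((66 * r + 129 * g + 25 * b + 128) / 256 + 16);
            if ((oy & 1) == 0 && (ox & 1) == 0) {  // 4:2:0 chroma subsampling
                up[(oy / 2) * (ow / 2) + ox / 2] =
                    (uint8_t)((-38 * r - 74 * g + 112 * b + 128) / 256 + 128);
                vp[(oy / 2) * (ow / 2) + ox / 2] =
                    (uint8_t)((112 * r - 94 * g - 18 * b + 128) / 256 + 128);
            }
        }
    }
}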
But my question is: how can I feed data into NvVideoEncoder and get the result back from it if I don’t use libargus?
Hi,
The encoder’s supported input format is YUV420. You have to convert the camera frames to YUV420.
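Feeding the encoder without libargus follows the same pattern as the 01_video_encode sample. With V4L2_MEMORY_MMAP, setupPlane() lets the encoder allocate and map its own buffers, so no external file descriptor is needed: you take an empty buffer from the output plane, copy your YUV420 frame into it, and queue it back; the encoded bitstream comes out on the capture plane. A trimmed sketch (error handling and EOS handling omitted; WIDTH, HEIGHT, and the frame-copy step are placeholders):

#include <cstring>
#include <linux/videodev2.h>
#include "NvVideoEncoder.h"

// Runs on the capture-plane DQ thread with each encoded H.264 frame.
static bool enc_capture_dq_cb(struct v4l2_buffer *v4l2_buf, NvBuffer *buffer,
                              NvBuffer *shared_buffer, void *arg)
{
    NvVideoEncoder *enc = static_cast<NvVideoEncoder *>(arg);
    // buffer->planes[0].data / .bytesused hold the encoded bitstream:
    // hand the frame to your streaming code here, then recycle the buffer.
    enc->capture_plane.qBuffer(*v4l2_buf, NULL);
    return true;
}

int main()
{
    const uint32_t WIDTH = 1920, HEIGHT = 1080;    // placeholders

    NvVideoEncoder *enc = NvVideoEncoder::createVideoEncoder("enc0");
    enc->setCapturePlaneFormat(V4L2_PIX_FMT_H264, WIDTH, HEIGHT, 2 * 1024 * 1024);
    enc->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, WIDTH, HEIGHT);
    enc->setBitrate(4 * 1024 * 1024);

    // With V4L2_MEMORY_MMAP the encoder allocates and maps its own buffers,
    // so no external file descriptor has to be passed in.
    enc->output_plane.setupPlane(V4L2_MEMORY_MMAP, 6, true, false);
    enc->capture_plane.setupPlane(V4L2_MEMORY_MMAP, 6, true, false);
    enc->output_plane.setStreamStatus(true);
    enc->capture_plane.setStreamStatus(true);

    enc->capture_plane.setDQThreadCallback(enc_capture_dq_cb);
    enc->capture_plane.startDQThread(enc);

    // Prime the capture plane with empty buffers for the encoded output.
    for (uint32_t i = 0; i < enc->capture_plane.getNumBuffers(); ++i) {
        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];
        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, sizeof(planes));
        v4l2_buf.index = i;
        v4l2_buf.m.planes = planes;
        enc->capture_plane.qBuffer(v4l2_buf, NULL);
    }

    for (uint32_t i = 0; ; ++i) {
        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];
        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, sizeof(planes));
        v4l2_buf.m.planes = planes;
        NvBuffer *nvbuf;

        if (i < enc->output_plane.getNumBuffers()) {
            // The initial buffers belong to the application until queued once.
            nvbuf = enc->output_plane.getNthBuffer(i);
            v4l2_buf.index = i;
        } else {
            // Afterwards, wait for the encoder to return an empty buffer.
            enc->output_plane.dqBuffer(v4l2_buf, &nvbuf, NULL, 10);
        }
        // Placeholder: copy one de-bayered YUV420 frame into
        // nvbuf->planes[p].data and set nvbuf->planes[p].bytesused.
        enc->output_plane.qBuffer(v4l2_buf, NULL);
    }
}

The capture-plane callback is where you could hand each encoded frame to your live555 source class.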
Hello, DaneLLL!
Can you share such a sample if you have one? It would be best if the code ran on the GPU instead of the CPU, or if the user could switch the target between CPU and GPU.
Thank you.
Hi,
We don’t have such a sample. Other users may share their experiences.
You may also consider using Bayer sensor modules from our partners.
Those modules leverage the hardware ISP for de-bayering.