Since NVIDIA does not open up the ISP-related details, could you at least provide a reference driver for a YUV camera?
I did write a driver for the MAX9286 (YUV422 output). It creates /dev/video0, but when I run v4l2-ctl I get the error "VIDIOC_G_FMT: failed: No such device". I have modified many places and still get the same error!
Are you working on r24.2.1? And could you list the exact commands you entered?
It works now, the MAX9286 is up and displaying images! But the display is 5120x720 (4 cameras side by side). I want to change it to a 2560x1440 display!
You need to modify the V4L2 driver to report this resolution and configure the MAX9286 to output the corresponding resolution.
My problem is that I cannot change the resolution at the driver layer: Maxim told me the MAX9286 can only arrange the 4 cameras from left to right, i.e. 5120x720, and cannot output a 2560x1440 image. So I plan to implement the 2560x1440 display and compressed storage in the application instead!
That sounds like post-processing, right? That means you need to implement your own code to reorganize the frame.
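Reorganizing the frame here is just row-wise memory copies: each row of the 5120x720 frame holds the four cameras side by side, and the target 2560x1440 frame stacks them as a 2x2 grid. A minimal sketch, assuming packed YUYV (2 bytes per pixel) and a hypothetical `repack_2x2` helper; per-camera width/height are parameters, not fixed values:

```c
#include <stdint.h>
#include <string.h>

/* Repack a 4-camera side-by-side YUYV frame (4*cam_w x cam_h,
 * cameras 0..3 left to right) into a 2x2 grid (2*cam_w x 2*cam_h).
 * YUYV packs 2 pixels into 4 bytes, so a row of cam_w pixels
 * occupies cam_w * 2 bytes. */
void repack_2x2(const uint8_t *src, uint8_t *dst, int cam_w, int cam_h)
{
    const int src_stride = 4 * cam_w * 2;   /* 4 cameras per source row */
    const int dst_stride = 2 * cam_w * 2;   /* 2 cameras per dest row */
    const int cam_bytes  = cam_w * 2;       /* bytes per camera per row */

    for (int cam = 0; cam < 4; cam++) {
        int grid_col = cam % 2;             /* left or right column */
        int grid_row = cam / 2;             /* top or bottom row */
        for (int y = 0; y < cam_h; y++) {
            const uint8_t *s = src + y * src_stride + cam * cam_bytes;
            uint8_t *d = dst + (grid_row * cam_h + y) * dst_stride
                             + grid_col * cam_bytes;
            memcpy(d, s, cam_bytes);
        }
    }
}
```

For the real frames this would be called with cam_w = 1280, cam_h = 720; the copy is cheap enough for real time, though a zero-copy path through the VIC (discussed below in the thread) avoids the CPU entirely.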
Yes, post-processing is needed, and the key part is H.264 compression and storage. Is there any reference code for this?
I don't really know how you will implement your post-processing; currently all of the sample code is MM API use cases.
http://developer.nvidia.com/embedded/dlc/l4t-multimedia-api-reference-24-2-1
OK, thanks! It's also a real pleasure to meet a Chinese friend here. Thank you very much!
If I want to convert the 5120x720 (4-camera) image to 2560x1440 and then compress and store it, I could use the VIC (Video Image Compositor), but the key question is where to get the relevant code. The MM API probably doesn't cover the VIC, and besides, my images are in YUV format!
There's a sample, 07_video_convert, for the VIC use case. You can also refer to the topic below.
https://devtalk.nvidia.com/default/topic/966011
I've looked at the 07 sample, and it can indeed convert the video resolution, but I'm working with live cameras here: I need real-time transfer, and the back end also has to feed the encoder for compression and storage!
Yes, that's right. Currently we don't have sample code for that camera pipeline. You will need to reference those samples to implement your own.
At yesterday's Jetson meetup in Beijing I raised this question, and I was told a single function could handle it. But with a camera plus encoding, won't it get very complicated? I'm a bit lost!
Combining 10_camera_recording (camera capture with encoded output) and 07_video_convert (resolution conversion) from the Multimedia API seems like it could meet my requirements, but for our incoming YUV 422 camera data, I don't know whether merging the code of those two samples will hit obstacles, or how difficult it would be.
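Conceptually, merging the two samples boils down to one per-frame loop. The skeleton below is only an illustration of that structure; every function in it is a hypothetical placeholder, NOT a Multimedia API call. In a real port, the capture stub would be the V4L2 dequeue from 10_camera_recording, the convert stub the VIC step from 07_video_convert, and the encode stub the encoder feed:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t *data; size_t size; } frame_t;

/* Hypothetical stubs standing in for the real sample code. */
static bool capture_yuv_frame(frame_t *f)  { (void)f; return false; }
static void convert_resolution(frame_t *f) { (void)f; }
static void encode_h264(const frame_t *f)  { (void)f; }

/* Run the pipeline for at most max_frames; returns frames processed. */
int run_pipeline(int max_frames)
{
    int processed = 0;
    frame_t f = {0};
    while (processed < max_frames && capture_yuv_frame(&f)) {
        convert_resolution(&f);  /* e.g. 5120x720 -> 2560x1440 via VIC */
        encode_h264(&f);         /* compressed bitstream to storage */
        processed++;
    }
    return processed;
}
```

The main practical difficulty is usually not the loop itself but buffer plumbing: both samples use DMA buffers, so the convert stage's output buffers must be handed to the encoder's input queue without extra copies.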
Hi,
We were able to get the Serializer/Deserializer working, so it is possible to use the MAX96705 / MAX9286 combination. You can read more about it here:
-David