I am working on an embedded application using a Tegra TX1 (JetPack 3.1). I have an Avermedia CM313B (Mini PCI-e hardware-encode frame grabber with 3G-SDI) that grabs frames from an SDI camera (HD, 30 fps). The CM313B ships with drivers, so the camera appears as /dev/videoX and supports V4L2. The driver can output either YV12 or MPEG format.
My goal is to process the video with VisionWorks (stabilization + tracking) and then stream it over Ethernet. I am stuck at the first step: how do I efficiently transfer the video to the GPU so that I can use VisionWorks on it? Do you have any suggestions on which approach to use?
Some additional notes:

- Running the VisionWorks sample on the TX1 gives a segmentation fault (nvx_demo_video_stabilizer --source="device:///v4l2?index=0"). The same command works on the TK1, but very slowly (just a few fps).
- I can capture and display the video with GStreamer with no visible latency: gst-launch-1.0 v4l2src device=/dev/video0 ! xvimagesink
- Streaming at 30 fps also works with GStreamer: gst-launch-1.0 v4l2src device=/dev/video0 ! decodebin ! videoconvert ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! rtph264pay mtu=1400 ! udpsink host=X.X.X.X port=1234. However, latency differs between the boards: the TK1 streams with very low latency, while the TX1 shows roughly 1 s of latency. Any idea why?
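For the TX1 latency, one thing I plan to try (an untested guess, so element caps may need adjusting for the CM313B): drop decodebin and videoconvert, since v4l2src can deliver raw YV12 directly, and use nvvidconv to move frames into NVMM memory for the hardware encoder, with sync=false on the sink so it does not throttle on timestamps. Something like:

```shell
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! 'video/x-raw, format=YV12, width=1920, height=1080, framerate=30/1' \
  ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' \
  ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' \
  ! h264parse ! rtph264pay mtu=1400 \
  ! udpsink host=X.X.X.X port=1234 sync=false
```

If the software videoconvert on the original pipeline was the bottleneck on the TX1, removing it and keeping the conversion on the hardware path might explain (and remove) the extra second of latency.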