We are looking for an embedded OpenCV platform for our project (a security-centric use case), and are evaluating the Jetson TX1 for that purpose. Our requirements are as follows: we will acquire video from four cameras located at four distant points (up to 20 ft from the board), each connecting to the embedded platform over its own independent WiFi channel. Each camera would send out a ".mp4" bit stream (1080p @ 30 fps @ 10 Mbps), which would need to be decompressed before being processed with OpenCV APIs for end applications such as uniform detection and gesture recognition. In this context, I have the following questions:
1) Can 4 independent bit streams be received over 4 independent WiFi channels on the TX1 board? For example, can we attach external WiFi modules over USB to get up to 4 bit streams in?
a) If so, where can I find the right drivers/APIs to interface with the WiFi connections and park the bit streams at a desired location in memory?
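To make question 1 concrete, here is a minimal sketch of the receive side we have in mind, assuming each camera streams over UDP and each USB WiFi adapter shows up as its own network interface; the interface names (wlan0..wlan3) and ports are hypothetical:

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <cstring>

// Hypothetical setup: one UDP socket per WiFi adapter, one port per camera.
int open_camera_socket(const char* ifname, int port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return -1;
    // Pin the socket to one wireless interface so each camera's bit stream
    // is received independently of the other three (needs root / CAP_NET_RAW).
    setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, ifname, strlen(ifname));
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;  // recv() on this fd lands the bit stream in a buffer we choose
}
```

recv() on each returned descriptor would then let us "park" each bit stream wherever we want in memory.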
2) Where are the drivers/APIs for working with the TX1 decoders? How do we use the decoder for multi-context/multi-channel decoding? Does the decoder necessarily come coupled with a display? If so, how do we isolate the decoder from the display, since we are only interested in getting the decoded frames for OpenCV processing?
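For context on question 2, this is the kind of arrangement we are picturing: a sketch only, assuming the streams carry H.264 over RTP, the TX1 hardware decoder is reachable from GStreamer (e.g. via the omxh264dec element), and OpenCV is built with GStreamer support, so that decode terminates in an appsink rather than a display sink; the port numbers are made up:

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main() {
    std::vector<cv::VideoCapture> caps;
    int ports[] = {5000, 5001, 5002, 5003};  // hypothetical: one RTP port per camera
    for (int port : ports) {
        std::string pipe =
            "udpsrc port=" + std::to_string(port) +
            " caps=\"application/x-rtp,media=video,clock-rate=90000,encoding-name=H264\""
            " ! rtph264depay ! h264parse"
            " ! omxh264dec"                            // hardware decode, no display element
            " ! nvvidconv ! video/x-raw,format=BGRx"   // convert out of the decoder's YUV
            " ! videoconvert ! video/x-raw,format=BGR"
            " ! appsink";                              // hand decoded frames to the app
        caps.emplace_back(pipe, cv::CAP_GSTREAMER);
    }
    cv::Mat frame;
    while (true) {
        for (auto& cap : caps) {
            if (cap.read(frame)) {
                // frame is an ordinary BGR cv::Mat; run uniform/gesture detection here
            }
        }
    }
}
```

In practice each capture would probably get its own thread so a stalled camera does not block the other three; the point of the sketch is that the decoder output ends in appsink, with no display anywhere in the loop.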
a) Where can I read up about the decoded frames' pixel formats (is the YUV data fully planar, or semi-planar NV12/NV21)?
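To make the format question concrete, our current understanding is that NV12 is semi-planar (a full-resolution Y plane followed by one interleaved, half-resolution UV plane), which could be wrapped for OpenCV roughly as below; decoded_buf is a hypothetical pointer to the decoder's output, and the sketch assumes tightly packed rows (hardware decoders often pad the line pitch):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// NV12 at 1080p: 1920x1080 Y bytes followed by 1920x540 interleaved UV bytes,
// i.e. 1.5 bytes per pixel overall.
cv::Mat nv12_to_bgr(unsigned char* decoded_buf, int width, int height) {
    cv::Mat nv12(height * 3 / 2, width, CV_8UC1, decoded_buf);  // wraps, no copy
    cv::Mat bgr;
    cv::cvtColor(nv12, bgr, cv::COLOR_YUV2BGR_NV12);
    return bgr;
}
```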
3) Assuming we can run OpenCV routines on the CPU and/or GPU, would both have access to the frame buffers in memory? Do we need to cross any security layers in the OS/drivers to access the frame buffers from the CPU or GPU?
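To show what we would like to do on the GPU side, here is a sketch assuming OpenCV's CUDA modules are available on the TX1. Part of our question is whether the explicit upload below is even necessary, given that the CPU and GPU share physical memory on Tegra, or whether a zero-copy mapping is possible:

```cpp
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

void process_on_gpu(const cv::Mat& bgr) {
    cv::cuda::GpuMat d_bgr, d_gray;
    d_bgr.upload(bgr);  // explicit CPU -> GPU copy of the frame buffer
    cv::cuda::cvtColor(d_bgr, d_gray, cv::COLOR_BGR2GRAY);
    // ... GPU-side detection kernels would run here ...
}
```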