Wireless camera interface

Hi Folks,

We are looking for an OpenCV-capable embedded platform for our project (a security-centric use case), and are evaluating the TX1 for that purpose. Our requirements are such that we will acquire video from four different cameras. The cameras will be located at 4 distant locations (up to 20 ft from the board) and would connect to our embedded platform over WiFi, using 4 independent WiFi channels. Each camera would send out a “.mp4” bit stream (1080p @ 30 fps @ 10 Mbps), which would need to be decompressed before being processed with OpenCV APIs for end applications like uniform detection and gesture recognition. In this context, I have the following questions -

  1. Can 4 independent bit streams be received over 4 independent WiFi channels into the TX1 board? For example, can we attach external WiFi modules over USB to get up to 4 bit streams in?

    a) If so, where can I find the right drivers/APIs to interface with the WiFi connection and park the bit streams at the desired location in memory?

  2. Which drivers/APIs work with the TX1 decoders? How do we use the decoder for multi-context/channel decoding? Does the decoder necessarily come coupled with the display? If so, how do we isolate the decoder from the display, since we are only interested in getting decoded frames for OpenCV processing?

    a) Where can I read up on the decoder's output frames and their pixel formats (is the YUV data planar or NV12/21)?

  3. Assuming we can run some OpenCV routines on the CPU and/or GPU, would both of them have access to the frame buffers in memory? Do we need to cross any security layers of the OS/drivers to get access to the frame buffers from the CPU or GPU?

Thanks,

Hi dumbogeorge,
Here is a thread about running 4x 1080p25 transcoding on the TX1:
https://devtalk.nvidia.com/default/topic/979908/jetson-tx1/gstreamer-transcoding-performance-issue

But it is still different from your use case.

We now have the GStreamer framework and the MM APIs for HW decoding; please refer to the documentation and check which one is good for your use case.

https://developer.nvidia.com/embedded/dlc/l4t-documentation-24-2-1
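
As a rough sketch of the GStreamer route (my own example, not an official sample), the snippet below receives one H.264/RTP stream over UDP, HW-decodes it, and hands BGR frames to OpenCV through appsink. The element names (omxh264dec, nvvidconv), the RTP caps, and the UDP port are assumptions that depend on your L4T release and on how your WiFi cameras actually stream; OpenCV also has to be built with GStreamer support (OpenCV 3.x syntax shown).

```cpp
// Sketch: receive one H.264/RTP stream over UDP, HW-decode it, and pull
// BGR frames into OpenCV. Element names (omxh264dec, nvvidconv) and the
// UDP port are assumptions; adjust for your L4T release and camera setup.
#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    // GStreamer pipeline string handed to OpenCV's GStreamer backend.
    std::string pipeline =
        "udpsrc port=5000 caps=\"application/x-rtp,media=video,"
        "encoding-name=H264,payload=96\" ! "
        "rtph264depay ! h264parse ! omxh264dec ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=2";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        // frame is a CPU-accessible BGR buffer; run OpenCV processing here,
        // e.g. uniform detection or gesture recognition. One such pipeline
        // would be opened per camera/channel.
    }
    return 0;
}
```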

Hi DaneLLL,
Thanks for your response. Yes, the transcoding link is not quite our use case, though I find it helpful; at least it validates the performance requirements.

I went through the camera capture section of your GStreamer guide. It appears to me that bitstream reception over WiFi, followed by decode, has probably not been mentioned or encountered in the TX1 community yet. I would like to understand the scope and extent of the effort involved in pulling it off. The following is my understanding and open questions; please help.

  1. What would be the best way to start receiving the WiFi bitstream and parking it in a DDR buffer? Which APIs should I begin with? Does anything exist already? Has someone used the WiFi channel before?

  2. Alternatively, would it be less effort-intensive to use USB camera capture? Would I use nvgstcapture-1.0 for that? If so, then the next question would be about getting access to the pixel data in DDR from the CPU. Are there sample ‘preview’ applications that I can start from, to understand how images are captured and accessed?

Thanks,

dumbogeorge,
A few comments for your reference while we are still gathering more info:
There are 2 parts to your system design: one is how to get the camera data into the system (assumed to be a Jetson), and the other is how your apps process the 4 streams of incoming data on the Jetson.
First part:

  • There are WiFi cameras; a Google search will list some info in this regard.
  • There are USB cameras, as you mentioned.
  • The question is the transfer rate you need to consider for these 2 options. This relates to your camera capture resolution and the processing-speed requirement given the transfer speed, etc.

Second part, data processing on the Jetson:

  • We provide quite a lot of camera-related sample code for reference, to give developers a head start.
  • You can use nvgstcapture-1.0 as Dane indicated. You can download the source code from our embedded portal and check how it uses the v4l2src and nvcamerasrc plug-ins. The former is the standard Linux V4L2 plug-in and the latter takes advantage of Jetson media acceleration. The documentation Dane listed should help you understand the details of how the data is accessed.
  • Assuming you have a Jetson TX1 board, you can use the default OV5693 camera as an example to play with the nvgstcapture-1.0 sample apps first (a minimal sketch follows this list).
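
As a minimal sketch of what accessing the pixel data from the CPU can look like (my own example, not one of the shipped samples), the pipeline below uses nvcamerasrc for the onboard OV5693 and hands BGR frames to OpenCV via appsink. The 1080p30 caps and the assumption that OpenCV was built with GStreamer support should be adjusted and verified for your setup.

```cpp
// Sketch: capture from the onboard OV5693 via nvcamerasrc and access the
// pixels as a CPU-side cv::Mat. Resolution/framerate caps are assumptions;
// OpenCV must be built with GStreamer support.
#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    std::string pipeline =
        "nvcamerasrc ! "
        "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=2";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        // frame now lives in ordinary system memory (DDR) and can be fed to
        // any OpenCV routine. For a USB camera, one would swap nvcamerasrc
        // for v4l2src and drop the NVMM caps (an assumption to verify).
    }
    return 0;
}
```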

Chijen,
Thanks for the help.

“We provide quite a lot of camera-related sample code for reference, to give developers a head start”

  1. Is it possible for me to look at the code beforehand, i.e. before I buy the board? Is it open to the developer community?

  2. I tried to Google the FoV / resolution / fps for the OV5693; however, I keep getting errors on the OV website. From the info I could gather from NVIDIA and other forums, I get the sense that the OV5693 cannot do 1080p @ 60 fps, which is our minimum requirement. Could you please give the OV5693’s resolution/fps/FoV info?

Thanks,

Hi
You can download all the source from the download link, like the kernel source code …
Below is the OV5693’s support list.

[Image attachment: OV5693 supported resolution/frame-rate list]

dumbogeorge,
You can get nvgstapps source from this link,
https://developer.nvidia.com/embedded/linux-tegra-r2421

For the MM API and Argus samples, you will need to have a Jetson TX1 board to install JetPack 2.3.1.
https://developer.nvidia.com/embedded/jetpack-2_3_1

Hi Nvidia Folks,

Thanks for all the help. I have purchased a TX1 board with two cameras (IMX274-M12 from Leopard Imaging). I am looking to start with a basic test: open up the board, boot it, and have its cameras displayed on an HDMI screen. I would appreciate any pointers in that direction.

Thanks,
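
As a starting point for such a bring-up test (my own sketch, not an NVIDIA sample), the snippet below builds a minimal preview pipeline with the GStreamer C API and renders it straight to the HDMI display. It assumes the Leopard Imaging IMX274 driver exposes the sensor through nvcamerasrc and that nvoverlaysink is available on your L4T release; both are assumptions to verify against the Leopard Imaging documentation.

```cpp
// Sketch: minimal camera preview to the HDMI display via the GStreamer C API.
// Whether the Leopard Imaging IMX274 driver works through nvcamerasrc is an
// assumption; check the vendor's driver package and release notes.
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "nvcamerasrc ! "
        "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! "
        "nvoverlaysink",   // renders directly to the HDMI display
        &err);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", err->message);
        g_clear_error(&err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until an error or end-of-stream message arrives on the bus.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg)
        gst_message_unref(msg);

    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```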