FrameSource IP CAM for VisionWorks

Hello everyone,
Is there a way to use VisionWorks algorithms with an IP camera (RTSP protocol) as input?

The documentation does not mention how to interface with one:

From the NVXIO API documentation, FrameSource class, Detailed Description:

This class is intended for reading images from different sources.

The source can be:

  • Single image (PNG, JPEG, JPG, BMP, TIFF).
  • Sequence of images (PNG, JPEG, JPG, BMP, TIFF).
  • Video file.
  • Video4Linux (V4L)-compatible cameras.
  • NVIDIA GStreamer Camera on NVIDIA® Jetson™ Embedded platforms running L4T R24.

Note

  • GStreamer-based pipeline is used for video decoding on Linux platforms. The support level of video formats depends on the set of installed GStreamer plugins.
  • GStreamer-based pipeline with NVIDIA hardware-accelerated codecs is used on NVIDIA Vibrante Linux platform only (V3L, V4L).
  • Pure NVIDIA hardware-accelerated decoding of H.264 elementary video streams is used on NVIDIA Vibrante Linux platforms (V3L, V4L).
  • OpenCV (FFmpeg back-end)-based pipeline is used for video decoding on Windows.
  • On Vibrante Linux, an active X session is required for FrameSource, because it uses EGL as an interop API.
  • Image decoding, image sequence decoding, and Video4Linux-compatible camera support require either OpenCV or GStreamer.

At the moment I’m using OpenCV to grab frames from the IP camera and convert them to vx_image, but I have problems with frame refresh and with the references held inside the vx_delay object (a simplified sketch of what I’m doing is below).
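Roughly, the conversion loop looks like this (a simplified, untested sketch: the RTSP URL and resolution are placeholders, the graph itself is omitted, and error handling is stripped):

#include <opencv2/opencv.hpp>
#include <VX/vx.h>

int main()
{
    // Placeholder RTSP URL; OpenCV pulls the stream through its FFmpeg back-end.
    cv::VideoCapture cap("rtsp://192.168.1.10:554/stream1");
    if (!cap.isOpened()) return 1;

    const vx_uint32 width = 1280, height = 720;        // assumed stream resolution
    vx_context context = vxCreateContext();

    // Exemplar image plus a 2-slot delay, as used e.g. by an optical-flow graph.
    vx_image exemplar = vxCreateImage(context, width, height, VX_DF_IMAGE_RGB);
    vx_delay delay = vxCreateDelay(context, (vx_reference)exemplar, 2);
    vxReleaseImage(&exemplar);

    cv::Mat frame, rgb;
    vx_rectangle_t rect = { 0, 0, width, height };

    while (cap.read(frame))
    {
        cv::resize(frame, frame, cv::Size(width, height));
        cv::cvtColor(frame, rgb, cv::COLOR_BGR2RGB);    // OpenCV delivers BGR

        // Copy into the image already registered at slot 0 of the delay instead of
        // creating a fresh vx_image each iteration; replacing the image breaks the
        // references the delay is holding and leaves the graph reading stale frames.
        vx_image current = (vx_image)vxGetReferenceFromDelay(delay, 0);

        vx_imagepatch_addressing_t addr = {};
        addr.dim_x = width;
        addr.dim_y = height;
        addr.stride_x = 3;                              // 3 bytes per RGB pixel
        addr.stride_y = (vx_int32)rgb.step;
        vxCopyImagePatch(current, &rect, 0, &addr, rgb.data,
                         VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);

        // ... vxProcessGraph(graph) would run here ...

        vxAgeDelay(delay);                              // rotate the delay after processing
    }

    vxReleaseDelay(&delay);
    vxReleaseContext(&context);
    return 0;
}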
Thanks for any help.
Sa

Hi,

Sorry, IP camera support is not included in VisionWorks.

However, since the camera source implementation is open-sourced, you can add the support on your own:

/usr/share/visionworks/sources/nvxio/src/NVX/FrameSource
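For an RTSP source, a GStreamer-based implementation similar to the existing camera sources in that directory is probably the easiest route. As a very rough, untested starting point, the launch string could look something like this (the element names assume the Jetson GStreamer plugins, and the URL, caps, and sizes are placeholders that may need adjusting):

#include <string>

// Sketch of the kind of gst-launch string an RTSP-capable FrameSource could be
// built around: rtspsrc pulls the stream, the depay/parse/decode chain uses the
// hardware H.264 decoder, nvvidconv converts/scales into a CPU-readable format,
// and appsink hands the decoded frames back to the application.
std::string makeRtspPipeline(const std::string& url, int width, int height)
{
    return "rtspsrc location=" + url + " latency=0 ! "
           "rtph264depay ! h264parse ! omxh264dec ! "
           "nvvidconv ! video/x-raw, format=BGRx, width=" + std::to_string(width) +
           ", height=" + std::to_string(height) + " ! "
           "appsink name=sink sync=false";
}

Such a string can then be fed to gst_parse_launch() inside a new FrameSource implementation, following the pattern of the GStreamer-based sources already in that folder.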

Thanks.

OK, thanks for the answer!