I’m using C++, the NVIDIA VisionWorks feature tracking sample, and OpenCV 4.1.1.
The sample code in main_feature_tracker.cpp takes a string source location as input and creates a frame source from it:
…
std::string sourceUri = app.findSampleFilePath(argv[1]);
…
std::unique_ptr<ovxio::FrameSource> source(
    ovxio::createDefaultFrameSource(context, sourceUri));
…
I would like to modify this or the surrounding code so that I can run a loop and feed it frames of the video as cv::Mat images, instead of the current string file location.
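Conceptually, this is the shape of what I’m after (just a rough sketch, not working code: the helper name wrapMatAsVxImage is mine, and I’m assuming the plain OpenVX vxCreateImageFromHandle route is an acceptable way to hand a cv::Mat buffer to VisionWorks):

#include <VX/vx.h>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Hypothetical helper: wrap an OpenCV frame as a vx_image without copying.
// The scratch Mat must stay alive for as long as the returned image is used.
vx_image wrapMatAsVxImage(vx_context context, const cv::Mat& bgrFrame, cv::Mat& rgbxScratch)
{
    // OpenCV gives BGR; convert to RGBA byte order so it matches VX_DF_IMAGE_RGBX,
    // which is what the sample's frame source normally delivers.
    cv::cvtColor(bgrFrame, rgbxScratch, cv::COLOR_BGR2RGBA);

    vx_imagepatch_addressing_t addr = {};
    addr.dim_x    = static_cast<vx_uint32>(rgbxScratch.cols);
    addr.dim_y    = static_cast<vx_uint32>(rgbxScratch.rows);
    addr.stride_x = 4;                                        // bytes per RGBX pixel
    addr.stride_y = static_cast<vx_int32>(rgbxScratch.step);  // row pitch in bytes

    void* ptrs[] = { rgbxScratch.data };

    // OpenVX 1.1 signature; older OpenVX releases use VX_IMPORT_TYPE_HOST instead.
    return vxCreateImageFromHandle(context, VX_DF_IMAGE_RGBX, &addr, ptrs, VX_MEMORY_TYPE_HOST);
}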
Hi,
Image input is supported by VisionWorks by default.
./nvx_demo_feature_tracker [options]
Command Line Options
This topic provides a list of supported options and the values they consume.
-s, --source
- Parameter: [inputUri]
- Description: Specifies the input URI. Accepted parameters include a video, an image, an image sequence (in .png, .jpg, .jpeg, .bmp, or .tiff format), or camera.
- Usage:
  --source=/path/to/video.avi for video
  --source=/path/to/image.png for image
  --source=/path/to/image_%04d_sequence.png for image sequence
  --source="device:///v4l2?index=0" for the first V4L2 camera
  --source="device:///v4l2?index=1" for the second V4L2 camera
  --source="device:///nvcamera?index=0" for the GStreamer NVIDIA camera (Jetson TX1 only)
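For example, any of the URIs above can also be passed directly to ovxio::createDefaultFrameSource inside the sample, in place of the command-line argument. A minimal sketch, assuming the context and includes already set up by main_feature_tracker.cpp (the image-sequence path is just a placeholder):

std::string sourceUri = "/path/to/image_%04d_sequence.png"; // any URI from the list above

std::unique_ptr<ovxio::FrameSource> source(
    ovxio::createDefaultFrameSource(context, sourceUri));

if (!source || !source->open())
{
    std::cerr << "Error: can't open source URI " << sourceUri << std::endl;
    return -1; // or the sample's own error exit path
}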
Please download our document here for more information.
Thanks.
Thank you for replying. The issue is that the frames I wish to use aren’t actually present as image files on the hard drive, so I can’t give a folder location that contains them.
What I want to be able to do is perform some operations on the frames obtained from a video and then pass those frames one by one to the feature tracker.
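To make it concrete, the loop I have in mind would look roughly like this. This is only a sketch under my own assumptions: the vx_context and the tracker object created in main_feature_tracker.cpp are available here, the tracker exposes init()/track() on vx_image frames the way feature_tracker.hpp in my copy of the sample does (adjust if the signatures differ), and each frame is converted to RGBA byte order to match VX_DF_IMAGE_RGBX:

#include <string>
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
#include <VX/vx.h>
#include "feature_tracker.hpp" // header shipped with the VisionWorks sample

void runTrackerOnProcessedFrames(vx_context context, nvx::FeatureTracker* tracker,
                                 const std::string& videoPath)
{
    cv::VideoCapture cap(videoPath);
    if (!cap.isOpened())
        return;

    cv::Mat bgr, rgbx;
    vx_image frameVx = nullptr;
    bool firstFrame = true;

    while (cap.read(bgr))
    {
        // --- custom OpenCV processing on the frame goes here ---
        // e.g. cv::GaussianBlur(bgr, bgr, cv::Size(3, 3), 0);

        cv::cvtColor(bgr, rgbx, cv::COLOR_BGR2RGBA); // match VX_DF_IMAGE_RGBX

        if (frameVx == nullptr)
            frameVx = vxCreateImage(context,
                                    static_cast<vx_uint32>(rgbx.cols),
                                    static_cast<vx_uint32>(rgbx.rows),
                                    VX_DF_IMAGE_RGBX);

        // Copy the processed frame into the reusable vx_image.
        vx_imagepatch_addressing_t addr = {};
        addr.dim_x    = static_cast<vx_uint32>(rgbx.cols);
        addr.dim_y    = static_cast<vx_uint32>(rgbx.rows);
        addr.stride_x = 4;
        addr.stride_y = static_cast<vx_int32>(rgbx.step);

        vx_rectangle_t rect = { 0u, 0u,
                                static_cast<vx_uint32>(rgbx.cols),
                                static_cast<vx_uint32>(rgbx.rows) };

        vxCopyImagePatch(frameVx, &rect, 0, &addr, rgbx.data,
                         VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);

        // Hand the frame to the tracker where the sample would normally hand
        // the frame fetched from ovxio::FrameSource.
        if (firstFrame)
        {
            tracker->init(frameVx, nullptr);
            firstFrame = false;
        }
        else
        {
            tracker->track(frameVx, nullptr);
        }
    }

    if (frameVx)
        vxReleaseImage(&frameVx);
}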