I have an existing program that produces frames as OpenCV Mat images. I want to use these frames for feature tracking instead of giving a video file location to the feature tracking algorithm. Is there any way I can pass the Mat images as input so that the algorithm uses them to detect and track features?
You may have a look at these two examples (one in C++, one in Python):
@Honey_Patouceul Thank you for the reply. But my issue is not with getting OpenCV to work; it's the specific problem I mentioned: how exactly to modify the feature tracking sample (what code to insert or change) so that it takes Mat frames as input.
Sorry, but I fail to understand your case. Could you explain further?
The two samples I've posted use OpenCV Mat frames for tracking; they are not about getting OpenCV working.
Glad you're familiar with OpenCV and have it working, but please describe your use case and what you expect. Also tell us whether you're using C++ or Python.
I'm not sure I can help further, but you may get better advice if you explain in more detail.
@Honey_Patouceul I’m using C++. I’ll explain my problem in detail.
The sample code in main_feature_tracker.cpp takes a string source location as input and creates a frame source from it:

std::string sourceUri = app.findSampleFilePath(argv);
I would like to modify this line or the surrounding code so that I can run a loop and feed the algorithm video frames as Mat images, instead of the current string file location.
Now I understand you’re referring to VisionWorks.
I have no experience with VisionWorks. Someone else would better advise.
Since you're using VisionWorks, you should post the question in the VisionWorks forum rather than the Jetson Nano forum.
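For reference, here is one possible approach, sketched under stated assumptions: VisionWorks implements standard OpenVX 1.1, so a cv::Mat can be copied into a vx_image with vxCreateImage and vxCopyImagePatch, and that vx_image can then be handed to the tracker in place of a frame fetched from the FrameSource built from sourceUri. This assumes a grayscale 8-bit Mat and that the sample's tracker object exposes a call accepting a vx_image per frame (check main_feature_tracker.cpp for the exact entry point); it is not the sample's own code.

```cpp
// Sketch only: requires the VisionWorks/OpenVX SDK and OpenCV to build.
// The tracker call site is an assumption; adapt it to the actual sample.
#include <VX/vx.h>
#include <opencv2/core.hpp>

// Copy a single-channel 8-bit cv::Mat (e.g. a grayscale frame) into a
// newly created vx_image using the standard OpenVX 1.1 vxCopyImagePatch.
vx_image matToVXImage(vx_context context, const cv::Mat& frame)
{
    CV_Assert(frame.type() == CV_8UC1);  // grayscale input assumed

    vx_image image = vxCreateImage(context,
                                   (vx_uint32)frame.cols,
                                   (vx_uint32)frame.rows,
                                   VX_DF_IMAGE_U8);

    // Full-image rectangle: start_x, start_y, end_x, end_y.
    vx_rectangle_t rect = { 0u, 0u,
                            (vx_uint32)frame.cols,
                            (vx_uint32)frame.rows };

    // Describe the layout of the cv::Mat buffer we are copying from.
    vx_imagepatch_addressing_t addr;
    addr.dim_x    = (vx_uint32)frame.cols;
    addr.dim_y    = (vx_uint32)frame.rows;
    addr.stride_x = 1;                     // 1 byte per pixel (U8)
    addr.stride_y = (vx_int32)frame.step;  // cv::Mat row stride in bytes

    // Copy the Mat's pixels into the vx_image (host memory -> OpenVX).
    vxCopyImagePatch(image, &rect, 0, &addr, (void*)frame.data,
                     VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
    return image;
}
```

In the sample's main loop you would then convert each of your Mat frames this way and pass the resulting vx_image to the tracker, releasing it with vxReleaseImage afterwards. VisionWorks also ships an OpenCV interop header (NVX/nvx_opencv_interop.hpp) that may offer a more direct mapping, but I haven't verified that path myself.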