detectnet-"video file", rather than a .jpg or camera how do I test a file?

Hi,

this is my first post to the forum. I just started using the Jetson and I'm fumbling my way through it. It comes with some great examples. One of the things I've been playing with is the detectnet-camera sample. Instead of using the on-board camera, I would like to select a video file and run it through the sample.

I went back and looked at the source file and have some questions.

```cpp
/*
 * create the camera device
 */
gstCamera* camera = gstCamera::Create(DEFAULT_CAMERA);
```

So this is obviously where we enable the camera on the board. Would it be as simple as redirecting this towards the video file?

Not quite. You would need to modify the gstCamera source to replace the GStreamer pipeline that uses nvcamerasrc with a pipeline that decodes a video file from disk. See the L4T Accelerated GStreamer User Guide for example pipelines that work with video files.
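If you want to sanity-check the decode path first, here is a minimal standalone sketch (not part of jetson-inference) that just plays a file through that kind of pipeline. The file path /home/nvidia/test.mp4, the qtdemux stage for an .mp4 container, and the nvoverlaysink display sink are assumptions to adjust for your own video.

```cpp
// Minimal sketch: run a filesrc/h264parse/omxh264dec pipeline to a display
// sink to confirm hardware decode works before touching gstCamera.
// (File path, qtdemux stage, and nvoverlaysink are assumptions.)
#include <gst/gst.h>

int main(int argc, char** argv)
{
    gst_init(&argc, &argv);

    GError* err = NULL;
    GstElement* pipeline = gst_parse_launch(
        "filesrc location=/home/nvidia/test.mp4 ! qtdemux ! h264parse ! "
        "omxh264dec ! nvoverlaysink",
        &err);

    if (!pipeline)
    {
        g_printerr("failed to build pipeline: %s\n", err->message);
        g_error_free(err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // block until the stream ends or an error is posted on the bus
    GstBus* bus = gst_element_get_bus(pipeline);
    GstMessage* msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    if (msg != NULL)
        gst_message_unref(msg);

    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```

Build it with g++ and pkg-config --cflags --libs gstreamer-1.0 and run it on the board; once it plays, the same filesrc/h264parse/omxh264dec chain is what you would move into gstCamera.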

Here is where the GStreamer pipeline is built and configured in the code; it would need to be changed over from the nvcamerasrc element to, for example, the filesrc/h264parse/omxh264dec elements. A rough sketch of that change is below.
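As an illustration only (this is not the project's actual code), the function that builds the launch string in gstCamera might end up looking something like this. The path argument, the nvvidconv stage, the NV12 caps, and the appsink name are assumptions carried over from the camera pipeline and should be checked against the L4T guide.

```cpp
// Hypothetical replacement for the nvcamerasrc launch string in gstCamera
// (a sketch under the assumptions above, not the upstream implementation).
#include <sstream>
#include <string>

std::string buildFilePipeline( const std::string& path, int width, int height )
{
    std::ostringstream ss;

    // decode an H.264 video file instead of capturing from nvcamerasrc
    // (for an .mp4 container, add a qtdemux stage between filesrc and h264parse)
    ss << "filesrc location=" << path << " ! h264parse ! omxh264dec ! ";

    // convert/scale to the raw caps the rest of gstCamera expects
    ss << "nvvidconv ! video/x-raw, width=(int)" << width
       << ", height=(int)" << height << ", format=(string)NV12 ! ";

    // hand decoded frames to the existing appsink callback
    ss << "appsink name=mysink";

    return ss.str();
}
```

Once that string parses and the appsink starts receiving buffers, the rest of detectnet-camera should work largely unchanged, since it only sees frames coming out of gstCamera.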

I have posted a solution in this thread: Play video with Jetson-inference · Issue #104 · dusty-nv/jetson-inference · GitHub