DetectNet then save processed video to disk

Hi all

What is the best approach to process a real-time video stream (from a USB camera) with detectnet and then save the output video (including bounding boxes, etc.) to disk?

Are there any examples?

Hi @scott104, I recommend taking a look at this page:

This lists all the I/O options available for video - to save the output to disk, you can do it like so:

detectnet /dev/video0 my_detections.mp4

It will still be rendered to your display if you have one attached, in addition to being recorded to the video file. All of the commands on that page are interchangeable between imagenet/detectnet/segnet/etc.
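For reference, the same capture-detect-save flow can be scripted with the Python bindings. Below is a minimal sketch along the lines of the detectnet.py sample; the network name, device path, and output filename are just example values:

```python
def run_detectnet(input_uri="/dev/video0", output_uri="my_detections.mp4"):
    """Capture -> detect -> render loop, saving the overlaid video to disk."""
    # Jetson-only imports are kept inside the function so this sketch
    # can still be loaded on a machine without jetson-inference installed.
    from jetson_inference import detectNet
    from jetson_utils import videoSource, videoOutput

    net = detectNet("ssd-mobilenet-v2", threshold=0.5)
    source = videoSource(input_uri)    # e.g. a V4L2 USB camera
    output = videoOutput(output_uri)   # e.g. an .mp4 file on disk

    while source.IsStreaming() and output.IsStreaming():
        img = source.Capture()
        if img is None:                # capture timeout, try again
            continue
        net.Detect(img)                # draws boxes/labels into img
        output.Render(img)             # encodes the overlaid frame
```

As on the command line, the output file is finalized when the loop ends and the interfaces are closed.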


Absolutely perfect thanks

I just realised you are dusty from the videos

Excellent resources by the by, I had been watching earlier this week and it got me started

I will continue to watch

I couldn’t get the docker working for some reason so I just compiled on the Orin nano

Is it easy to process multiple video streams? I guess I just fire up a few detectnet instances with different video sources?

That’s great @scott104, glad that you found the videos useful 😊

To process multiple video streams, yes, you could just run multiple instances of detectnet, or you could make your own script that has multiple videoSource/videoOutput interfaces and uses one detectNet model (or multiple models if you need them). Technically the latter will be more efficient for deploying an application.
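A minimal sketch of the second approach (one shared detectNet model serving several source/output pairs; the URIs and network name are just examples):

```python
def run_multi_stream(input_uris, output_uris, network="ssd-mobilenet-v2"):
    """Run one shared detectNet model over several camera/output pairs."""
    # Jetson-only imports kept inside the function (sketch only).
    from jetson_inference import detectNet
    from jetson_utils import videoSource, videoOutput

    net = detectNet(network)  # loaded once, reused for every stream
    sources = [videoSource(uri) for uri in input_uris]
    outputs = [videoOutput(uri) for uri in output_uris]

    while all(s.IsStreaming() for s in sources):
        for source, output in zip(sources, outputs):
            img = source.Capture()
            if img is None:       # capture timeout on this stream
                continue
            net.Detect(img)       # overlays boxes on this stream's frame
            output.Render(img)    # each stream renders to its own output
```

Usage would look something like `run_multi_stream(["/dev/video0", "/dev/video1"], ["cam0.mp4", "cam1.mp4"])`. Loading the model once avoids duplicating it in GPU memory for each stream.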

Hi Dusty

A quick question - it seems that if I pass in arguments like --input-width=xx and --input-height=xx, then the stream shows OK but it doesn’t save the file to disk anymore…

I am not sure why; maybe I am missing something?

It seems if I pass any of the input arguments I’ve tried with “--model=dashcamnet…”, then the video isn’t actually saved to disk anymore…

Hi @scott104, can you provide me the command-line you are running and the console log from when you run it?

Are you running it like --model=dashcamnet --input-width=xx --input-height=yy /dev/video0 my_file.mp4?


Hi Dusty
I don’t know what on earth I was doing wrong, but it’s now working after copying your format… maybe I missed something.


@scott104 due to the ad-hoc nature of the jetson-inference command-line parsing, the optional arguments (like --model=xyz, --input-width=N, etc.) should use the = and not have the value separated by spaces (otherwise they can be interpreted as positional arguments). And with the Python versions, the positional arguments should come after the other optional ones.
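To make the ordering concrete, here is a sketch of the two forms (the width/height values are just placeholders):

```shell
# OK: optional flags use '=' and come before the positional I/O arguments
detectnet.py --model=dashcamnet --input-width=960 --input-height=544 /dev/video0 my_file.mp4

# Risky: a space-separated value can be parsed as a positional argument
# detectnet.py --model dashcamnet /dev/video0 my_file.mp4
```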


OK got it, thanks…

One last question: I cannot find any up-to-date references for streaming two or more cameras simultaneously and displaying them in a grid fashion, side by side…

I tried this but got multiple errors:

Implementing Real-Time, Multi-Camera Pipelines with NVIDIA Jetson | NVIDIA Technical Blog

Another additional question :)

I edited detectnet.cpp in jetson-inference/examples/detectnet and then rebuilt the project in the jetson-inference/build folder using the cmake ../ and make commands…

but when I re-run detectnet, it doesn’t seem to change?

For now I’m just editing the “help” text to see if my changes are visible

I managed to edit it successfully, it will now open and run dashcamnet on 2 cameras…

However, I am not able to save both streams to disk - is there a special approach to saving both streams to disk?

@scott104 do you want to save both cameras to the same video file, or independent video files? And do you want to save the incoming unprocessed video, or the post-processed video with the detection bounding boxes?

If you want to save both cameras side-by-side to one video file, then on your composited image you can just use one videoOutput (basically as it is today, where you specify my_output_video.mp4 on the command-line or hardcode it)

If you want to save independent video files, then you will need a videoOutput interface for each of your videoSource interfaces, each having been opened with different output filenames. This week I added the ability to pass in a videoOptions dict to each interface which makes it easier to programmatically instantiate multiple interfaces without dealing with the command-line:
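For the independent-files case, a sketch of creating several videoOutput interfaces programmatically. Note the options dict follows the videoOptions feature mentioned above, and the exact keys accepted may differ by jetson-utils version:

```python
def make_outputs(filenames, codec="h264"):
    """Create one videoOutput per output file, configured in code
    rather than via the command line."""
    # Jetson-only import kept inside the function (sketch only).
    # The {"codec": ...} key is an assumption about the options dict.
    from jetson_utils import videoOutput
    return [videoOutput(name, options={"codec": codec}) for name in filenames]
```

Each returned interface would then be paired with its own videoSource in the capture loop, e.g. `make_outputs(["cam0.mp4", "cam1.mp4"])`.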



Well, I have currently tried to save to a single output file like you suggest, but this just saved the video from camera one, not camera two

I was creating two input files and two output files…

Are you saying if I only have 1 output file in my detectnet.cpp it will put the videos side by side ?

As I was stuck… I then added a second output file name to detectnet when I call it on the command line (like… myvideo1.mp4 myvideo2.mp4)

This seems to save two files of the same size, but only the first one (from camera one) will actually open… the other is a .mp4 file but it has no properties and won’t open

You would need to composite the videos side-by-side first (before passing them to videoOutput.Render()). You can use the cudaOverlay() function for that:

For an example of that, see the segnet or depthnet samples (those composite two images side-by-side before rendering)

detectnet as-is isn’t set up to output multiple independent video files simultaneously, but you could try adding it by creating multiple videoOutput interfaces in the code. I’m actually not sure how it’s creating that second myvideo2.mp4…
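A sketch of the cudaOverlay() compositing approach described above, with the layout math pulled into a plain helper (the widths add and the canvas takes the taller of the two heights):

```python
def side_by_side_layout(w1, h1, w2, h2):
    """Canvas size and per-image offsets for a horizontal composite."""
    return {"width": w1 + w2, "height": max(h1, h2),
            "offsets": [(0, 0), (w1, 0)]}   # image B starts where A ends

def composite_pair(img_a, img_b):
    """Overlay two frames side-by-side onto one canvas (Jetson only)."""
    # Jetson-only imports kept inside the function (sketch only).
    from jetson_utils import cudaAllocMapped, cudaOverlay
    layout = side_by_side_layout(img_a.width, img_a.height,
                                 img_b.width, img_b.height)
    canvas = cudaAllocMapped(width=layout["width"], height=layout["height"],
                             format=img_a.format)
    for img, (x, y) in zip((img_a, img_b), layout["offsets"]):
        cudaOverlay(img, canvas, x, y)      # copy img into the canvas at (x, y)
    return canvas
```

The returned canvas is then what gets passed to the single videoOutput.Render() call, so one .mp4 contains both streams side by side.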


Oh right! Thanks

I didn’t know about compositing, I will check that out…

If I wanted to save two separate files to disk, is that not currently possible then?

It’s kind of half working with detectnet, but the second file doesn’t seem to be properly finalised, as such…

My eventual goal is to overlay a HUD with values that update frame by frame…

Can I use CUDA for that? Or can I not update the overlays on the fly during a video stream?

It should be possible, you would just need to create two independent videoOutput instances. The example detectnet code is purposely kept simple and is for one stream, so you would need to modify it.

Sure yes, I have some basic shape drawing functions in CUDA here:
Also you can see how to draw text using cudaFont in the imagenet/ examples
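Borrowing from the imagenet examples mentioned above, a sketch of a per-frame HUD text overlay with cudaFont (the text position and colors are just example choices):

```python
def draw_hud(img, speed_kmh):
    """Overlay a speed readout that can change every frame (sketch)."""
    # Jetson-only import kept inside the function; in a real loop the
    # font object should be created once and reused across frames.
    from jetson_utils import cudaFont
    font = cudaFont()
    text = "{:.0f} km/h".format(speed_kmh)
    # Draws the string into the frame at (5, 5) with a gray background
    font.OverlayText(img, text=text, x=5, y=5,
                     color=font.White, background=font.Gray40)
```

Calling this just before videoOutput.Render() each iteration is what makes the overlay update frame by frame.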

Thanks I will have another go tomorrow and explore CUDA …

For example if I want to update “speed” if it was a vehicle and have it as an overlay…

I could use cudaOverlay to overlay a static background for the speedo as a PNG, then update the speed text by calling cudaFont’s OverlayText() and somehow pulling in the real-time speed info from my sensor?

Does that sound logical?

Yep, I think you have the right idea there 👍 good luck!

Me again, sorry…

What’s the approach to modifying these lines to accept the video stream?

// load the input images
if( !loadImage("my_image_a.jpg", &imgInputA, &dimsA.x, &dimsA.y) )
	return false;

if( !loadImage("my_image_b.jpg", &imgInputB, &dimsB.x, &dimsB.y) )
	return false;
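In the Python API, the equivalent of those static loadImage() calls is to open one videoSource per stream and Capture() a frame from each inside the loop; a sketch (the device paths are just examples, and the C++ videoSource::Create()/Capture() methods follow the same pattern):

```python
def capture_pair(uri_a="/dev/video0", uri_b="/dev/video1"):
    """Replace the static loadImage() calls with live camera captures."""
    # Jetson-only import kept inside the function (sketch only).
    from jetson_utils import videoSource

    source_a = videoSource(uri_a)
    source_b = videoSource(uri_b)

    img_a = source_a.Capture()   # one frame per loop iteration
    img_b = source_b.Capture()
    return img_a, img_b          # dimensions are on img.width / img.height
```

The captured images can then go straight into the compositing step, since their width/height attributes replace the dims values loadImage() used to fill in.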