DetectNet: saving processed video to disk

Hi all

What is the best approach to take real-time video from a USB camera, process it with detectnet, and then store the output video (including bounding boxes etc.) to disk?

Are there any examples?

Hi @scott104, I recommend taking a look at this page: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md

This lists all the I/O options available for video. To save the output to disk, you can do it like so:

detectnet /dev/video0 my_detections.mp4

It will still be rendered to your display if you have one attached, in addition to being recorded to the video file. All of the commands on that page are interchangeable between imagenet/detectnet/segnet/etc.
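If you later want the same behavior from your own Python script rather than the prebuilt tool, a minimal sketch of the capture/detect/save loop might look like this (the model name, device path, and output filename are just illustrative placeholders):

from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

net = detectNet("ssd-mobilenet-v2", threshold=0.5)  # illustrative model choice
camera = videoSource("/dev/video0")                 # USB camera
output = videoOutput("my_detections.mp4")           # recorded to disk

while True:
    img = camera.Capture()
    if img is None:          # capture timeout, try again
        continue
    net.Detect(img)          # draws bounding boxes/labels into the frame
    output.Render(img)       # encodes the overlaid frame to the file
    if not camera.IsStreaming() or not output.IsStreaming():
        break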


Absolutely perfect, thanks!

I just realised you are Dusty from the videos!

Excellent resources, by the by; I had been watching them earlier this week and they got me started.

I will continue to watch

I couldn't get Docker working for some reason, so I just compiled on the Orin Nano.

Is it easy to process multiple video streams? I guess I just fire up a few detectnets with different video sources?

That's great @scott104! Glad you found the videos useful 😊

To process multiple video streams, yes, you could just run multiple instances of detectnet/detectnet.py, or you could make your own script that has multiple videoSource/videoOutput interfaces and uses one detectNet model (or multiple models if you need them). Technically the latter will be more efficient for deploying an application.
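As a rough sketch of that second approach (device paths and output filenames here are placeholders), sharing one model across two streams could look like:

from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

net = detectNet("ssd-mobilenet-v2", threshold=0.5)  # one model shared by all streams

# one input/output pair per camera
streams = [
    (videoSource("/dev/video0"), videoOutput("camera0.mp4")),
    (videoSource("/dev/video1"), videoOutput("camera1.mp4")),
]

while True:
    for camera, output in streams:
        img = camera.Capture()
        if img is None:      # capture timeout on this stream
            continue
        net.Detect(img)      # the same network instance is re-used per stream
        output.Render(img)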

Hi Dusty

A quick question: it seems that if I pass in arguments like --input-width=xx and --input-height=xx, then the stream shows OK, but it doesn't save the file to disk anymore…

I am not sure why; maybe I am missing something…?

It seems that if I pass any of the input arguments I've tried to "detectnet.py --model-dashcamnet…", then the video isn't actually saved to disk anymore…

Hi @scott104, can you provide the command line you are running and the console log from when you run it?

Are you running it like detectnet.py --model=dashcamnet --input-width=xx --input-height=yy /dev/video0 my_file.mp4?


Hi Dusty,
I don't know what on earth I was doing wrong, but it's now working after copying your format… maybe I missed something.

Thanks!

@scott104 due to the ad-hoc nature of the jetson-inference command-line parsing, the optional arguments (like --model=xyz, --input-width=N, etc.) should use the = and not have the value separated by spaces (otherwise they can be interpreted as positional arguments). And with the Python versions, the positional arguments should come after the optional ones.


OK, got it, thanks…

One last question: I cannot find any up-to-date references for streaming two or more cameras simultaneously and displaying them side by side in a grid…

I tried this but got multiple errors:

Implementing Real-Time, Multi-Camera Pipelines with NVIDIA Jetson | NVIDIA Technical Blog

Another additional question :)

I edited detectnet.cpp in /jetson-inference/examples/detectnet and then rebuilt the project in the jetson-inference/build folder using cmake … and make commands…

but when I re-run detectnet, the changes don't seem to take effect?

For now I'm just editing the "help" text to see if my changes are visible.

I managed to edit it successfully; it will now open and run dashcamnet on two cameras…

However, I am not able to save both streams to disk. Is there a special approach to saving both streams to disk?

@scott104 do you want to save both cameras to the same video file, or independent video files? And do you want to save the incoming unprocessed video, or the post-processed video with the detection bounding boxes?

If you want to save both cameras side-by-side to one video file, you can just use one videoOutput on your composited image (basically as it is today in detectnet.py, where you specify my_output_video.mp4 on the command line or hardcode it).

If you want to save independent video files, then you will need a videoOutput interface for each of your videoSource interfaces, each opened with a different output filename. This week I added the ability to pass in a videoOptions dict to each interface, which makes it easier to programmatically instantiate multiple interfaces without dealing with the command line: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#python
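For reference, a sketch of that options-dict style (treat the keys shown here as examples; they map onto videoOptions fields as described on that page):

from jetson_utils import videoSource, videoOutput

# per-stream settings passed programmatically instead of via the command line
cam0 = videoSource("/dev/video0", options={'width': 1280, 'height': 720, 'framerate': 30})
cam1 = videoSource("/dev/video1", options={'width': 1280, 'height': 720, 'framerate': 30})

out0 = videoOutput("camera0.mp4", options={'codec': 'h264'})
out1 = videoOutput("camera1.mp4", options={'codec': 'h264'})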


Hi!

Well, I have currently tried to save to a single output file like you suggest, but this just saved the video from camera one, not camera two.

I was creating two input files and two output files…

Are you saying that if I only have one output file in my detectnet.cpp, it will put the videos side by side?

As I was stuck… I then added a second "output" filename to detectnet when I call it on the command line (like… myvideo1.mp4 myvideo2.mp4).

This seems to save two files of the same size, but only the first one (from camera one) will actually open… the other is an .mp4 file, but it has no properties and won't open.

You would need to composite the videos side-by-side first (before passing them to videoOutput.Render()). You can use the cudaOverlay() function for that: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-image.md#overlay

For an example of that, see the segnet or depthnet samples (those composite two images side-by-side before rendering).
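A rough Python sketch of that compositing pattern, assuming both cameras produce frames of the same format (device paths and the output filename are placeholders):

from jetson_utils import videoSource, videoOutput, cudaAllocMapped, cudaOverlay

camA = videoSource("/dev/video0")
camB = videoSource("/dev/video1")
output = videoOutput("side_by_side.mp4")
composite = None

while True:
    imgA = camA.Capture()
    imgB = camB.Capture()
    if imgA is None or imgB is None:
        continue
    if composite is None:   # allocate once, wide enough for both frames
        composite = cudaAllocMapped(width=imgA.width + imgB.width,
                                    height=max(imgA.height, imgB.height),
                                    format=imgA.format)
    cudaOverlay(imgA, composite, 0, 0)           # left half
    cudaOverlay(imgB, composite, imgA.width, 0)  # right half
    output.Render(composite)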

detectnet as-is isn't set up to output multiple independent video files simultaneously, but you could try adding that by creating multiple videoOutput interfaces in the code. I'm actually not sure how it's creating that second myvideo2.mp4…


Oh right! Thanks

I didn't know about compositing; I will check that out…

If I wanted to save two separate files to disk, is that not currently possible then?

It's kind of half working with detectnet, but the second file doesn't seem to be properly finalised, as such…

My eventual goal is to overlay a HUD with values that update frame by frame…

Can I use CUDA for that? Or can I not update the overlays on the fly during a video stream?

It should be possible; you would just need to create two independent videoOutput instances. The example detectnet code is purposely kept simple and handles one stream, so you would need to modify it.

Sure, yes; I have some basic shape-drawing functions in CUDA here: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-image.md#drawing-shapes
Also, you can see how to draw text using cudaFont in the imagenet/imagenet.py examples.
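For instance, a small sketch of the shape-drawing calls from that page (the coordinates and RGBA colors are arbitrary):

from jetson_utils import cudaDrawCircle, cudaDrawLine, cudaDrawRect

def draw_shapes(img):
    cudaDrawCircle(img, (50, 50), 25, (0, 255, 127, 200))            # (cx,cy), radius, color
    cudaDrawLine(img, (25, 150), (325, 15), (255, 0, 200, 200), 10)  # endpoints, color, width
    cudaDrawRect(img, (200, 25, 350, 250), (255, 127, 0, 200))       # (left,top,right,bottom)

You would call this on each frame after Detect() and before Render(), so the shapes are baked into the saved video.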

Thanks, I will have another go tomorrow and explore CUDA…

For example, if I want to update "speed" if it was a vehicle, and have it as an overlay…

I could use cudaOverlay() to overlay a static background for the speedo as a PNG, then update the speed text by calling cudaFont's OverlayText() and somehow pulling in the real-time speed info from my sensor?

Does that sound logical?

Yep, I think you have the right idea there 👍 Good luck!
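A sketch of that speedo idea, under the same assumptions (the PNG filename is hypothetical, and the speed value comes from wherever your sensor code lives):

from jetson_utils import loadImage, cudaOverlay, cudaFont

speedo = loadImage("speedo_background.png")   # hypothetical static HUD panel
font = cudaFont()

def overlay_speedo(img, speed):
    cudaOverlay(speedo, img, 10, 10)          # paste the static panel each frame
    font.OverlayText(img, img.width, img.height,
                     f"{speed:.0f} km/h",     # live value from your sensor
                     40, 50, font.White, font.Gray40)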

Me again, sorry…

What's the approach to modifying these lines to accept the video stream?

// load the input images
if( !loadImage("my_image_a.jpg", &imgInputA, &dimsA.x, &dimsA.y) )
	return false;

if( !loadImage("my_image_b.jpg", &imgInputB, &dimsB.x, &dimsB.y) )
	return false;
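The general pattern would be to swap those one-time loadImage() calls for per-frame captures from two videoSource interfaces inside the processing loop. A rough Python sketch of that swap (device paths are placeholders):

from jetson_utils import videoSource

camA = videoSource("/dev/video0")   # replaces loadImage("my_image_a.jpg", ...)
camB = videoSource("/dev/video1")   # replaces loadImage("my_image_b.jpg", ...)

while True:
    imgA = camA.Capture()           # fresh frames every iteration
    imgB = camB.Capture()
    if imgA is None or imgB is None:
        continue
    # ...composite with cudaOverlay() and render, as in the side-by-side sketch above...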