Hi,
I am using a Jetson Nano for inference with 4 RTSP CCTV cameras, running DeepStream PeopleNet on it, and I am achieving around 25 fps. In another thread I am running a video-storage-on-motion-detection script. I have no issues with inference, but I do have an issue with video storage: every minute, about 10 seconds of video is skipped and the playback runs very fast. Kindly let me know which approach I can use to store the videos in local storage without the skipping and fast-playback issues.
Hi,
Please follow the README to set up the environment:
/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/README.md
The sample config file is deepstream_app_source1_peoplenet.txt. By default it is single-source. Please try enabling sink1
to save to an mp4 and check whether the file achieves the target frame rate. Then try 2 sources, 3 sources, 4 sources, to find how many sources trigger the frame-rate drop.
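For reference, a rough sketch of what the sink1 group might look like in that config file. The key names follow the deepstream-app config-group conventions; please verify the exact values against the README for your DeepStream version:

```ini
[sink1]
enable=1
# type=3 selects the File sink
type=3
# container: 1=mp4, 2=mkv
container=1
# codec: 1=h264, 2=h265
codec=1
sync=0
bitrate=4000000
output-file=out.mp4
source-id=0
```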
Hi,
Thanks for your response. My need is to not store the entire video stream; I want to store video only when motion is detected. It is for a CCTV camera storage application. Is there any way to do this?
Hi,
For this use-case, you may enable smart recording for a try. Please take a look at
Smart Video Record — DeepStream 6.1.1 Release documentation
Hi,
The use case you gave is in C++. Can I get anything in Python?
Hi,
Currently smart recording is implemented in C. For Python: since the Python bindings are open source, you may check how the function works in C and port it to the Python bindings.
@DaneLLL I have referred to that C++ API and tried implementing it in Python; my script is attached below. I am able to store the video, and after storage the file size shows correctly, but I cannot open the video file.
four-channel-multistream_deepstream_record copy.py (15.1 KB)
Hi,
You may try matroskamux to mux into MKV. Note that an EoS needs to be sent through the pipeline for the stream to be muxed into a valid mp4.
Hi,
I tried the following changes but the same issue remains. Kindly let me know if I did anything wrong.
Try 1:
muxer = Gst.ElementFactory.make("matroskamux", "mux")
sink = Gst.ElementFactory.make("filesink", "filesink")
sink.set_property('location', "output.mp4")
Try 2:
muxer = Gst.ElementFactory.make("matroskamux", "muxer")
sink = Gst.ElementFactory.make("filesink", "filesink")
sink.set_property('location', "output.mp4")
Try 3:
muxer = Gst.ElementFactory.make("qtmux", "mux")
sink = Gst.ElementFactory.make("filesink", "filesink")
sink.set_property('location', "output.mp4")
Hi,
You may check whether the constructed pipeline can be run successfully with the gst-launch-1.0 command. You may refer to:
Nvmultistreamtiler/nvstreamdemux with omxh264enc won't work - #5 by DaneLLL
And try like:
… ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=a.mkv
… ! nvv4l2h264enc ! h264parse ! mpegtsmux ! filesink location=a.ts
Hi,
I tried your suggestion. Now the video is stored and also plays properly. But now the problem is that with 4 camera streams, only a single mp4 file is created, containing all 4 cameras in a collage (tiled) format. I tried to store the video separately for each camera but could not achieve it. Can you suggest a solution for this?
four-channel-multistream_deepstream_record_copy.py (14.6 KB)
Hi,
For this use case, you would need to replace nvmultistreamtiler with the nvstreamdemux plugin. It is demonstrated in deepstream-app and the source code is in
/opt/nvidia/deepstream/deepstream-6.0/sources/apps
If you run deepstream-app, you can modify the config file to run this use case. Here is an example:
How to save output videos deepstream app to individual files? And what is need to change in congif f... - #4 by DaneLLL
There is no existing Python sample, so you would need to refer to the C sample and apply the same approach in Python.
Hi,
Now I am able to save each camera separately. But I want to add motion detection to the storage. I am already running person-detection inference in the existing code, but now I want to store video only when motion is detected. Can you tell me where I need to add the motion detection script, or is there any other option for this?
Hi,
You may try the valve element. After the inference and event-detection modules, add a GStreamer valve element, and after that add encoder + mux + filesink in the pipeline. By default, drop buffers by setting the "drop" property on the valve element. When motion is detected, stop dropping buffers and encode to the file.
Hi,
We are able to integrate the inference component followed by the valve component, followed by storage. But we do not know how to add an event-detection module between the inference and valve components.
For the valve component we need to set drop to true or false dynamically from the inference component. We searched a lot on the internet for this but could not find any proper idea.
Hi,
Please find the attached code. We tried saving the video using a valve and a probe function.
At lines 106 and 107 I put a condition to save the video only during that time, but it is not saving. If I put valve1.set_property('drop', False) outside the function, it works. Please let me know why it is not working inside the function.
Also, if I reverse the condition, i.e. valve1.set_property('drop', False) outside and valve1.set_property('drop', True) inside the function, that case works, but the reverse case does not.
deepstream_record.py (15.2 KB)
Hi,
Does it work if you set drop=False by default, and dynamically set drop=True and then back to False, like:
if frame_number > 300 and frame_number < 450:
    valve1.set_property('drop', True)
if frame_number > 450 and frame_number < 600:
    valve1.set_property('drop', False)
Case 1: Set drop = False by default, and
if frame_number > 300 and frame_number < 400:
    valve1.set_property('drop', True)
If I use this, those particular frames are dropped and it works fine.
Case 2: Set drop = False by default, and
if frame_number > 300 and frame_number < 500:
    valve1.set_property('drop', True)
This is the error I get if I drop more frames. Kindly find the attachment below.