• Hardware Platform (Jetson / GPU)
NVIDIA Jetson Nano (Developer Kit Version)
• DeepStream Version
5.0.0
• JetPack Version (valid for Jetson only)
4.4 [L4T 32.4.3]
• TensorRT Version
7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only)
I don’t know how to find the GPU driver version on a Jetson Nano.
I use the deepstream-app example in /sources/apps/sample_apps/, and I have written some custom code in the all_bbox_generated() function to record the bbox information (x, y, w, h) in a .csv file when a bbox is in a specific area of the image.
I want to modify the file sink logic so that it records only when an object such as a person or vehicle is detected, but I don’t know where the code that controls file sink recording is. Is that possible?
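For reference, here is a minimal sketch of the kind of all_bbox_generated() hook described above (in deepstream_app_main.c); the ROI bounds and the CSV path are placeholders, not the actual values used:

/* Sketch only: log bboxes that fall inside a region of interest to a CSV file.
 * The ROI_* bounds and the output path are placeholder assumptions. */
#include <stdio.h>
#include "deepstream_app.h"

#define ROI_LEFT   100
#define ROI_TOP    100
#define ROI_RIGHT  800
#define ROI_BOTTOM 600

static void
all_bbox_generated (AppCtx * appCtx, GstBuffer * buf,
    NvDsBatchMeta * batch_meta, guint index)
{
  FILE *fp = fopen ("/tmp/bboxes.csv", "a");   /* placeholder path */
  if (!fp)
    return;
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      float x = obj->rect_params.left;
      float y = obj->rect_params.top;
      float w = obj->rect_params.width;
      float h = obj->rect_params.height;
      /* Keep only boxes that fall fully inside the region of interest. */
      if (x >= ROI_LEFT && y >= ROI_TOP &&
          x + w <= ROI_RIGHT && y + h <= ROI_BOTTOM) {
        fprintf (fp, "%u,%d,%s,%.1f,%.1f,%.1f,%.1f\n",
            frame_meta->source_id, frame_meta->frame_num,
            obj->obj_label, x, y, w, h);
      }
    }
  }
  fclose (fp);
}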
bcao
November 15, 2020, 5:06am
3
Do you want to save the video or just the bbox info? You can refer to deepstream_app.c → write_kitti_output.
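For context: write_kitti_output dumps one label file per frame when gie-kitti-output-dir is set in the [application] group, with one line per detected object roughly in the KITTI form "class 0.0 0 0.0 left top right bottom 0.0 0.0 0.0 0.0 0.0 0.0 0.0". The lines below are illustrative values only:

Car 0.0 0 0.0 651.0 186.0 703.0 239.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Person 0.0 0 0.0 120.0 200.0 180.0 350.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0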
Hi bcao,
I just want to save video that contains detected objects, because the file sink records video unconditionally.
bcao
November 24, 2020, 9:49am
5
I tried it with deepstream-test5, but it does not seem to work.
Here is the config file:
# Copyright (c) 2019 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=1
rows=2
columns=2
width=1920
height=1080
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtspt://admin:admin@192.168.168.228:554/media/video1
num-sources=1
rtsp-reconnect-interval-sec=1
latency=0
#select-rtp-protocol=4
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
smart-record=1
smart-rec-duration=10
smart-rec-video-cache=5
smart-rec-start-time=0
smart-rec-default-duration=10
smart-rec-container=0
[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtspt://admin:admin@192.168.168.224:554/media/video1
num-sources=1
rtsp-reconnect-interval-sec=1
latency=0
#select-rtp-protocol=4
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
smart-record=1
smart-rec-duration=10
smart-rec-video-cache=5
smart-rec-start-time=0
smart-rec-default-duration=10
smart-rec-container=0
[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtspt://admin:admin@192.168.168.225:554/media/video1
num-sources=1
rtsp-reconnect-interval-sec=1
latency=0
#select-rtp-protocol=4
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
smart-record=1
smart-rec-duration=10
smart-rec-video-cache=5
smart-rec-start-time=0
smart-rec-default-duration=10
smart-rec-container=0
[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtspt://admin:admin@192.168.168.230:554/media/video1
num-sources=1
rtsp-reconnect-interval-sec=1
latency=0
#select-rtp-protocol=4
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
smart-record=1
smart-rec-duration=10
smart-rec-video-cache=5
smart-rec-start-time=0
smart-rec-default-duration=10
smart-rec-container=0
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
qos=1
nvbuf-memory-type=0
overlay-id=1
[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=4000000
output-file=out.mp4
source-id=0
#rtsp-port=8554
[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
#udp-port=5400
source-id=0
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
labelfile-path=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/labels.txt
model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;0
bbox-border-color1=0;1;0;0
bbox-border-color2=0;1;0;1
bbox-border-color3=1;1;0;0
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_nano.txt
[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0
[tests]
file-loop=0
bcao
December 1, 2020, 7:43am
7
Can it generate a video with your config?
No, it did not generate any video.
bcao
December 8, 2020, 6:48am
9
smart-rec-dir-path=<path of directory to save the file>
Path of the directory in which to save the recorded file. By default, the current directory is used.
Can you specify these two configs and make sure the path is writable?
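For example, appended to one of the existing [sourceN] groups (the directory here is just an illustration):

[source0]
...
smart-record=1
smart-rec-dir-path=/home/user/recordings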
I added smart-rec-dir-path=/home/user/nvr/
to one of the sources, enabled only the fake sink, and ran chmod -R 777 /home/user/nvr/
, but still no video is generated.
bcao
December 8, 2020, 8:30am
11
OK, may I know whether you changed the original deepstream-test5 app source code? Can you try smart-record=2
and see if the video is generated? For the video generation logic, please refer to smart_record_event_generator() in deepstream_source_bin.c.
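As a rough sketch of that idea (not the actual test5 code): with smart-record=2 the test5 app triggers recording on its own via smart_record_event_generator(), while with smart-record=1 it waits for an explicit start event; to record only when a person or vehicle is detected, the same NvDsSRStart() API can be called from application code. The class IDs, the timing values, and the recordCtx lookup below are assumptions:

/* Rough sketch: start smart record for a source when a person or vehicle
 * is seen in that source's frame meta.  The class IDs assume the default
 * resnet10 label order (Car, Bicycle, Person, Roadsign), and the recordCtx
 * lookup assumes it is stored per source bin as in the reference app. */
#include "gst-nvdssr.h"
#include "deepstream_app.h"

#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON  2

static void
maybe_start_smart_record (AppCtx * appCtx, NvDsFrameMeta * frame_meta)
{
  gboolean found = FALSE;
  for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj;
      l_obj = l_obj->next) {
    NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
    if (obj->class_id == PGIE_CLASS_ID_VEHICLE ||
        obj->class_id == PGIE_CLASS_ID_PERSON) {
      found = TRUE;
      break;
    }
  }
  if (!found)
    return;

  /* recordCtx is created per source when smart-record is enabled. */
  NvDsSRContext *ctx = (NvDsSRContext *)
      appCtx->pipeline.multi_src_bin.sub_bins[frame_meta->source_id].recordCtx;
  if (ctx && !ctx->recordOn) {
    NvDsSRSessionId session_id = 0;
    /* Start ~2 s before the event and record for 10 s (placeholder values). */
    NvDsSRStart (ctx, &session_id, 2, 10, NULL);
  }
}

Calling something like this from all_bbox_generated() (once per frame) would start a clip whenever a person or vehicle appears; NvDsSRStop() can end a session early, and the smart-rec-duration / smart-rec-video-cache settings in the config still bound what gets written.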