Getting a 0-byte .mp4 file after running a config file in DeepStream 6.0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU (Tesla)
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 495.44
• Issue Type( questions, new requirements, bugs): Getting 0 bytes .mp4 file after running a config file
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://…/…/streams/sample_1080p_h264.mp4
num-sources=4
#drop-frame-interval=2
gpu-id=0

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=400000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0

#set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
buffer-pool-size=4
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

##Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
model-engine-file=…/…/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tracker]
enable=1

# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so

# ll-config-file required to set different tracker types
# ll-config-file=config_tracker_IOU.yml
ll-config-file=config_tracker_NvDCF_perf.yml
# ll-config-file=config_tracker_NvDCF_accuracy.yml
# ll-config-file=config_tracker_DeepSORT.yml

gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[secondary-gie0]
enable=1
model-engine-file=…/…/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
gpu-id=0
batch-size=16
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_vehicletypes.txt

[secondary-gie1]
enable=1
model-engine-file=…/…/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
batch-size=16
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_carcolor.txt

[secondary-gie2]
enable=1
model-engine-file=…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
batch-size=16
gpu-id=0
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_carmake.txt

[tests]
file-loop=0

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description): /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

PS: I have installed DeepStream in Google Colab; the Ubuntu version in Colab is 18.04.5, the GPU is a Tesla K80, and the driver is the one mentioned above. The installation was successful, but after running a config (with no errors), I am getting a 0-byte .mp4 file.

Can you run "gst-inspect-1.0 nvinfer" to check whether DeepStream is installed correctly?

I ran the command and it gave me information about DeepStream, so I hope that means DeepStream is successfully installed.
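As a quick sanity check (just a sketch, not part of the original exchange), the same lookup can print an explicit confirmation when the plugin is found, since gst-inspect-1.0 returns a non-zero exit code for a missing element:

gst-inspect-1.0 nvinfer && echo "nvinfer plugin loaded OK"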

I just noticed another thing: the Primary_Detector folder under models has no .engine file, but one is referenced in config_infer_primary.txt.
Is that normal? I got no error about the missing .engine file. If it is a problem, please tell me how to solve it. DeepStream was installed using the .deb file for Tesla GPUs
(screenshot given below)

Screenshot from 2021-11-17 20-39-52
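For reference, the folder contents can also be checked from a shell; this is just a sketch and assumes the default .deb install location:

ls -l /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/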

The "model-engine-file" setting is checked first; if there is no engine file at that path, gst-nvinfer will generate one. So you will find the engine file after the app has run successfully for the first time.
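For context, the engine-related entries in config_infer_primary.txt look roughly like this (exact file names may differ between installs): model-engine-file is what nvinfer tries first, and model-file/proto-file are what it falls back to when it has to build the engine.

[property]
model-file=../../models/Primary_Detector/resnet10.caffemodel
proto-file=../../models/Primary_Detector/resnet10.prototxt
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
labelfile-path=../../models/Primary_Detector/labels.txt
int8-calib-file=../../models/Primary_Detector/cal_trt.bin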

To identify the problem, can you provide the complete log? We cannot reproduce the failure on our side.

When I ran this command:
deepstream-app -c /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source30_1080p_dec_preprocess_infer-resnet_tiled_display_int8.txt

I got the following two warnings:

(gst-plugin-scanner:9687): GStreamer-WARNING **: 16:19:24.176: Failed to load plugin ‘/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so’: libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:9687): GStreamer-WARNING **: 16:19:24.198: Failed to load plugin ‘/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so’: librivermax.so.0: cannot open shared object file: No such file or directory

As per the README file, the above warnings are harmless.

After these warnings, I got a 0-byte .mp4 file as output, because the sink type was set to 3 (save to file).

Can you provide the complete log?

If you set the environment variable with "export GST_DEBUG=3" before you run the case, the log will be more useful.
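For example, a run with the debug output captured to a file could look like this (piping through tee into run.log is just a suggestion, not part of the original instructions):

export GST_DEBUG=3
deepstream-app -c /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source30_1080p_dec_preprocess_infer-resnet_tiled_display_int8.txt 2>&1 | tee run.log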

I saw that it was not working with the Google Colab free version, but it is working now after I upgraded to Colab Pro. Maybe the Colab free version lacks some functionality that the Pro version provides.
(Maybe the GPU in the free version is the problem, because according to Google, Colab Pro provides a better GPU, larger disk space and longer runtimes. There is no mention of any other special functionality provided by the Pro version.)
