deepstream-test1 save to output file issue

I am trying to save the results of deepstream-test1 to a .mp4 file. I have added the below to the config:
[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

I have added g_object_set (G_OBJECT (sink), "location", "out.mp4", NULL); to the code and changed
sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
to
sink = gst_element_factory_make ("filesink", "filesink");

The program runs fine and the detection results are printed as text, but out.mp4 is not playable and is only 91 KB in size.

I can save to an output file fine with deepstream-app.

Just wondering, what am I missing in deepstream-test1?

Thanks

Hi,
If you want test1 to save the result to a file, you can refer to the code in sources/apps/apps-common/src/deepstream_sink_bin.c:268:
static gboolean create_encode_file_bin (NvDsSinkEncoderConfig * config, NvDsSinkBinSubBin * bin)
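
For reference, here is a rough sketch of what that function builds, adapted to the deepstream-test1 pipeline. Treat it as an untested outline rather than the exact create_encode_file_bin code: the variable names pipeline and nvosd come from deepstream_test1_app.c, and the H.264 encoder nvv4l2h264enc is assumed.

/* nvdsosd outputs RGBA in NVMM memory, so convert to I420 first,
 * then encode, parse and mux into MP4 before the filesink. */
GstElement *convert2, *capsfilter, *encoder, *parser2, *muxer, *sink;
GstCaps *caps;

convert2   = gst_element_factory_make ("nvvideoconvert", "convert-to-i420");
capsfilter = gst_element_factory_make ("capsfilter", "enc-caps");
encoder    = gst_element_factory_make ("nvv4l2h264enc", "h264-encoder");
parser2    = gst_element_factory_make ("h264parse", "h264-parser2");
muxer      = gst_element_factory_make ("qtmux", "mp4-muxer");
sink       = gst_element_factory_make ("filesink", "file-sink");

caps = gst_caps_from_string ("video/x-raw(memory:NVMM), format=I420");
g_object_set (G_OBJECT (capsfilter), "caps", caps, NULL);
gst_caps_unref (caps);

g_object_set (G_OBJECT (encoder), "bitrate", 2000000, NULL);
g_object_set (G_OBJECT (sink), "location", "out.mp4",
    "sync", FALSE, "async", FALSE, NULL);

gst_bin_add_many (GST_BIN (pipeline), convert2, capsfilter, encoder,
    parser2, muxer, sink, NULL);

/* Link the encode chain after nvosd instead of linking nvosd to nveglglessink. */
gst_element_link_many (nvosd, convert2, capsfilter, encoder,
    parser2, muxer, sink, NULL);

Without the encoder/parser/muxer chain, filesink just dumps the raw NVMM buffers (which are only small GPU-memory descriptors), which is most likely why out.mp4 stays tiny and unplayable.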

You can also refer to deepstream-app, which can meet your requirements; change the [sink1] group in the config samples/configs/deepstream-app/source***.txt:

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0
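
After editing the sink group, run the reference app with that config and the file set in output-file will be written by the app's own encode bin, for example (the config file name is only a placeholder for whichever samples/configs/deepstream-app/source*.txt you edited):

deepstream-app -c <path-to-your-edited-config>.txt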

@Amycao I have the same problem, and I added:

g_object_set (G_OBJECT (sink), "location", "out.mp4", "sync", FALSE, "async", FALSE, NULL);

in the code from deepstream-test2, and the output file that is created is still only a few KB.

Please refer to the code above.

I already did that. The problem is that I can't figure out which parts of create_encode_file_bin (NvDsSinkEncoderConfig * config, NvDsSinkBinSubBin * bin) I should use to encode the video properly, since there are no comments in the code.

Maybe you can point me in the right direction by providing some kind of example with a little more detail.

@Amycao I'll be more specific about my problems; maybe this will help.

From the method create_encode_file_bin (NvDsSinkEncoderConfig * config, NvDsSinkBinSubBin * bin) it seems that I need a filesink and an encoder that will save the output to a file. So far so good, but here are my problems:

  1. Do I need to add the codecparse and mux to the encoder?
    NVGSTDS_LINK_ELEMENT (bin->encoder, bin->codecparse);
    NVGSTDS_LINK_ELEMENT (bin->codecparse, bin->mux);
    NVGSTDS_LINK_ELEMENT (bin->mux, bin->sink);

  2. Where in the pipeline should I link the encoder (and the codecparse and mux, in case I need to use them as well)?

  3. When creating the encoder there is NVDS_ELEM_ENC_MPEG4, and I have no idea what that means in terms of GStreamer.

  4. Do I also need to set the profile, iframeinterval and bitrate for the encoder?

Thanks for your time.

Hi octa,marian,

Please open a new topic for your issue.

Thanks

Hello @kayccc, I already did that on the 16th (Deepstream sample apps output to file) but I got no answer.

Hi octa,marian,

It seems you did not open the topic in the right forum; I have moved it here.

Just a tiny comment: it seems that many people struggle to run the samples. Not everybody has an Ampere GPU at her desk; many instead need to run remotely on a server. Without X11 redirection it is not possible to easily explore the beauty of NVIDIA's ML solutions without rewriting the samples so that they either write to a file or stream over the network. I wonder how hard it would be for NVIDIA to integrate a workflow into all samples that makes it easy to get started with inferencing on a remote server. How hard would it be to have a common implementation (fully tested by NVIDIA) for all the samples that allows you to select whether to render locally, write to a file, or stream over RTSP - for the sake of customer satisfaction :)


We do have the sample, which can render to the display, save to a file, or stream if you run from a remote server. All of these functions can be set through the deepstream-app configuration sink group.
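
For example, on a headless server the output can be streamed over RTSP instead of being rendered locally by enabling a sink group along these lines in the deepstream-app config (the values are illustrative; check the sink group section of the DeepStream reference application documentation for your version):

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# Set the properties below for RTSPStreaming
rtsp-port=8554
udp-port=5400

The stream can then be opened from another machine, typically at rtsp://<server-ip>:8554/ds-test.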