Deepstream gst-launch-1.0

Dear People,

I have a Jetson Orin Nano.

I saw in the README of the deepstream-segmentation-app that the pipeline is:

For Jetson:
filesrc → jpegparse → nvv4l2decoder → nvstreammux → nvinfer/nvinferserver (segmentation)
nvsegvisual → nvmultistreamtiler → nv3dsink
Then, the following gst-launch-1.0 command was created:

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4 jpegparse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path = /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt ! nvsegvisual ! nvmultistreamtiler ! filesink location=seg_output.mp4

Then I got the following error:

ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data stream error.

ERROR: pipeline doesn’t want to preroll.

Regards and thank you

Hi Alfredo,

I think you are missing a ! between the filesrc and the jpegparse:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4 ! jpegparse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt ! nvsegvisual ! nvmultistreamtiler ! filesink location=seg_output.mp4

In case you get any other issues, a good starting point for debugging is to use GST_DEBUG=3 before the pipeline; that way both errors and warnings will be displayed in the terminal.
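For example, you can prefix it to the same command (here shortened to a fakesink just as a decode sanity check):

GST_DEBUG=3 gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4 ! jpegparse ! nvv4l2decoder ! fakesink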

Also, you can find reference GStreamer/Deepstream pipelines here:
https://developer.ridgerun.com/wiki/index.php/DeepStream_pipelines

Hi Emmanuel,

Yeah, I corrected the use of (!); however, I still get the same error:

gstbaseparse.c(3666): gst_base_parse_loop (): /GstPipeline:pipeline0/GstJpegParse:jpegparse0:

It seems the executable is never called (I inserted a printf() just after main() to check). It seems to be a problem of gst-launch and the jpegparse element.

I have tried this pipeline:

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream/samples/streams/video2.mp4 ! jpegparse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=640 height=360 batch-size=1 ! nvinfer config-file-path = /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt ! nvsegvisual ! nvmultistreamtiler ! nv3dsink

Because the README says:

For Jetson:
filesrc → jpegparse → nvv4l2decoder → nvstreammux → nvinfer/nvinferserver (segmentation)
nvsegvisual → nvmultistreamtiler → nv3dsink

Maybe I formulated it wrong.

How do I use GST_DEBUG=3?

Thanks

Well, I managed to run GST_DEBUG=3 and I got:

ERROR GST_PIPELINE grammar.y:738:gst_parse_perform_link: could not link jpegdec0 to nvv4l2decoder0
WARNING: erroneous pipeline: could not link jpegdec0 to nvv4l2decoder0

Hi,
For further debugging, we suggest breaking down the pipeline. You may try this first:

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream/samples/streams/video2.mp4 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nv3dsink

See if video playback runs successfully.

Hi, it runs successfully but in slow motion; the frame rate of the video is 25 Hz.

How can I insert the

config-file-path = /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt

so that the segmentation works?

I have tried the pipeline below and it works, somehow, but I do not get the 640x360 resolution; the output is much bigger and the frame rate is quite slow. It is also quite unstable, e.g. sometimes it runs and other times it does not. What could be wrong? Sorry to ask, I am new to this field. I think gst-launch goes over dstest_segmentation_config_semantic.txt and not over the executable, which is quite confusing.

  • Maybe the file dstest_segmentation_config_semantic.txt needs to be modified to get a good result.

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream/samples/streams/video2.mp4 ! jpegparse ! nvv4l2decoder mjpeg=1 ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=360 ! nvvideoconvert ! nvinfer config-file-path = /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt ! nvsegvisual width=640 height=360 ! nvmultistreamtiler rows=1 columns=1 ! nv3dsink

By the way, with this command I can see the video with a good frame rate of 25 Hz.

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4 ! qtdemux ! jpegparse ! nvv4l2decoder mjpeg=1 ! nv3dsink

Well, just to say that the following pipeline works and generates the MP4 file using the JPEG encoder.

There is one issue, though:

  • I have set the resolution to 640x360, but in the video I get 1920x1080. I do not know why. How can I change that?

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream/samples/streams/video2.mp4 ! jpegparse ! nvv4l2decoder mjpeg=1 ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=360 ! nvvideoconvert ! nvinfer config-file-path = /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt ! nvsegvisual width=640 height=360 ! nvmultistreamtiler rows=1 columns=1 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! jpegenc ! jpegparse ! qtmux ! filesink location=output2.mp4

Thank you

Hi,
Please check the setting in

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt

By default the setting is 1920x1080. You may customize it per your use case,
or use the nvvideoconvert plugin to downscale back to 640x360 after the nvinfer plugin.
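As a rough, untested sketch of the second option, the tail of the pipeline could scale through nvvideoconvert with a caps filter, for example:

... ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt ! nvsegvisual ! nvvideoconvert ! 'video/x-raw(memory:NVMM), width=640, height=360' ! nvmultistreamtiler rows=1 columns=1 width=640 height=360 ! nv3dsink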

Hi, thank you for your answer; however, I do have two questions.

  • How do I customize the

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt

to my use case? The only thing I can see in that file is the line

infer-dims=3;512;512

When I set

infer-dims=3;640;360

I get an error.

  • When you say to use the nvvideoconvert plugin to downscale back to 640x360 after the nvinfer plugin,

is it not the line I already have:

! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=360 ! nvvideoconvert ! nvinfer

Or how should it be done?

Thank you.

Hi,
Please try this command and check if the output resolution is 1280x720:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvideoconvert ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 enable-padding=0  ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt batch-size=1 unique-id=1 interval=0 ! nvsegvisual ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 gpu-id=0 nvbuf-memory-type=0 ! nveglglessink sync=0

If the command works fine, please replace the sample file with your video file and change the resolution to 640x360.

Hi, thank you for your reply.

I do get an error:

ERROR: from element GstPipeline:pipeline0/GstJpegParse:jpegparse0: Internal data stream error.

The pipeline that works but does not change the resolution is:

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4 ! jpegparse ! nvv4l2decoder mjpeg=1 ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=360 ! nvvideoconvert ! nvinfer config-file-path = /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt ! nvsegvisual width=640 height=360 ! nvmultistreamtiler rows=1 columns=1 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink sync=0

Hi,

For information, is your source file 640x360? It is better to match the resolution to the source. For example, sample_720p_mjpeg.mp4 is 1280x720, so the resolution in that command is set to 1280x720.

Hi, yeah that file video2.mp4 is 640x360.

Hi,

Correct me if I am wrong, I guess your name is Danel, right?

Well, based on your previous suggestion:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvideoconvert ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 enable-padding=0  ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt batch-size=1 unique-id=1 interval=0 ! nvsegvisual ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 gpu-id=0 nvbuf-memory-type=0 ! nveglglessink sync=0

The following two pipelines were updated. It is worth saying that I hardly understand the function of some plugins and the overall structure; it was done by trial and error.

//////THIS WORKS, IT CHANGES THE RESOLUTION FROM THE DEFAULT 1920x1080 TO 640x360/////////

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream/samples/streams/video2.mp4 ! jpegparse ! nvv4l2decoder mjpeg=1 ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=360 enable-padding=0 ! nvvideoconvert ! nvinfer config-file-path = /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt batch-size=1 ! nvsegvisual width=640 height=360 ! nvmultistreamtiler rows=1 columns=1 width=640 height=360 gpu-id=0 nvbuf-memory-type=0 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink sync=0

//////THIS WORKS, IT CHANGES THE RESOLUTION FROM THE DEFAULT 1920x1080 TO 640x360 AND SAVES IT TO A FILE/////////

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream/samples/streams/video2.mp4 ! qtdemux ! jpegparse ! nvv4l2decoder mjpeg=1 ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=360 enable-padding=0 ! nvvideoconvert ! nvinfer config-file-path = /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt ! nvsegvisual width=640 height=360 ! nvmultistreamtiler rows=1 columns=1 width=640 height=360 gpu-id=0 nvbuf-memory-type=0 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! jpegenc ! jpegparse ! qtmux ! filesink location=/output0.mp4

  • Given my lack of knowledge about building GStreamer pipelines, would you mind taking a look at the previous pipelines and telling me if something is wrong or could be improved?

  • Is it possible, instead of saving the whole video (output.mp4), to save the individual frames, like frame1.jpg, frame2.jpg, etc.? (I took a guess below.)
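For the second question, I guess something like replacing the final qtmux/filesink with a multifilesink might do it, but I have not tried it:

... ! nvvideoconvert ! jpegenc ! multifilesink location=frame%05d.jpg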

Thank you.

Hi, I think I found an issue: the segmented images with resolution 640x360 somehow do not fill the whole screen, as you can see in the picture.

[screenshot: segmented 640x360 output, image cut off]

Yeah, it is 640x360 resolution, but the image is somehow cut off.

How can I solve that issue?

For comparison, here is the same image but with the command:

./deepstream-segmentation-app dstest_segmentation_config_semantic.txt /opt/nvidia/deepstream/deepstream/samples/streams/

[screenshot: output of deepstream-segmentation-app for comparison]

Thank you

Hi,
Please try the command:

./deepstream-segmentation-app dstest_segmentation_config_semantic.txt /opt/nvidia/deepstream/deepstream/samples/streams/video2.mp4

If the above command works, you can refer to the source code to construct the gst-launch-1.0 command.
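As a rough, untested sketch, the Jetson pipeline described in the README (filesrc → jpegparse → nvv4l2decoder → nvstreammux → nvinfer → nvsegvisual → nvmultistreamtiler → nv3dsink) maps to a gst-launch-1.0 command along these lines:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/video2.mp4 ! jpegparse ! nvv4l2decoder mjpeg=1 ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=360 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt ! nvsegvisual width=640 height=360 ! nvmultistreamtiler rows=1 columns=1 width=640 height=360 ! nv3dsink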

Hi,

Thank you for your reply.

The command

./deepstream-segmentation-app dstest_segmentation_config_semantic.txt /opt/nvidia/deepstream/deepstream/samples/streams/video2.mp4

works quite well.

How can I refer to the source code to construct the gst-launch-1.0 command?

Thank you

Just wondering whether it might be possible somehow to insert the following line

./deepstream-segmentation-app dstest_segmentation_config_semantic.txt /opt/nvidia/deepstream/deepstream/samples/streams/video2.mp4

into a gst-launch-1.0 command?

  • If it is possible, that would be great.