DeepStream 6.0: Internal data stream error

Hi, I am new to DeepStream 6.0. I have a TensorRT engine (model.plan) converted from PyTorch.
The plan file is loaded successfully by deepstream-segmentation-app, but I am not able to get inference output for a random input JPEG image.

I ran the command below to test deepstream-segmentation-app and got the following error:

root@33c18080f95b:/opt/nvidia/deepstream/deepstream-6.0/samples# deepstream-segmentation-app segmentation_config_semantic.txt /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mjpeg
Now playing: /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mjpeg,
0:00:02.072053611    46 0x561dc5e7db60 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/model.plan
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x-1x-1         min: 1x3x224x224     opt: 1x3x1026x1282   Max: 1x3x1440x2560
1   OUTPUT kINT32 output          1x-1x-1         min: 0               opt: 0               Max: 0

0:00:02.072147088    46 0x561dc5e7db60 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/model.plan
0:00:02.087519621    46 0x561dc5e7db60 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:segmentation_config_semantic.txt sucessfully
Running...
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)1/1, width=(int)512, height=(int)512
0:00:02.348382995    46 0x561dc4b82940 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:02.348413055    46 0x561dc4b82940 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason not-negotiated (-4)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:dstest-image-decode-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
Deleting pipeline

I have also tested with the model shipped with the sample app (/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-segmentation-test), but it gives me the same error as above.

Attached config file:
segmentation_config_semantic.txt (3.7 KB)

When I tried with a video from the sample streams, I got the error below:

root@33c18080f95b:/opt/nvidia/deepstream/deepstream-6.0/samples# deepstream-segmentation-app segmentation_config_semantic.txt /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4
Now playing: /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4,
0:00:02.131119436    59 0x560298e41b60 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/model.plan
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x-1x-1         min: 1x3x224x224     opt: 1x3x1026x1282   Max: 1x3x1440x2560
1   OUTPUT kINT32 output          1x-1x-1         min: 0               opt: 0               Max: 0

0:00:02.131206963    59 0x560298e41b60 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/model.plan
0:00:02.147493739    59 0x560298e41b60 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:segmentation_config_semantic.txt sucessfully
Running...
ERROR from element jpeg-parser: No valid frames found before end of stream
Error details: gstbaseparse.c(3603): gst_base_parse_loop (): /GstPipeline:dstest-image-decode-pipeline/GstBin:source-bin-00/GstJpegParse:jpeg-parser
Returned, stopping playback
Deleting pipeline

Please help me with instructions on how to get correct segmented output.

Hi @amirkhan4, did you run the pipeline on a dGPU or a Jetson?

@yuweiw An Ubuntu-based VM with a T4 GPU, so dGPU.

Ok, got it.
We are analyzing the issue now and will let you know as soon as we have a result.


Hi @amirkhan4,
1. The deepstream-segmentation-app does not support mp4 or any other container at the moment; it only accepts MJPEG video.
2. Since you are in a VM environment, your VM's display system may not be working properly. Could you test that?
Use the pipeline below to check whether video plays normally in your environment. If it does not display correctly, your VM's display system is likely the problem.

gst-launch-1.0 filesrc location="mp4 video location" ! qtdemux ! h264parse ! nvv4l2decoder !  nveglglessink
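If the VM has no usable display at all, one way to check that demux and decode work independently of the display system is to replace the display sink with fakesink. This is a sketch, assuming the stock sample stream path:

```shell
# Headless decode check: same pipeline as above, but the display sink
# (nveglglessink) is replaced with fakesink so no X/EGL display is needed.
# The sample file path is an assumption; adjust "location" for your setup.
gst-launch-1.0 -v filesrc location=/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4 ! \
  qtdemux ! h264parse ! nvv4l2decoder ! fakesink sync=false
```

If this runs to end-of-stream without errors, decode is fine and the problem is isolated to the display path.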

Also, you can set the GST_DEBUG variable and show us the error log:

export GST_DEBUG=3

Hi @yuweiw,
Thank you for your response. I understand the suggested check is for displaying output if I had a direct display, but I cannot verify that because of the poor display support of an AWS EC2 GPU-based VM.

I need to store the DeepStream-processed output as a video file if the input is a video, or as an image file if the input is an image, but that does not work either. Do I need to change anything in the config file provided? Please also check whether it is correct.

Hi @amirkhan4. As I said above, deepstream-segmentation-app only supports MJPEG video, and at present it also needs a working display environment.

But if you want to test this app in your VM environment with its broken display system, you should save the inference result to a file instead of displaying it.

There are two ways to suit your needs:
1. You can change the source code to build your own pipeline that parses any video or image format and saves the output to a file (read the README and rebuild the app):

/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-segmentation-test/deepstream_segmentation_app.c
/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-segmentation-test/README

2. You can use the gst-launch-1.0 command to build your own pipeline that parses any video or image format and test it.
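As an illustration of option 2, a file-saving segmentation pipeline might be sketched as follows. This is an assumption, not a verified pipeline: it combines standard DeepStream elements (jpegparse, nvv4l2decoder, nvstreammux, nvinfer, nvsegvisual, nvv4l2h264enc), and the resolutions, paths, and caps would need adjusting for your model and install:

```shell
# Sketch: run segmentation on an MJPEG input and save the visualized
# output to an MP4 file instead of displaying it (suitable for a headless VM).
# Paths, resolutions, and element availability are assumptions; verify locally.
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mjpeg ! \
  jpegparse ! nvv4l2decoder ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=512 height=512 ! \
  nvinfer config-file-path=segmentation_config_semantic.txt ! \
  nvsegvisual ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out.mp4
```

The key change versus the sample app is the tail of the pipeline: the visualization from nvsegvisual is encoded and muxed into a file via filesink rather than being sent to a display sink.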

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.