Python in DeepStream: error {Internal data stream error} while running deepstream-test1

I am trying to run the Python-based sample apps.

NVIDIA shared these at the links below:
DS python apps on GitHub: GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications
Python bindings downloads: https://developer.nvidia.com/deepstream-download
How-To guide: deepstream_python_apps/HOWTO.md at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

I have untarred the Python bindings as shown below.

$ tar xf ds_pybind_0.5.tbz2 -C /opt/nvidia/deepstream/deepstream-4.0/sources

While running deepstream_test_1.py, I am receiving an Internal data stream error.

nvida@nvidia-334:/opt/nvidia/deepstream/deepstream-4.0/sources/python/apps/deepstream-test1$ python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline


Using winsys: x11
Opening in BLOCKING MODE
Creating LL OSD context new
0:00:01.189695840 13858     0x35066b20 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:01.190069184 13858     0x35066b20 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:34.889313152 13858     0x35066b20 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Creating LL OSD context new
Frame Number=0 Number of Objects=5 Vehicle_count=3 Person_count=2
0:00:35.606230016 13858     0x34c9ce80 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:35.606324800 13858     0x34c9ce80 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason error (-5)
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1830): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason error (-5)

It processes only frame 0:

Frame Number=0 Number of Objects=5 Vehicle_count=3 Person_count=2

What is causing this Internal data stream error? Any leads would be great!

Thank you.


Does the C version of deepstream-test1 app work for you?
/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test1

Also, are you running this over ssh with X forwarding? That doesn’t work with deepstream.

I also hit this same issue. I run this over an SSH terminal (no X forwarding) and tried removing everything OSD-related, but hit the same error: https://gist.github.com/rilut/339caddd22b843a00648de6097f23558

Does the C version of deepstream-test1 app work for you?
No. It also gives a similar error.

$ ./deepstream-test1-app sample_1080p_h265.h264
Now playing: sample_1080p_h265.h264

Using winsys: x11
Opening in BLOCKING MODE
Creating LL OSD context new
0:00:01.113532064  8891   0x558ebe1f20 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:01.113885888  8891   0x558ebe1f20 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:33.244000608  8891   0x558ebe1f20 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Creating LL OSD context new
Frame Number = 0 Number of objects = 5 Vehicle Count = 3 Person Count = 2
0:00:33.597796992  8891   0x558e44ed90 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:33.597890144  8891   0x558e44ed90 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1830): gst_nvinfer_output_loop (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)
Returned, stopping playback
Deleting pipeline

Also, are you running this over ssh with X forwarding?
I am running it on a Jetson TX2 and SSHing into it from a PC.

X11Forwarding is set to no in sshd_config on the Jetson TX2.

$ grep "^X11Forwarding" /etc/ssh/sshd_config
X11Forwarding no

How should I resolve X forwarding? And how should I set the DISPLAY environment variable?

Thanks.

Please check your ssh client side settings. Do not use “-X” when connecting.

I am running it on Jetson TX2, and doing SSH to Jetson TX2 from PC.

In this case, ssh into TX2 without “-X”. Once in the ssh session, set display:
$ export DISPLAY=:0

After that run the deepstream app. The display output will only go to the Jetson’s display, though. You can VNC in to see it.

If the Jetson is headless, then we’ll need to go find more info and get back to you.


@zhliunycm2
Yes, the Jetson is headless. No display/monitor is connected to it.

On headless systems you should use RTSP sink or file sink, as described here:
https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_faq.html
Under “How can I display graphical output remotely over VNC? How can I determine whether X11 is running?”

The Python test apps currently don’t have these sink types set up. You can do this:

  1. Switch to fakesink in the app. This won’t let you see the graphical output but should allow the app to run.
    https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/43861859fa8aca1f17cf752a208c12b0c8b7d287/apps/deepstream-test1/deepstream_test_1.py#L190
    Change
sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")

to

sink = Gst.ElementFactory.make("fakesink", "fakesink")
  2. If you can play with the gst-python parts, try replacing fakesink with RTSP streaming or a file sink. There is C sample code in /opt/nvidia/deepstream/deepstream-4.0/sources/apps/apps-common/src/deepstream_sink_bin.c.

  3. If #2 doesn’t work for you, then wait for sample code to become available.


Also, is it possible to attach a display for now?

I will try #2 and see what happens.

Also, is it possible to attach a display for now?
As per the requirements of the application I am working on, it is not possible to attach a display to the Jetson TX2.

I tried #1 and it didn’t work.
Could you please elaborate on #2 more?


For #2, you can replace the sink element with either an RTSP subpipeline or a filesink subpipeline. The /opt/nvidia/deepstream/deepstream-4.0/sources/apps/apps-common/src/deepstream_sink_bin.c sample shows how to make such subpipelines in C.

The RTSP path is shown in start_rtsp_streaming() and create_udpsink_bin(). It’s a bit involved so I haven’t had a chance to port it to Python.
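
If it helps, here is a rough, untested Python sketch of what such a port could look like, loosely following create_udpsink_bin()/start_rtsp_streaming(). It assumes the gst-rtsp-server GObject-introspection bindings (GstRtspServer) are installed and uses the nvv4l2h264enc hardware encoder; the helper name add_rtsp_output, the ports, host, and bitrate are all arbitrary choices of mine, not from the SDK. You would call it instead of creating/linking the display sink, after nvosd has been added to the pipeline, and the stream would then be viewable at rtsp://<jetson-ip>:8554/ds-test.

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer

def add_rtsp_output(pipeline, nvosd, udp_port=5400, rtsp_port=8554):
    # Convert, encode and RTP-payload the OSD output, then push it to a local UDP port.
    nvvidconv_post = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")
    capsfilter = Gst.ElementFactory.make("capsfilter", "encoder_caps")
    capsfilter.set_property("caps",
        Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))
    encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
    encoder.set_property("bitrate", 4000000)
    rtppay = Gst.ElementFactory.make("rtph264pay", "rtppay")
    udpsink = Gst.ElementFactory.make("udpsink", "udpsink")
    udpsink.set_property("host", "127.0.0.1")
    udpsink.set_property("port", udp_port)
    udpsink.set_property("async", False)
    udpsink.set_property("sync", 1)

    for elem in (nvvidconv_post, capsfilter, encoder, rtppay, udpsink):
        pipeline.add(elem)
    nvosd.link(nvvidconv_post)
    nvvidconv_post.link(capsfilter)
    capsfilter.link(encoder)
    encoder.link(rtppay)
    rtppay.link(udpsink)

    # Small RTSP server that re-serves the UDP stream at rtsp://<jetson-ip>:<rtsp_port>/ds-test
    server = GstRtspServer.RTSPServer.new()
    server.props.service = str(rtsp_port)
    server.attach(None)
    factory = GstRtspServer.RTSPMediaFactory.new()
    factory.set_launch(
        '( udpsrc name=pay0 port=%d buffer-size=524288 '
        'caps="application/x-rtp, media=video, clock-rate=90000, '
        'encoding-name=(string)H264, payload=96" )' % udp_port)
    factory.set_shared(True)
    server.get_mount_points().add_factory("/ds-test", factory)
    return server  # keep a reference so the server is not garbage collected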

The filesink path is shown in create_encode_file_bin(). I have a Python version of it below. This code can replace the main function in deepstream_test_1.py. It’s very basic, with configs hardcoded (container = mp4, bitrate = 2000000, sync = 1, output location = ./out.mp4, etc.). You can tweak those to suit your use case.

When running the pipeline, you should see object counts for each frame. Upon exit, there should be an out.mp4 in PWD.

def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvv4l2decoder for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    print("Creating muxer \n")
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    print("Creating nvinfer \n")
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")
    # Use convertor to convert from NV12 to RGBA as required by nvosd
    print("Creating converter \n")
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    print("Creating OSD\n")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    #if is_aarch64():
    #    transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating Queue \n")
    queue = Gst.ElementFactory.make("queue", "queue")
    if not queue:
        sys.stderr.write(" Unable to create queue \n")

    print("Creating converter 2\n")
    nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
    if not nvvidconv2:
        sys.stderr.write(" Unable to create nvvidconv2 \n")

    print("Creating capsfilter \n")
    capsfilter = Gst.ElementFactory.make("capsfilter", "capsfilter")
    if not capsfilter:
        sys.stderr.write(" Unable to create capsfilter \n")

    caps = Gst.Caps.from_string("video/x-raw, format=I420")
    capsfilter.set_property("caps", caps)

    print("Creating Encoder \n")
    encoder = Gst.ElementFactory.make("avenc_mpeg4", "encoder")
    if not encoder:
        sys.stderr.write(" Unable to create encoder \n")

    encoder.set_property("bitrate", 2000000)

    print("Creating Code Parser \n")
    codeparser = Gst.ElementFactory.make("mpeg4videoparse", "mpeg4-parser")
    if not codeparser:
        sys.stderr.write(" Unable to create code parser \n")

    print("Creating Container \n")
    container = Gst.ElementFactory.make("qtmux", "qtmux")
    if not container:
        sys.stderr.write(" Unable to create code parser \n")

    print("Creating Sink \n")
    sink = Gst.ElementFactory.make("filesink", "filesink")
    if not sink:
        sys.stderr.write(" Unable to create file sink \n")

    sink.set_property("location", "./out.mp4")
    sink.set_property("sync", 1)
    sink.set_property("async", 0)

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path', "dstest1_pgie_config.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(queue)
    pipeline.add(nvvidconv2)
    pipeline.add(capsfilter)
    pipeline.add(encoder)
    pipeline.add(codeparser)
    pipeline.add(container)
    pipeline.add(sink)
    #if is_aarch64():
    #    pipeline.add(transform)

    # we link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> queue -> nvvidconv2 -> capsfilter ->
    # encoder -> mpeg4-parser -> qtmux -> filesink
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    #if is_aarch64():
        #nvosd.link(transform)
        #transform.link(sink)
    #else:
    nvosd.link(queue)
    queue.link(nvvidconv2)
    nvvidconv2.link(capsfilter)
    capsfilter.link(encoder)
    encoder.link(codeparser)
    codeparser.link(container)
    container.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

If you don’t want visual output, use fakesink. It should work if you unset DISPLAY.

Hi,

I have implemented the code above with a slight modification to use a USB camera source. It seems to work properly, as an .mp4 file is generated. However, when I try to play it back on the Jetson Nano, it does not work and the error is “This file contains no playable streams”.

It seems the issue is that the EOS has not been properly sent, as stated in the thread below. How can I send that event using the Python code provided?

https://devtalk.nvidia.com/default/topic/1065388/deepstream-sdk/save-output-video-for-deepstream-4-0-1/

Regards,
Alberto

Hi Alberto, are you stopping the stream with Ctrl-C or some other signal? Your app can capture the signal and do the following (a rough sketch follows after these steps):

  1. Send EOS on the sink’s sinkpad
    sink.get_static_pad("sink").send_event(Gst.Event.new_eos())
  2. Set pipeline state to null
    pipeline.set_state(Gst.State.NULL)
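
As a rough illustration (my own untested sketch, not from the sample): a GLib SIGINT handler that injects EOS and lets the normal shutdown path run. Note this variant sends EOS at the pipeline level rather than on the sink pad, so it propagates downstream through qtmux and the MP4 trailer gets written. The helper name install_sigint_eos is made up here, and it assumes the bus_call() handler connected in main() quits the loop when it sees EOS, as the stock sample helper does.

import signal
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

def install_sigint_eos(pipeline):
    # On Ctrl-C: inject EOS so it flows downstream through qtmux and the
    # container gets finalized. bus_call() then sees the EOS message on the
    # bus, quits the main loop, and main() sets the pipeline state to NULL.
    def on_sigint(*args):
        pipeline.send_event(Gst.Event.new_eos())
        return GLib.SOURCE_REMOVE  # stop watching after the first Ctrl-C
    GLib.unix_signal_add(GLib.PRIORITY_DEFAULT, signal.SIGINT, on_sigint)

Call install_sigint_eos(pipeline) just before loop.run() in main().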

Hi there

Could you please tell me how I can unset DISPLAY?
There is the following line in the main():
nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

Should I change the arg to anything else?

Thank you


Hi roya.kh2828, to unset DISPLAY, just do this in your shell:
$ unset DISPLAY

If you want to skip visual output:

  • Use fakesink for your sink element instead of the EGL sink (see the sketch after this list):
    sink = Gst.ElementFactory.make("fakesink", "fakesink")
  • Remove the transform elements if using Jetson:
    remove: transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
    remove: pipeline.add(transform)
  • You can also remove the OSD element from the pipeline since you don’t care about its output, but leaving it is also harmless.
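
For example, a minimal sketch of those changes (element and variable names assumed from deepstream_test_1.py; the sync/qos settings and the helper name make_headless_sink are my own choices, not required):

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def make_headless_sink(pipeline, nvosd):
    # Terminate the pipeline with fakesink: no nvegltransform, no EGL display.
    # nvosd is linked straight to the sink.
    sink = Gst.ElementFactory.make("fakesink", "fakesink")
    if not sink:
        sys.stderr.write(" Unable to create fakesink \n")
    sink.set_property("sync", 0)  # no display clock to sync against
    sink.set_property("qos", 0)   # nothing is rendered, so QoS feedback is pointless
    pipeline.add(sink)
    nvosd.link(sink)
    return sink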

As an alternative to on-screen display, you can also check out the RTSP output pipeline to stream the output for remote viewing.


It works! Thank you so much.

I cannot find the link to ds_pybind_0.5. I need it for DeepStream 4.0.1; I cannot use DeepStream 5.0.

Hi preronamajumder,

Please open a new topic with more details. Thanks.

How is it so difficult for NVIDIA to release documentation, or at least help answer these questions effectively? After a year, I am still having problems!
Could someone please tell me how to get the Python sample "deepstream_test_1_usb.py" not to display anything?
I just want DeepStream to do the detections for me, and I get the bboxes from that. I do not need the OSD or the sink.
Why does "fakesink" not work?
@zhliunycm2 I tried your suggestion from 24 Jun. It does not work! It gives me warnings and errors.
Does anyone have a solution for this a year later? Thanks
