DeepStream Python app without a monitor for edge computing

Is it possible to run a DeepStream app (written in Python) without a monitor if I only want to save the output to disk? The examples on GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications all appear to require a display connected to the Jetson Orin unit in order to run.

I already have a trained model exported as a TensorRT .engine file, and I took the “Building Real-Time Video AI Applications” course, but this wasn’t covered at all.

Yes. You can refer to the FAQ to save that stream.

Thanks for pointing me to that. It might be worth adding a full example to the repo, considering the diff in that old forum post doesn’t match the current version of the sample app and it’s not clear which version it refers to.

In any case, I tried following that post and I’m still getting an error:

0:00:05.764151739 3766158     0x130b9e70 WARN               v4l2 v4l2_calls.c:637:gst_v4l2_open:<nvvideo-h264enc> error: Cannot identify device '/dev/v4l2-nvenc'.
0:00:05.764244606 3766158     0x130b9e70 WARN               v4l2 v4l2_calls.c:637:gst_v4l2_open:<nvvideo-h264enc> error: system error: No such file or directory
0:00:05.764316864 3766158     0x130b9e70 WARN       videoencoder gstvideoencoder.c:1636:gst_video_encoder_change_state:<nvvideo-h264enc> error: Failed to open encoder
Error: gst-resource-error-quark: Cannot identify device '/dev/v4l2-nvenc'. (3): /dvs/git/dirty/git-master_linux/3rdparty/gst/gst-v4l2/gst-v4l2/v4l2_calls.c(637): gst_v4l2_open (): /GstPipeline:pipeline0/nvv4l2h264enc:nvvideo-h264enc: system error: No such file or directory

This has little to do with the version. You can focus on how to save the stream to a file in the section guarded by `if args[2] == '1':` in deepstream-test1.
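For reference, that branch builds a file-saving sink chain roughly like the sketch below (a minimal sketch, not the exact FAQ diff; element names and the output path are illustrative, and `pipeline` and `nvosd` are assumed to exist as in deepstream-test1):

# Sketch: replace the display sink with an encode-and-save chain.
nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
capsfilter = Gst.ElementFactory.make("capsfilter", "capsfilter")
capsfilter.set_property(
    "caps", Gst.caps_from_string("video/x-raw(memory:NVMM), format=I420"))
encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")  # HW encoder
parser = Gst.ElementFactory.make("h264parse", "h264parse2")
muxer = Gst.ElementFactory.make("qtmux", "qtmux")              # MP4 container
sink = Gst.ElementFactory.make("filesink", "filesink")
sink.set_property("location", "./out.mp4")  # illustrative output path
sink.set_property("sync", 0)

for elem in (nvvidconv2, capsfilter, encoder, parser, muxer, sink):
    pipeline.add(elem)
nvosd.link(nvvidconv2)
nvvidconv2.link(capsfilter)
capsfilter.link(encoder)
encoder.link(parser)
parser.link(muxer)
muxer.link(sink)

With sync=0 on the filesink, the pipeline can run faster than real time, which is what you want on a headless box.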
Please provide complete information as applicable to your setup. Thanks
Hardware Platform (Jetson / GPU)
DeepStream Version
JetPack Version (valid for Jetson only)
TensorRT Version
NVIDIA GPU Driver Version (valid for GPU only)
Issue Type (questions, new requirements, bugs)
How to reproduce the issue? (This is for bugs. Include which sample app is used, the contents of the configuration files, the command line used, and other details for reproducing.)
Requirement details (This is for a new requirement. Include the module name, for which plugin or which sample application, and the function description.)

Hardware Platform: Jetson Orin
DeepStream Version: 6.2
JetPack Version: 5.1.1
TensorRT Version: 8.5.2

Here is a minimal example that doesn’t even run a model; it just copies the video to a new file. It produces the error I posted yesterday:

import sys
import os
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst

import pyds


def bus_call(bus, message, loop):
    t = message.type
    if t == Gst.MessageType.EOS:
        sys.stdout.write("End-of-stream\n")
        loop.quit()
    elif t == Gst.MessageType.WARNING:
        err, debug = message.parse_warning()
        sys.stderr.write("Warning: %s: %s\n" % (err, debug))
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write("Error: %s: %s\n" % (err, debug))
        loop.quit()
    return True


def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline")
    pipeline = Gst.Pipeline()
    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is an elementary h264 stream,
    # we need an h264parse element
    print("Creating H264Parser")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvv4l2decoder for hardware-accelerated decode on the GPU
    print("Creating Decoder")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use a converter to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "nvvid-converter1")
    if not nvvidconv1:
        sys.stderr.write(" Unable to create nvvidconv1 \n")
    capfilt = Gst.ElementFactory.make("capsfilter", "nvvideo-caps")
    if not capfilt:
        sys.stderr.write(" Unable to create capfilt \n")
    caps = Gst.caps_from_string('video/x-raw(memory:NVMM), format=I420')
    capfilt.set_property('caps', caps)
    print("Creating nvv4l2h264enc")
    nvh264enc = Gst.ElementFactory.make("nvv4l2h264enc", "nvvideo-h264enc")
    if not nvh264enc:
        sys.stderr.write(" Unable to create nvh264enc \n")
    print("Creating filesink")
    sink = Gst.ElementFactory.make("filesink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create filesink \n")
    # Save next to the input, e.g. video.h264 -> video_dscopy.h264
    sink.set_property('location', '_dscopy'.join(os.path.splitext(args[1])))

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)

    print("Adding elements to Pipeline")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv1)
    pipeline.add(capfilt)
    pipeline.add(nvh264enc)
    pipeline.add(sink)
    
    print("Linking elements in the Pipeline")
    source.link(h264parser)
    h264parser.link(decoder)
    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(nvvidconv1)
    nvvidconv1.link(capfilt)
    capfilt.link(nvh264enc)
    nvh264enc.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Add a probe to get informed of the generated metadata; we add the probe
    # to the sink pad of the osd element, since by that time the buffer will
    # have all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    # osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # start playback and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)


if __name__ == '__main__':
    sys.exit(main(sys.argv))

@yuweiw any idea what is causing that error?

It could be your device that’s causing the problem. You can refer to “How to connect a USB camera in DeepStream” first.

I’m not using a camera currently. I’m trying to run a sample app that loads a video from a local file (h264).

Please provide complete information as applicable to your setup. Thanks
Hardware Platform (Jetson / GPU)
DeepStream Version
JetPack Version (valid for Jetson only)
TensorRT Version
NVIDIA GPU Driver Version (valid for GPU only)
Issue Type (questions, new requirements, bugs)
How to reproduce the issue? (This is for bugs. Include which sample app is used, the contents of the configuration files, the command line used, and other details for reproducing.)
Requirement details (This is for a new requirement. Include the module name, for which plugin or which sample application, and the function description.)

Maybe your device doesn’t have a hardware encoder.

@yuweiw I already filled in those details a few posts back. It’s a Jetson Orin Nano.

The Jetson Orin Nano does not support hardware encoding. You can use x264enc to encode the stream in software.
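A minimal sketch of that change against the script above (x264enc consumes system-memory buffers, so the caps must not request memory:NVMM; the encoder settings are illustrative):

# Software H.264 encoding for boards without NVENC (e.g. Orin Nano).
# nvvideoconvert copies out of NVMM when downstream asks for plain
# system memory, so only the caps string needs to change.
caps = Gst.caps_from_string('video/x-raw, format=I420')
capfilt.set_property('caps', caps)

nvh264enc = Gst.ElementFactory.make("x264enc", "nvvideo-h264enc")
if not nvh264enc:
    sys.stderr.write(" Unable to create x264enc \n")
# Illustrative tuning for a headless box; the defaults also work.
nvh264enc.set_property("speed-preset", "ultrafast")
nvh264enc.set_property("tune", "zerolatency")

Keeping the nvh264enc variable name means the add/link code later in the script does not need to change.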

I am now getting errors saying the following elements cannot be created in that script:

Unable to create NvStreamMux
Unable to create nvvidconv
Unable to create nvosd

Any idea what could be causing this? Or is there an example somewhere (ideally one that includes a Dockerfile) that I could refer to that works on a Jetson Orin Nano without a monitor attached?

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

Did you follow these steps to configure your board? jetson-setup.
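If the board is set up, a quick check like the sketch below can confirm whether GStreamer can see the DeepStream elements at all (element names are the ones from your script; Gst.ElementFactory.find only tests plugin registration, not that the element works):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# Element names from the script earlier in this thread.
for name in ("nvstreammux", "nvvideoconvert", "nvdsosd", "nvv4l2decoder"):
    factory = Gst.ElementFactory.find(name)
    print("%-16s %s" % (name, "found" if factory else "MISSING"))

If these show MISSING, deleting the GStreamer registry cache (~/.cache/gstreamer-1.0) and re-running sometimes helps after a DeepStream install.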

All of our official Docker images are in the following directory: deepstream.

Here is how to save the stream to a file: FAQ.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.