DeepStream Python Apps and Bindings question

When I run this:

python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_1080p_h264.mp4

It returns this:

Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_1080p_h264.mp4 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 


Using winsys: x11 
Opening in BLOCKING MODE 
Creating LL OSD context new
Error: gst-stream-error-quark: Internal data stream error. (1): gstbaseparse.c(3611): gst_base_parse_loop (): /GstPipeline:pipeline0/GstH264Parse:h264-parser:
streaming stopped, reason not-negotiated (-4)

The code generates the engine, but when it tries to run the video it faults.

Thanks for trying the Python apps. Test apps 1, 2, and 4 only work with H264 elementary streams such as sample_720p.h264. The test3 app works with MP4 containers.
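
For example, pointing test1 at the elementary stream that ships with the SDK should work (assuming the default DeepStream 4.0 install path used in your command above):

python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264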

Thanks, that works.
Also, just to see if it worked, I pointed “dstest1_pgie_config.txt” at the “Primary_Detector_Nano” model.
That worked as well. It ran at around 32 FPS. Not bad.
Is there a way to use a webcam?

For reference, here is the “dstest1_pgie_config.txt” I used:

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   enable-dbscan(Default=false), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector_Nano/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector_Nano/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
labelfile-path=../../../../samples/models/Primary_Detector_Nano/labels.txt
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/libnvdsparsebbox.so
#enable-dbscan=1

[class-attrs-all]
threshold=0.2
group-threshold=1
## Set eps=0.7 and minBoxes for enable-dbscan=1
eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

## Per class configuration
#[class-attrs-2]
#threshold=0.6
#eps=0.5
#group-threshold=3
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800
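
A side note on the header comment above: anything set in this file can still be overridden from Python through GObject properties on the nvinfer element. A minimal sketch (assuming the standard Gst-nvinfer properties; not taken verbatim from the sample apps):

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property('config-file-path', "dstest1_pgie_config.txt")
# A batch-size set here takes precedence over batch-size=1 in the config file.
pgie.set_property('batch-size', 1)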

For USB webcam support, please see the sample code at https://devtalk.nvidia.com/default/topic/1066912/deepstream-sdk/deepstream-now-supports-python-/post/5408100/#5408100
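
Roughly, that snippet swaps the filesrc / h264parse / nvv4l2decoder chain for a v4l2src capture chain feeding nvstreammux. A loose sketch of the idea (element and variable names here are illustrative, not the exact code from the linked post; /dev/video0 is assumed):

source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
source.set_property('device', '/dev/video0')
# Convert the raw camera frames on the CPU, then move them into NVMM memory
# so nvstreammux can accept them.
vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1")
nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2")
caps_nvmm = Gst.ElementFactory.make("capsfilter", "nvmm_caps")
caps_nvmm.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM)"))

source.link(vidconvsrc)
vidconvsrc.link(nvvidconvsrc)
nvvidconvsrc.link(caps_nvmm)
# The src pad of caps_nvmm is then linked to the sink_0 request pad of streammux.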

Awesome, that gets the webcam working.
It only runs at about 3 FPS, though. Does it just need some experimenting to get a faster frame rate?

@adventuredaisy, to negotiate a particular frame rate (say, 30 frames per second), we need to add a GStreamer capsfilter plugin after v4l2src with the property “framerate=30/1”.
You can check the frame rates supported by your UVC device (say, at /dev/video0) with:

v4l2-ctl --all -d /dev/video0

I have added this capsfilter for “framerate=30/1” to the code snippet shared at:
https://devtalk.nvidia.com/default/topic/1066912/deepstream-sdk/deepstream-now-supports-python-/post/5408100/#5408100
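
Continuing the sketch above, the added capsfilter would sit right after v4l2src so that the 30 fps mode is negotiated with the camera (again just an illustration; the linked snippet is the reference):

caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=30/1"))

source.link(caps_v4l2src)
caps_v4l2src.link(vidconvsrc)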

That works nicely.
30 FPS.
Thank you.

I added the caps_filter to deepstream_test1.py, but I don't get any output after:
Creating Pipeline

Creating Source

creating caps_filter

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file …/…/…/…/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

Creating LL OSD context new
0:00:00.277747402 10080 0xa605cc0 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:05.551702831 10080 0xa605cc0 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine

The modified code for the main function is:

def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    print("creating caps_filter \n")
    caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
    if not caps_v4l2src:
        sys.stderr.write(" Unable to create v4l2src capsfilter \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=20/1"))

    print("Playing file %s " % args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path', "dstest1_pgie_config.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(caps_v4l2src)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # file-source -> capsfilter -> h264-parser -> nvv4l2-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(caps_v4l2src)
    caps_v4l2src.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Let's add a probe to get informed of the metadata generated; we add the probe to
    # the sink pad of the osd element, since by that time the buffer will have
    # got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # start playback and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)