Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson AGX Orin
• DeepStream Version: 6.2
• JetPack Version: not sure; Tegra release info: R35 (release), REVISION: 3.1, GCID: 32827747, BOARD: t186ref, EABI: aarch64, DATE: Sun Mar 19 15:19:21 UTC 2023
• TensorRT Version: 8.5.2-1+cuda11.4
I am trying to run a DeepStream YOLOv8 analysis on an MJPEG AVI image stream, but I get an error I don't understand. I got it working with H.264 using the sample file, and now I want it to work with my MJPEG file.

This is my nvinfer config file for the H.264 sample in deepstream-test1:
```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=yolov8s.cfg
model-file=yolov8s.wts
model-engine-file=model_b1_gpu0_fp32.engine
#model-engine-file=model_b1_gpu0_int8.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
#network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
```
Then I changed the code to run an MJPEG AVI file. This is the Python script:
```python
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst

Gst.init(None)

def on_message(bus, message, loop):
    # Minimal bus callback: quit the main loop on EOS or error
    t = message.type
    if t == Gst.MessageType.EOS:
        loop.quit()
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write(f"Error: {err}: {debug}\n")
        loop.quit()
    return True

def main():
    # Create the main loop
    loop = GObject.MainLoop()

    # Create a GStreamer pipeline
    pipeline = Gst.Pipeline.new("avi-mjpeg-player")

    # Create pipeline elements
    source = Gst.ElementFactory.make("filesrc", "source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")
    decode = Gst.ElementFactory.make("jpegdec", "decode")
    if not decode:
        sys.stderr.write(" Unable to create decode \n")
    convert = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not convert:
        sys.stderr.write(" Unable to create nvvidconv \n")
    infer = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not infer:
        sys.stderr.write(" Unable to create inference \n")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")
    #convert = Gst.ElementFactory.make("videoconvert", "convert")
    #if not convert:
    #    sys.stderr.write(" Unable to create convert \n")
    sink = Gst.ElementFactory.make("xvimagesink", "nv3d-sink")
    if not sink:
        sys.stderr.write(" Unable to create sink \n")

    # Set the input AVI file path
    source.set_property("location", "/home/aiadmin/Development/deepstream-test1/data/output_1600_1300.avi")

    # Set the infer configuration file
    infer.set_property("config-file-path", "dstest1_pgie_config.txt")

    # Build the pipeline
    pipeline.add(source)
    pipeline.add(decode)
    pipeline.add(convert)
    pipeline.add(infer)
    pipeline.add(nvosd)
    pipeline.add(sink)
    source.link(decode)
    decode.link(convert)
    convert.link(infer)
    infer.link(nvosd)
    nvosd.link(sink)

    # Set the pipeline to playing state
    pipeline.set_state(Gst.State.PLAYING)

    # Set up the bus to watch for messages
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", on_message, loop)

    try:
        loop.run()
    except KeyboardInterrupt:
        pass
    finally:
        # Clean up
        pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    main()
```
This last one does not work. It gives me:

```
0:00:05.768482887  8206  0x26ae6a0 WARN  nvinfer gstnvinfer.cpp:1492:gst_nvinfer_process_full_frame: error: NvDsBatchMeta not found for input buffer.
Error: NvDsBatchMeta not found for input buffer.
```
I see that in the working deepstream-test1 example the sink is an nv3dsink, but that does not work either…

Could it be that the nvstreammux is missing, together with the `def osd_sink_pad_buffer_probe(pad, info, u_data):` probe? My attempt at wiring the muxer in is sketched below.
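For reference, this is roughly how I read the nvstreammux wiring in deepstream-test1, transplanted into my pipeline between nvvideoconvert and nvinfer. It is only an untested sketch: the width/height values are my guesses to match the 1600x1300 input, and the element/pad names are my own, not from the sample.

```python
# Sketch (untested): insert nvstreammux between the converter and nvinfer,
# following the deepstream-test1 pattern. Property values are my guesses.
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
if not streammux:
    sys.stderr.write(" Unable to create streammux \n")
streammux.set_property("batch-size", 1)
streammux.set_property("width", 1600)    # guess: match the 1600x1300 AVI
streammux.set_property("height", 1300)
streammux.set_property("batched-push-timeout", 4000000)
pipeline.add(streammux)

# nvstreammux exposes request pads named sink_0, sink_1, ... (one per source)
sinkpad = streammux.get_request_pad("sink_0")
srcpad = convert.get_static_pad("src")
srcpad.link(sinkpad)

# The muxer then feeds nvinfer instead of linking convert to infer directly
streammux.link(infer)
```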
Any ideas?