Unable to start YOLOv8 DeepStream with MJPEG AVI

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson AGX Orin
• DeepStream 6.2
• JetPack Version: not sure
• Tegra: 35 (release), REVISION: 3.1, GCID: 32827747, BOARD: t186ref, EABI: aarch64, DATE: Sun Mar 19 15:19:21 UTC 2023
• TensorRT Version: 8.5.2-1+cuda11.4

I am trying to start a DeepStream YOLOv8 analysis of an MJPEG AVI image stream, but I get an error I don't understand.
I got it to work with H.264 using the sample file, and now I want it to work with my MJPEG file.

This is my config file for the H.264 sample in deepstream-test1:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=yolov8s.cfg
model-file=yolov8s.wts

model-engine-file=model_b1_gpu0_fp32.engine

#model-engine-file=model_b1_gpu0_int8.engine
#int8-calib-file=calib.table

labelfile-path=labels.txt
batch-size=1

network-mode=0
#network-mode=1

num-detected-classes=4
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
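
(As an aside: net-scale-factor=0.0039215697906911373 is just 1/255 at float32 precision, i.e. it rescales 8-bit pixel values into the [0, 1] range before inference.)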

Then I changed the code to run an MJPEG AVI file. This is the Python script:

def main():
    # Create the main loop
    loop = GObject.MainLoop()

    # Create a GStreamer pipeline
    pipeline = Gst.Pipeline.new("avi-mjpeg-player")

    # Create pipeline elements
    source = Gst.ElementFactory.make("filesrc", "source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    decode = Gst.ElementFactory.make("jpegdec", "decode")
    if not decode:
        sys.stderr.write(" Unable to create decode \n")

    convert = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not convert:
        sys.stderr.write(" Unable to create nvvidconv \n")

    infer = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not infer:
        sys.stderr.write(" Unable to create inference \n")

    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    #convert = Gst.ElementFactory.make("videoconvert", "convert")
    #if not convert:
    #    sys.stderr.write(" Unable to create convert \n")

    sink = Gst.ElementFactory.make("xvimagesink", "nv3d-sink")
    if not sink:
        sys.stderr.write(" Unable to create sink \n")

    # Set the input AVI file path
    source.set_property("location", "/home/aiadmin/Development/deepstream-test1/data/output_1600_1300.avi")

    # Set the infer configuration file
    infer.set_property("config-file-path", "dstest1_pgie_config.txt")

    # Build the pipeline
    pipeline.add(source)
    pipeline.add(decode)
    pipeline.add(convert)
    pipeline.add(infer)
    pipeline.add(nvosd)
    pipeline.add(sink)

    source.link(decode)
    decode.link(convert)
    convert.link(infer)
    infer.link(nvosd)
    nvosd.link(sink)

    # Set the pipeline to playing state
    pipeline.set_state(Gst.State.PLAYING)

    # Set up the bus to watch for messages
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", on_message, loop)

    try:
        loop.run()
    except KeyboardInterrupt:
        pass
    finally:
        # Clean up
        pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    main()
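
(The script references an on_message handler that is not shown; a minimal sketch of such a bus callback, assuming standard EOS/error handling, would be:)

def on_message(bus, message, loop):
    # Stop the main loop on end-of-stream or on error.
    t = message.type
    if t == Gst.MessageType.EOS:
        loop.quit()
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write("Error: %s %s\n" % (err, debug))
        loop.quit()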

This last version does not work. It gives me:
0:00:05.768482887 8206 0x26ae6a0 WARN nvinfer gstnvinfer.cpp:1492:gst_nvinfer_process_full_frame: error: NvDsBatchMeta not found for input buffer. Error: NvDsBatchMeta not found for input buffer.

I see in the working deepstream-test1 example that the sink is an nv3dsink. But that does not work either…
Could it be that the nvstreammux is missing, along with the osd_sink_pad_buffer_probe(pad, info, u_data) callback?
Any ideas?
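
(For reference, nvinfer only works on batched buffers carrying NvDsBatchMeta, and only nvstreammux attaches that metadata, which matches the error above. A minimal sketch of muxing the converted stream before nvinfer; the dimensions follow the MJPEG file, while the capsfilter and NV12 format are assumptions:)

# nvstreammux attaches the NvDsBatchMeta that nvinfer complains about,
# so the converter must feed a muxer request pad instead of linking
# straight to nvinfer.
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("width", 1600)
streammux.set_property("height", 1300)
streammux.set_property("batch-size", 1)
pipeline.add(streammux)

# nvstreammux expects hardware (NVMM) buffers, so force them out of
# nvvideoconvert with a capsfilter.
caps = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=NV12")
capsfilter = Gst.ElementFactory.make("capsfilter", "nvmm-caps")
capsfilter.set_property("caps", caps)
pipeline.add(capsfilter)

convert.link(capsfilter)
sinkpad = streammux.get_request_pad("sink_0")
capsfilter.get_static_pad("src").link(sinkpad)
streammux.link(infer)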

I managed to fix the pipeline so that I get the image from the video stream, but I doubt that it is running any inference on it. Running deepstream-app -c … shows a box very briefly, so I know it finds something. The same YOLOv8 model with Python does not print any box, which leads me to believe that it either does not create the box at all or is unable to link to the streammux and sink pad (not sure what I am talking about here).
See the source code attached.

gst_test.py (7.8 KB)

Your pipeline is not right. You are using an AVI stream, but you use jpegdec as the decoder. Please refer to the demo to create a uridecodebin source.
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-test3/deepstream_test_3.py#L180
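
(A hedged sketch of the demo's approach, with the file path illustrative: uridecodebin picks the demuxer and decoder for the container itself, and the video pad only appears at runtime via the "pad-added" signal, where it gets linked to nvstreammux:)

# uridecodebin chooses the demuxer/decoder for the container; the video
# pad does not exist until the stream is inspected, so it is linked to
# the streammux request pad inside the "pad-added" callback (cb_newpad
# in the demo).
uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
if not uri_decode_bin:
    sys.stderr.write(" Unable to create uridecodebin \n")
uri_decode_bin.set_property("uri", "file:///path/to/output_1600_1300.avi")
uri_decode_bin.connect("pad-added", cb_newpad, None)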

I understand there is something wrong, but I do get the file to play in a window.
The example you point to does not play the AVI file. It uses nvjpegdec, and that cannot decode the images from the file.

Decodebin child added: nvjpegdec0

In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffff8afaf700 (GstCapsFeatures at 0xfffec4002ea0)>
Error: gst-stream-error-quark: Internal data stream error. (1): gstavidemux.c(5791): gst_avi_demux_loop (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstDecodeBin:decodebin0/GstAviDemux:avidemux0:
streaming stopped, reason not-negotiated (-4)
Exiting app
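
(For context, cb_newpad in deepstream_test_3.py only wires up the decoder pad when the caps carry NVMM memory features; the log above shows plain video/x-raw from nvjpegdec, so nothing is linked and the demuxer stops with not-negotiated. A rough paraphrase of that check:)

# Paraphrase of the demo's cb_newpad: only hardware (NVMM) buffers are
# accepted; system-memory video/x-raw, as in the log above, is rejected
# and the pipeline never negotiates.
def cb_newpad(decodebin, decoder_src_pad, data):
    caps = decoder_src_pad.get_current_caps()
    gstname = caps.get_structure(0).get_name()
    features = caps.get_features(0)
    if gstname.find("video") != -1:
        if features.contains("memory:NVMM"):
            # data is the source bin; expose the decoder pad through its
            # ghost pad so the bin can be linked to nvstreammux.
            data.get_static_pad("src").set_target(decoder_src_pad)
        else:
            sys.stderr.write("Decodebin did not pick an NVIDIA decoder plugin\n")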

This is the mediainfo data:

General
Complete name : output_1600_1300.avi
Format : AVI
Format/Info : Audio Video Interleave
File size : 441 MiB
Duration : 55 s 0 ms
Overall bit rate : 67.2 Mb/s

Video
ID : 0
Format : JPEG
Codec ID : MJPG
Duration : 55 s 0 ms
Bit rate : 67.2 Mb/s
Width : 1 600 pixels
Height : 1 300 pixels
Display aspect ratio : 5:4
Frame rate : 30.000 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Compression mode : Lossy
Bits/(Pixel*Frame) : 1.077
Stream size : 441 MiB (100%)

I went a bit crazy and remade a video from images with H.264 encoding.
The result is different of course, but I thought that perhaps I should have some sort of baseline.
So if this works, as the sample 1080p_h264.mp4 does, then my file should work too.
The problem is that I get an error again:

Error

NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Error: Internal data stream error.

# Create the main loop
loop = GObject.MainLoop()

# Create a GStreamer pipeline
pipeline = Gst.Pipeline.new("avi-mjpeg-player")

# Create pipeline elements
source = Gst.ElementFactory.make("filesrc", "source")
if not source:
    sys.stderr.write(" Unable to create Source \n")

decoder = Gst.ElementFactory.make("decodebin", "decode")
if not decoder:
    sys.stderr.write(" Unable to create decode \n")

convert = Gst.ElementFactory.make("nvvideoconvert", "convertor") 
if not convert:
    sys.stderr.write(" Unable to create nvvidconv \n")

# Create nvstreammux instance to form batches from one or more sources.
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
if not streammux:
    sys.stderr.write(" Unable to create NvStreamMux \n")           

infer = Gst.ElementFactory.make("nvinfer", "primary-inference")
if not infer:
    sys.stderr.write(" Unable to create inference \n")

nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")    
if not nvosd:
    sys.stderr.write(" Unable to create nvosd \n")

sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")        
if not sink:
    sys.stderr.write(" Unable to create sink \n")


# Set the input file path
source.set_property("location", "/home/aiadmin/sample_1080p_h264.mp4")

Format

General
Complete name : /home/aiadmin/output_video.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom/iso2/avc1/mp41)
File size : 68.1 MiB
Duration : 6 min 44 s
Overall bit rate : 1 411 kb/s
Writing application : Lavf58.29.100

Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L3.1
Format settings : CABAC / 4 Ref Frames
Format settings, CABAC : Yes
Format settings, Reference frames : 4 frames
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 6 min 44 s
Bit rate : 1 409 kb/s
Width : 1 280 pixels
Height : 720 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 30.000 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.051
Stream size : 68.0 MiB (100%)
Writing library : x264 core 155 r2917 0a84d98
Encoding settings : cabac=1 / ref=3 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=7 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=12 / lookahead_threads=2 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=1 / b_bias=0 / direct=1 / weightb=1 / open_gop=0 / weightp=2 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc_lookahead=40 / rc=crf / mbtree=1 / crf=23.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00
Codec configuration box : avcC

First, AVI is a container format. JPEG, AVC, and H.264 are codec formats. They are different things.
Could you attach the video source for us?

Thanks for your support!

As you might have noticed, and as I tried to show, I recreated a film from JPEGs using:

ffmpeg -framerate 30 -pattern_type glob -i './images/*.jpg' -c:v libx264 -pix_fmt yuvj420p output_video.mp4

While waiting, I tried to create a playable pipeline as shown below and saw that the avdec_h264 decoder wasn't there. I also took the time to learn more about GStreamer pipelines. So I tried this:

gst-launch-1.0 filesrc location=output_video.mp4 ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! avdec_h264 ! nveglglessink -e

I got it to work after finding out that gst-inspect could not find the avdec_h264 decoder. After some work I found the fix below. So I did this:

export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1

and reloaded bash

source ~/.bashrc

Then I could run the video in the console with gst-launch. The problem still remains with deepstream-test1.py, or rather I have not had time to test it yet. I will see if I can modify the pipeline so that the input is parsed properly, as described by you! If you get it to work with three streams, though, I would be happy as a penguin on ice.

If I get this working I will make two very good tutorials on this matter: first, MPEG-4 streaming for evaluating models, and second, how to take in three USB cameras using v4l2src and run inference.
A reason for going with DeepStream is the hope that it is more stable and faster at running inference and saving the box output than OpenCV.

gst-launch-1.0 filesrc location=…/output_video.mp4 ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! nvinfer config-file-path=dstest1_pgie_config.txt ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink -e

Now this works with the file: I get a box to appear in the video. So this needs to be converted into GStreamer Python.
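
(A hedged sketch of that conversion: Gst.parse_launch accepts the same pipeline description, so the working command line can be reused almost verbatim; the file path here is illustrative:)

import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Same element chain as the working gst-launch-1.0 command above.
pipeline = Gst.parse_launch(
    "filesrc location=../output_video.mp4 ! qtdemux ! queue ! h264parse ! "
    "nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1280 height=720 "
    "batch-size=1 ! nvinfer config-file-path=dstest1_pgie_config.txt ! "
    "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink"
)
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)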
My question now: if I get this to work with gst-launch-1.0, I would assume that my libraries are all installed and intact (which they were not from the start), and I can then proceed to a multifilesrc or nvurisrcbin setup. This, however, I have failed at when combining the streams into a tiled view.

Any help is still appreciated, since 99% of my work on this goes into verifying that my installation is OK.
@yuweiw Any possibility to help?

Second question: is it possible to get paid support from NVIDIA at an hourly rate?

Could you try to use our deepstream-test3 to test your video source?

python3 deepstream_test_3.py -i file:///your_video_source

Could you provide a more detailed description of the newest issues you have encountered?

Hi,

thank you for answering.
Yes, I managed to play one and two video streams using that. It finally parses properly now. What I did not see with my file was boxes on any frame; I hope that is due to what the model is trained on. When trying another mp4 file from the sample streams, I do get the boxes.

When I tried deepstream-test1.py and the model for that sample, I got at least one box on my output_video.mp4. So now I wonder whether the model actually runs on my video at all, or just can't draw the boxes?
I would expect something to be picked up with low confidence. I will check with my YOLOv8 model using test3.

@yuweiw: I will try to be detailed.
(Summary: it works with one output_video.mp4 and YOLOv8, but not with two files for inference. It does work with two files and the original config file using ResNet…)

deepstream_python_apps/apps/deepstream-test1

This contains a modification of the config file for YOLOv8, to run inference on a sample H.264 file.
This works and contains:
yolov8s.cfg
yolov8s.wts
model_b1_gpu0_fp32.engine
labels.txt
Also a folder called nvdsinfer_custom_impl_Yolo from the DeepStream-Yolo implementation by

GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

And the config file looks like this:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=yolov8s.cfg
model-file=yolov8s.wts

model-engine-file=model_b1_gpu0_fp32.engine

#model-engine-file=model_b1_gpu0_int8.engine
#int8-calib-file=calib.table

labelfile-path=labels.txt
batch-size=1

network-mode=0
#network-mode=1

num-detected-classes=4
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

When running this I get the boxes on the image.

Now, the part which does not work: I take the files mentioned above and move them to

deepstream_python_apps/apps/deepstream-test3

Running with the previously working mp4 I created from images (or the H.264 sample file), it does not start the video. It freezes. See the output:

python3 deepstream_test_3.py -i …/output_video.mp4
{'input': ['…/output_video.mp4'], 'configfile': None, 'pgie': None, 'no_display': False, 'file_loop': False, 'disable_probe': False, 'silent': False}
Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
source-bin-00
Creating Pgie

Creating tiler

Creating nvvidconv

Creating nvosd

Creating nv3dsink

Adding elements to Pipeline

Linking elements in the Pipeline

Now playing…
0 : …/output_video.mp4
Starting pipeline
WARNING: Deserialize engine failed because file path: /home/aiadmin/Development/deepstream-test3/model_b1_gpu0_fp32.engine open error
0:00:03.548913698 5712 0x2d9a0550 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/home/aiadmin/Development/deepstream-test3/model_b1_gpu0_fp32.engine failed
0:00:03.753524153 5712 0x2d9a0550 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/home/aiadmin/Development/deepstream-test3/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:03.753666138 5712 0x2d9a0550 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.

Loading pre-trained weights
Loading weights of yolov8s complete
Total weights read: 11211776
Building YOLO network

    Layer                         Input Shape         Output Shape        WeightPtr

(0) conv_silu [3, 640, 640] [32, 320, 320] 992
(1) conv_silu [32, 320, 320] [64, 160, 160] 19680
(2) conv_silu [64, 160, 160] [64, 160, 160] 24032
(3) c2f_silu [64, 160, 160] [96, 160, 160] 42720
(4) conv_silu [96, 160, 160] [64, 160, 160] 49120
(5) conv_silu [64, 160, 160] [128, 80, 80] 123360
(6) conv_silu [128, 80, 80] [128, 80, 80] 140256
(7) c2f_silu [128, 80, 80] [256, 80, 80] 288736
(8) conv_silu [256, 80, 80] [128, 80, 80] 322016
(9) conv_silu [128, 80, 80] [256, 40, 40] 617952
(10) conv_silu [256, 40, 40] [256, 40, 40] 684512
(11) c2f_silu [256, 40, 40] [512, 40, 40] 1276384
(12) conv_silu [512, 40, 40] [256, 40, 40] 1408480
(13) conv_silu [256, 40, 40] [512, 20, 20] 2590176
(14) conv_silu [512, 20, 20] [512, 20, 20] 2854368
(15) c2f_silu [512, 20, 20] [768, 20, 20] 4036064
(16) conv_silu [768, 20, 20] [512, 20, 20] 4431328
(17) conv_silu [512, 20, 20] [256, 20, 20] 4563424
(18) maxpool [256, 20, 20] [256, 20, 20] -
(19) maxpool [256, 20, 20] [256, 20, 20] -
(20) maxpool [256, 20, 20] [256, 20, 20] -
(21) route: 17, 18, 19, 20 - [1024, 20, 20] -
(22) conv_silu [1024, 20, 20] [512, 20, 20] 5089760
(23) upsample [512, 20, 20] [512, 40, 40] -
(24) route: 23, 12 - [768, 40, 40] -
(25) conv_silu [768, 40, 40] [256, 40, 40] 5287392
(26) c2f_silu [256, 40, 40] [384, 40, 40] 5583328
(27) conv_silu [384, 40, 40] [256, 40, 40] 5682656
(28) upsample [256, 40, 40] [256, 80, 80] -
(29) route: 28, 8 - [384, 80, 80] -
(30) conv_silu [384, 80, 80] [128, 80, 80] 5732320
(31) c2f_silu [128, 80, 80] [192, 80, 80] 5806560
(32) conv_silu [192, 80, 80] [128, 80, 80] 5831648
(33) conv_silu [128, 80, 80] [128, 40, 40] 5979616
(34) route: 33, 27 - [384, 40, 40] -
(35) conv_silu [384, 40, 40] [256, 40, 40] 6078944
(36) c2f_silu [256, 40, 40] [384, 40, 40] 6374880
(37) conv_silu [384, 40, 40] [256, 40, 40] 6474208
(38) conv_silu [256, 40, 40] [256, 20, 20] 7065056
(39) route: 38, 22 - [768, 20, 20] -
(40) conv_silu [768, 20, 20] [512, 20, 20] 7460320
(41) c2f_silu [512, 20, 20] [768, 20, 20] 8642016
(42) conv_silu [768, 20, 20] [512, 20, 20] 9037280
(43) route: 32 - [128, 80, 80] -
(44) conv_silu [128, 80, 80] [128, 80, 80] 9185248
(45) conv_silu [128, 80, 80] [128, 80, 80] 9333216
(46) conv_linear [128, 80, 80] [80, 80, 80] 9343536
(47) route: 43 - [128, 80, 80] -
(48) conv_silu [128, 80, 80] [64, 80, 80] 9417520
(49) conv_silu [64, 80, 80] [64, 80, 80] 9454640
(50) conv_linear [64, 80, 80] [64, 80, 80] 9458800
(51) route: 50, 46 - [144, 80, 80] -
(52) shuffle [144, 80, 80] [144, 6400] -
(53) route: 37 - [256, 40, 40] -
(54) conv_silu [256, 40, 40] [128, 40, 40] 9754224
(55) conv_silu [128, 40, 40] [128, 40, 40] 9902192
(56) conv_linear [128, 40, 40] [80, 40, 40] 9912512
(57) route: 53 - [256, 40, 40] -
(58) conv_silu [256, 40, 40] [64, 40, 40] 10060224
(59) conv_silu [64, 40, 40] [64, 40, 40] 10097344
(60) conv_linear [64, 40, 40] [64, 40, 40] 10101504
(61) route: 60, 56 - [144, 40, 40] -
(62) shuffle [144, 40, 40] [144, 1600] -
(63) route: 42 - [512, 20, 20] -
(64) conv_silu [512, 20, 20] [128, 20, 20] 10691840
(65) conv_silu [128, 20, 20] [128, 20, 20] 10839808
(66) conv_linear [128, 20, 20] [80, 20, 20] 10850128
(67) route: 63 - [512, 20, 20] -
(68) conv_silu [512, 20, 20] [64, 20, 20] 11145296
(69) conv_silu [64, 20, 20] [64, 20, 20] 11182416
(70) conv_linear [64, 20, 20] [64, 20, 20] 11186576
(71) route: 70, 66 - [144, 20, 20] -
(72) shuffle [144, 20, 20] [144, 400] -
(73) route: 52, 62, 72 - [144, 8400] -
(74) detect_v8 [144, 8400] [8400, 84] 11211776

Output YOLO blob names:
detect_v8_75

Total number of YOLO layers: 299

Building YOLO network complete
Building the TensorRT Engine

NOTE: letter_box is set in cfg file, make sure to set maintain-aspect-ratio=1 in config_infer file to get better accuracy

This is what I get.

I got this working with my files (output_video.mp4 with YOLOv8), after updating with more classes.
BUT running multiple media files locks it up:

1 file works fine with YOLOv8 inference.
2 files do not, with the start string:

python3 deepstream_test_3.py -i file:///your_video_source file:///your_video_source

The current situation is as follows, is that right?
1. It's OK to just run our demo without any change.
2. When using your YOLOv8 model, it cannot detect the bbox.
3. When using more than one source with your YOLOv8 model, it gets stuck.
This seems to be a problem with your own model. Please generate the ONNX model by referring to the YOLOv8.md and try again.
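
(Per that guide, the export step is roughly python3 export_yoloV8.py -w yolov8s.pt --dynamic, run with the Ultralytics package installed; check YOLOv8.md for the exact script location and flags, as they may differ between repo versions.)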

  1. I can run the program with 1 video without changing the config file (I cannot see any box, but no errors/failures).
  2. I can run the program with 2 videos without changing the config file (I cannot see any box, but no errors/failures).
  3. I can run the program with my video with YOLOv8 (1 file): it detects boxes.
  4. I can NOT run the program with 2 videos with YOLOv8: then it freezes.

I will generate the ONNX model according to the link.

I tried it and got everything to install properly.
I am running deepstream-test3 with my output_video.mp4 and the config_infer_primary_yoloV8.txt config file.
I also copy over and use the labels.txt and the yolov8s.onnx
(the ONNX export works as well).
I run this with 1 file = no problem.
I run this with 2 files = error.

Start string:

python3 deepstream_test_3.py -i file:///home/xxx/Development/output_video.mp4 file:///home/xxx/Development/output_video.mp4

Error message:

Now playing…
0 : file:///home/aiadmin/Development/output_video.mp4
1 : file:///home/aiadmin/Development/output_video.mp4
Starting pipeline

WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
Deserialize yoloLayer plugin: yolo
0:00:05.149844034 12542 0x18e5ae90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/aiadmin/Development/deepstream-test3/model_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT data 3x640x640
1 OUTPUT kFLOAT num_detections 1
2 OUTPUT kFLOAT detection_boxes 8400x4
3 OUTPUT kFLOAT detection_scores 8400
4 OUTPUT kFLOAT detection_classes 8400

0:00:05.361970720 12542 0x18e5ae90 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1841> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:05.362035233 12542 0x18e5ae90 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2018> [UID = 1]: deserialized backend context :/home/aiadmin/Development/deepstream-test3/model_b1_gpu0_fp32.engine failed to match config params, trying rebuild
0:00:05.380831253 12542 0x18e5ae90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
YOLO config file or weights file is not specified

ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:07.904735174 12542 0x18e5ae90 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:08.124524762 12542 0x18e5ae90 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:08.124592091 12542 0x18e5ae90 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:08.124639035 12542 0x18e5ae90 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:08.124654300 12542 0x18e5ae90 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Config file path: config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

**PERF: {'stream0': 0.0, 'stream1': 0.0}

Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(888): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

Config File:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#force-implicit-batch-dim=1
#workspace-size=1000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
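
(Note the warning above: "Backend has maxBatchSize 1 whereas 2 has been requested." deepstream_test_3.py overrides batch-size with the number of sources, so with two inputs nvinfer needs a batch-2 engine; the serialized model_b1 engine cannot match, a rebuild is attempted, and the rebuild fails here because the custom engine-create path does not pick up the ONNX file, which is consistent with the library-update fix below. A hedged adjustment, assuming DeepStream-Yolo's model_b<batch> engine naming convention:)

[property]
# match the number of sources and point at the batch-2 engine the
# rebuild will produce (assumption: the b<batch> naming convention)
batch-size=2
model-engine-file=model_b2_gpu0_fp32.engine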

The problem is solved:

  1. I converted my YOLOv8 model to ONNX format.
  2. I made sure to remove the engine files so they would be recreated.
  3. I updated the libraries from Ultralytics and marcoslucianops; I might have had an old version that did not support ONNX.
  4. I made sure that force-implicit-batch-dim=1 is NOT active (I had tried it turned on). So it should stay commented out in the config file:

#force-implicit-batch-dim=1

The only problem I have now is that it seems to re-create the engine file every time I start; it waits a long time at the TensorRT loading step…

That problem was solved too: I was referencing the wrong engine file in the config file.
