Deploy yolo_v4 to DeepStream 7.0

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) T4
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) yolo_v4
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) I’ve installed the deepstream:7.0-samples-multiarch docker from the documentation
• Training spec file (If you have one, please share it here) not needed
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

I used the deepstream:7.0-samples-multiarch container and trained yolo_v4 with the default dataset, which gave me an .onnx file and a config file. I then added the necessary keys to the configuration file, such as onnx-file, labelfile-path, gpu-id, and gie-unique-id. But when it comes to deploying it in my docker container and following the steps described in GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream (branch ds7.0, which uses the prebuilt TensorRT 8.6.2 mentioned there, although my container’s dependencies were different; by the way, I added the DeepStream Python bindings to it using the DeepStream 7.0 installation guide), I get an error about incorrect output. It seems it can’t understand the yolo_v4 output.
Both runs get stuck after showing these messages.
Warning before creating the engine (it gets stuck here):
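For reference, the nvinfer [property] section I ended up with looks roughly like this (a minimal sketch; the paths, values, and class count below are placeholders rather than my exact file):

    [property]
    gpu-id=0
    # Placeholder paths - replace with the files produced by the TAO export step
    onnx-file=/path/to/yolov4_resnet18.onnx
    labelfile-path=/path/to/labels.txt
    batch-size=1
    # 0=FP32, 1=INT8, 2=FP16
    network-mode=2
    num-detected-classes=<number of classes in the training spec>
    gie-unique-id=1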

Error after closing and re-running with the cached engine created by the first run:
could not find output coverage layer for parsing object
Failed to parse bboxes

Another interesting thing: I didn’t change anything, but now it no longer creates the engine file at all.

The error is related to DeepStream. Moving to the DeepStream forum for tracking.


I even tried to build TensorRT OSS v8.6, but I got the same errors and warnings!

  1. Could you share the whole log? On the first run, the app takes a long time to generate the TensorRT engine.
  2. About “could not find output coverage layer for parsing object”: if parse-bbox-func-name and custom-lib-path are not set in the nvinfer cfg, the app uses the resnet postprocessing function to process the inference results. Please refer to this cfg and code to modify the code for your own model (a sketch of the two keys follows this list).
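To illustrate, the two keys would look roughly like this, assuming the custom parser library built from the post_processor folder of deepstream_tao_apps (the library path below is an assumption; point it at your own build):

    # Assumed: this library is produced by building the post_processor folder
    # of deepstream_tao_apps; adjust the path to match your build location.
    parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
    custom-lib-path=/path/to/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so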

I’m using a model trained with yolo_v4. Does that require changing the cfg? I don’t have any .so file; I only have the .onnx file and the config file generated by tao deploy, with a few changes I added to it, such as gpu-id, gie-unique-id, the onnx-file path, and the label file.

The default postprocessing function is for the resnet model, not for yolov4. What are the output layers of your model? Please check whether the code above is applicable to your model; if not, please modify the code to customize it. Building post_processor will create an .so library.

I’m using TAO yolo_v4 with a resnet18 backbone. Do I need to do this? Can you point me to documentation for this step?

As the comment in nvdsinfer_context_impl_output_parsing.cpp shows, the default postprocessing function is for resnet10, whose layer names include bbox or cov. Please refer to deepstream-test1; the model in deepstream-test1 does not need a custom postprocessing function.

My issue is TensorRT OSS! I can’t even run deepstream-test3 after using the prebuilt TensorRT OSS.

DeepStream uses TensorRT to do inference. You don’t need a prebuilt TensorRT OSS because the container already includes the TensorRT libraries. The model in deepstream-test3 does not need a custom postprocessing function either. Based on my analysis above, please do the following checks.

  1. What are the output layers of your model? You can use Netron to check. Please check whether the output names include bbox or cov (see the sketch after this list).
  2. If they do not (step 1), you need custom postprocessing. Please check whether this code is applicable to your model; this code is open source.
  3. If it is not (step 2), please modify NvDsInferParseCustomBatchedNMSTLT to customize it for your model.
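For reference, a TAO yolo_v4 export typically ends in NMS-style output layers rather than the bbox/cov layers the default resnet parser expects; assuming that layout (verify the real names in Netron), the matching nvinfer key would be roughly:

    # Assumed output layers of a TAO yolo_v4 export (confirm in Netron):
    # BatchedNMS (counts), BatchedNMS_1 (boxes), BatchedNMS_2 (scores), BatchedNMS_3 (classes)
    output-blob-names=BatchedNMS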

So I will do what you said, but before doing that: from your analysis, did you mean that I don’t need TensorRT OSS or an .so library for a yolo_v4 model trained with TAO in order to deploy it in DeepStream?

TensorRT OSS is not needed in the DeepStream docker container. You may run deepstream-test1 directly.

OK, I tried both the default deepstream-test3 model and my own TAO model. It works well with the default model in deepstream-test3, but when it comes to my own model in deepstream-test3, it gets stuck for a very long time; I closed it because it took more than 20 minutes.

So it seems I have to create an .so file for it, based on your messages, but I have two questions.
I looked at the TAO documentation and have a question!
First, please see this documentation: https://docs.nvidia.com/tao/tao-toolkit/text/cv_finetuning/tensorflow_1/object_detection/yolo_v4.html#deploying-to-deepstream. Do you really think I don’t need TensorRT OSS in the docker container?

Second, can you give me some guidance for Netron, which you mentioned?

  1. Only the first run takes a long time, because the app has to generate the TensorRT engine. After the first run, you can set model-engine-file to the path of the engine generated in that run; the app will then load the engine directly instead of generating a new one (see the sketch after this list).
  2. From the link, I did not see that TensorRT OSS is needed in the DeepStream docker container. Could you share the text? According to the DeepStream docker doc, TensorRT OSS does not need to be installed.
  3. Netron is a third-party tool; it can open ONNX models.
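For item 1, the relevant keys would look roughly like this (paths are placeholders; DeepStream usually writes the generated engine next to the model with a name such as <model>.onnx_b1_gpu0_fp16.engine, so reuse that path):

    # Placeholder paths; point model-engine-file at the engine generated on the first run
    onnx-file=/path/to/yolov4_resnet18.onnx
    model-engine-file=/path/to/yolov4_resnet18.onnx_b1_gpu0_fp16.engine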

It doesn’t say whether or not TensorRT OSS is in the docker container, but it says it’s required!

As the doc " As of 5.0.0, tao model converter is deprecated. This method may not be available in the future releases." shown, you can use this cmd to check if the engine can be generated without tensoross .

This is the error after the engine file is created:

  1. Could you use gdb to get a crash stack?
  2. If the engine was generated, please modify the cfg and NvDsInferParseCustomBatchedNMSTLT to customize them for your model. You can add logs in NvDsInferParseCustomBatchedNMSTLT to debug.

And regarding your reply: I used the “tao model yolo_v4 export …” command, as described in the documents and ipynb samples, to create the ONNX file, and this is what the samples say:

But the yolo document (on the website that I sent you as an image) says TensorRT OSS is necessary: