Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc) T4
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) yolo_v4
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) I’ve installed the deepstream:7.0-samples-multiarch docker from the documentation
• Training spec file(If have, please share here) not needed
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
I’ve used deepstream:7.0-samples-multiarch, trained yolo_v4 with the default dataset, and got an ONNX file along with a config file. I then added the necessary keys, such as onnx-file, labelfile-path, gpu-id and gie-unique-id, to the configuration file. But when I deployed it in my docker and followed the steps in GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream (branch ds7.0, which uses the prebuilt TensorRT 8.6.2 mentioned there, although my docker dependencies were different; by the way, I also added the DeepStream Python bindings using the DeepStream 7.0 installation guide), I got an error about incorrect output. It seems the app cannot understand the yolo_v4 output.
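Roughly, the keys I added to the [property] group of the nvinfer configuration look like the sketch below (the file names here are just placeholders, not my real paths):

[property]
gpu-id=0
onnx-file=model.onnx
labelfile-path=labels.txt
gie-unique-id=1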
Both of them get stuck after showing these messages:
Warning before creating the engine (it gets stuck here):
Error after closing and reusing the cached engine created in the first run:
could not find output coverage layer for parsing object
Failed to parse bboxes
Could you share the whole log? On the first run, the app will take a long time to generate the TRT engine.
About “could not find output coverage layer for parsing object”: if parse-bbox-func-name and custom-lib-path are not set in the nvinfer cfg, the app will use the default resnet postprocessing function to process the inference results. Please refer to this cfg and code to modify the code for your own model.
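As a minimal sketch (the library path below is an assumption based on the post_processor build in deepstream_tao_apps; adjust it to wherever your .so is actually built), the two keys would look like:

parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so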
I’m using a model trained with yolo_v4; does it need a change to the cfg? I don’t have any .so file. I only have the generated .onnx and the config file from tao deploy, with some changes I added to it, such as gpu-id, gie-unique-id, onnx-file and the label file.
The default postprocessing function is for the resnet model, not for yolov4. What are the output layers of your model? Please check if the code above is applicable to your model; if not, please modify the code to customize it. Building post_processor will create an .so library.
As the comment in nvdsinfer_context_impl_output_parsing.cpp shows, that postprocessing function is for resnet10, whose layer names include bbox or cov. Please refer to deepstream-test1; the model in deepstream-test1 does not need a custom postprocessing function.
DeepStream uses TensorRT to do inference. You don’t need prebuilt TensorRT OSS because the container already includes the TensorRT libs. The model in deepstream-test3 does not need a custom postprocessing function either. Based on my analysis above, please do the following checks.
1. What are the output layers of your model? You can use Netron to check (an illustrative example follows this list). Please check if the output names include bbox or cov.
2. If not in step 1, you need to use a custom postprocessing function. Please check if this code is applicable to your model; this code is open source.
3. If not in step 2, please modify NvDsInferParseCustomBatchedNMSTLT to customize it for your model.
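For illustration only (these output names are an assumption; confirm the real names with Netron for your .onnx), a TAO yolo_v4 model exported with the BatchedNMS plugin typically exposes four outputs, which would be listed in the cfg as:

output-blob-names=BatchedNMS;BatchedNMS_1;BatchedNMS_2;BatchedNMS_3

If Netron shows different names, use those names instead.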
So I will do what you said, but before doing that: from your analysis, did you mean I don’t need TensorRT OSS or an .so lib for a yolo_v4 model trained with TAO in order to deploy it in DeepStream?
OK, I tried both the default deepstream-test3 model and my own TAO model. It works well with the default deepstream-test3 model, but when it comes to my own model in deepstream-test3 it gets stuck for a very long time; I closed it because it took more than 20 minutes.
Only the first run takes a long time, to generate the TRT engine. After the first run, you can set model-engine-file to the path of the engine generated in the first run; then the app will load that engine directly instead of generating a new one.
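For example (the engine file name below is only illustrative; use the exact path that nvinfer prints when it serializes the engine during the first run):

onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp16.engine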
From the link, I did not see that TensorRT OSS is needed in the DS docker. Could you share the text? In the DeepStream docker doc, TensorRT OSS is not required to be installed.
Netron is a third-party tool; it can open ONNX models.
As the doc " As of 5.0.0, tao model converter is deprecated. This method may not be available in the future releases." shown, you can use this cmd to check if the engine can be generated without tensoross .
if the engine was generated, please modify cfg and NvDsInferParseCustomBatchedNMSTLT to customize. you can add logs in NvDsInferParseCustomBatchedNMSTLT to debug.
In response to your reply: I used the “tao model yolo_v4 export …” command as described in the documents and ipynb samples to create the ONNX file, and this is what it says in the samples:
But the yolo document (on the website that I sent you as an image) said TensorRT OSS is necessary: