@ersheng thanks! But my question was mainly about compatibility with yolov5, which was released recently!
YoloV5 may have similar problems too.
However, we have not thoroughly studied YoloV5's compatibility yet.
We may add YoloV5 into our agenda soon.
Hi @ersheng. Since DeepStream supports TensorRT, and we implemented a CUDA kernel for yolov5 that works fine in TensorRT, why is that CUDA kernel not working in DeepStream when DS uses the same TRT? I mean, what exactly is causing the problem? @CJR says here that it should work in DS. Any thoughts on this?
The highest Yolo version that the CUDA kernel in /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/ can support is YoloV3.
We are trying to embed the Yolo layer into the TensorRT engine before deploying to DeepStream, which means the Yolo CUDA kernel in DeepStream is no longer used. You can have a look at my previous post here: YoloV4 Solution.
YoloV5 may have a similar problem, and we will work on applying the same solution to it. In the meantime, you can adapt this YoloV4 solution to your YoloV5 problem yourself.
@ersheng this might be a dumb question! I understand that the highest Yolo version the CUDA kernel in /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/ can support is YoloV3. But I am not using that kernel to implement yolov5; I am using a different kernel. So are you saying that even a different CUDA kernel implementation that works for yolov5 in TRT would not work in DeepStream?
However, I can give you a suggestion that follows a different workflow:
Pytorch --> ONNX --> TRT
Converting to ONNX first is the more standardized way to handle YoloV5, per the official page: https://github.com/ultralytics/yolov5.
You can choose either way to solve your problem and I hope they do not clash with each other.
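For the Pytorch -> ONNX -> TRT route on YoloV5, the export script shipped in the ultralytics repo can be used. This is only a sketch: the script path, weight file, flags, and output names below are assumptions based on that repo's conventions and may differ in your checkout.

```shell
# Assumed workflow sketch: export YoloV5 weights to ONNX, then build a
# TensorRT engine with trtexec. Names and flags may differ per version.
python models/export.py --weights yolov5s.pt --img 640 --batch 1

# Build an FP16 engine from the exported ONNX
trtexec --onnx=yolov5s.onnx --explicitBatch \
        --saveEngine=yolov5s_fp16.engine --workspace=4096 --fp16
```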
@ersheng I’ll try it both ways since @CJR is busy/unavailable at the moment. I’ll go with the Pytorch -> ONNX -> TRT approach. It would be great if you could help out with the custom parsing functions and config files for a smooth implementation of yolov5 in TRT!
@ersheng Thanks a lot. I tried this way and it works!
But there seems to be something wrong with the results,
and it returns a warning:
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Explicit batch network detected and batch size specified, use enqueue without batch size instead.
I changed the input size to width=320, height=512,
and got the ONNX from Darknet (not Pytorch), setting batchsize=1 with these commands:
python demo_darknet2onnx.py yolov4.cfg yolov4.weights ./data/dog.jpg 1
trtexec --onnx=yolov4_1_3_512_320.onnx --explicitBatch --saveEngine=yolov4_1_3_320_512_fp16.engine --workspace=4096 --fp16
When I set batchsize=4, it gives errors and quits. Does the batchsize have to be 1 and the input size 320*512? Must I use the Pytorch model? Can the workflow be Darknet -> ONNX -> TensorRT?
For the warning
I agree that this warning is annoying, but you can simply ignore it for now.
It is a historical issue remaining from backward compatibility with Caffe and UFF models.
It will be removed in later TensorRT versions.
For the error
In which step did the program quit with an error? As far as I know, the batch size should be consistent throughout the workflow:
Darknet -> ONNX (batchsize=4) -> TensorRT (batchsize=4) -> DS pipeline (batchsize=4)
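Concretely, keeping the batch size consistent with the commands used earlier in this thread might look like the following sketch; the "4" argument and the generated file names are assumptions following the naming pattern of the earlier commands and may differ in your setup.

```shell
# Sketch: same conversion as before, but with batch size 4 everywhere.
python demo_darknet2onnx.py yolov4.cfg yolov4.weights ./data/dog.jpg 4
trtexec --onnx=yolov4_4_3_512_320.onnx --explicitBatch \
        --saveEngine=yolov4_4_3_320_512_fp16.engine --workspace=4096 --fp16
```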
Have you configured the batch size of both [streammux] and [primary-gie]?
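For reference, the relevant batch-size entries in a deepstream-app config might look like the sketch below; a real config needs many more keys, which are omitted here.

```ini
# Sketch: both values should match the engine's batch size.
[streammux]
batch-size=4

[primary-gie]
batch-size=4
```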
For the input ratio
I think the model input ratio should agree with the original image ratio, or at least be close to it.
For example, if your input image is 1080 * 1920, then 320 * 512 or 320 * 608 may be a good ratio;
if your input image is 1280 * 1280, then 416 * 416, 512 * 512, or 608 * 608 may be recommended for the model.
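As a quick way to check which candidate input size is the closest aspect-ratio match for a given image, a small helper like this can be used; it is hypothetical and not part of DeepStream or TensorRT.

```python
# Hypothetical helper: pick the (width, height) candidate whose aspect
# ratio is closest to the source image's aspect ratio.
def closest_input_size(img_w, img_h, candidates):
    target = img_w / img_h
    return min(candidates, key=lambda wh: abs(wh[0] / wh[1] - target))

# For 1920x1080 frames, 512x320 is a closer ratio match than 608x608
print(closest_input_size(1920, 1080, [(512, 320), (608, 608)]))  # (512, 320)
```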
There is a config option named maintain-aspect-ratio. With maintain-aspect-ratio=1, the image will be padded so its ratio stays consistent with the model input; otherwise, the image will be stretched vertically or horizontally if its ratio does not match the model input.
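In the nvinfer config file this option lives under [property]; a minimal sketch, with all other required keys omitted:

```ini
# Sketch: pad instead of stretch when image and model ratios differ.
[property]
maintain-aspect-ratio=1
```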
DarkNet or Pytorch
Convert from Darknet to ONNX if you just want to use the official YoloV4 pretrained model.
Convert from Pytorch to ONNX if you want to use a model trained in Pytorch.
Hi @jiejing_ma @ersheng
I have implemented Yolov3 with DeepStream, but I had a failed attempt with Yolov4.
Can you please share your workflow and some links you referred to?
I wish to reproduce the results you obtained in the screenshot you shared. Please help me with a summary, workflow, or reference links.
Follow this guideline:
YoloV4: DarkNet or Pytorch -> ONNX -> TensorRT -> DeepStream
trtexec --onnx=yolov4_1_3_608_608.onnx --explicitBatch --saveEngine=yolov4_1_3_608_608_fp16.engine --workspace=4096 --fp16
I get the following error at the end:
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:1: Invalid control characters encountered in text.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:12: Invalid control characters encountered in text.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:14: Message type “onnx2trt_onnx.ModelProto” has no field named “pytorch”.
Failed to parse ONNX model from fileyolov4_1_3_608_608.onnx
[07/09/2020-10:16:02] [E] [TRT] Network must have at least one output
[07/09/2020-10:16:02] [E] [TRT] Network validation failed.
[07/09/2020-10:16:02] [E] Engine creation failed
[07/09/2020-10:16:02] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --onnx=yolov4_1_3_608_608.onnx --explicitBatch --saveEngine=yolov4_1_3_608_608_fp16.engine --workspace=4096 --fp16
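Errors like "Invalid control characters" and "no field named 'pytorch'" usually mean the file trtexec was given is not a valid ONNX protobuf at all. One common cause is a PyTorch checkpoint (which recent PyTorch versions save as a ZIP archive) ending up under a .onnx name. A quick, hypothetical sanity check using only the standard library:

```python
import zipfile

# Hypothetical check: torch.save() checkpoints from recent PyTorch are
# ZIP archives, while a real ONNX model is a raw protobuf file, so
# is_zipfile() can flag a mis-saved checkpoint posing as a .onnx file.
def looks_like_torch_checkpoint(path):
    return zipfile.is_zipfile(path)
```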
What versions of Pytorch and TensorRT are you using?
I did configure the batch size of both [streammux] and [primary-gie]. I will try again and upload the error info later. Thanks
Pytorch 1.4.0 for TensorRT 7.0 and higher
Pytorch 1.5.0 and 1.6.0 for TensorRT 7.1.2 and higher
Recommended TensorRT versions: 7.0, 7.1
@ersheng I followed your yolov4 repo to make a TRT engine for yolov5, which was built successfully. I compared the output with the pytorch model and they are the same. But when I hook it up in DeepStream I am not getting any boxes. I have uploaded the code and relevant files here. Let me know if you have any pointers!
Please open a new topic for your issue. Thanks.