YOLOv4 engine generated from Darknet in DeepStream cannot run properly with the TensorRT Python API?

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks.

In DeepStream it works fine, but with the TensorRT Python API it does not seem to work properly.

Based on the checking result above, this Darknet YOLOv4 model does not support dynamic batch. execute_async_v2 is only for explicit-batch ("no implicit batch") engines, so only execute_async is available for this model.
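
For reference, here is a minimal sketch of running an implicit-batch engine with execute_async() through the TensorRT Python API. The engine path, batch size, and input preprocessing are placeholders, and the buffer handling follows the pattern used in the official TensorRT Python samples:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
ENGINE_PATH = "yolov4.engine"   # placeholder path to the serialized engine
BATCH_SIZE = 1

with open(ENGINE_PATH, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate page-locked host buffers and device buffers for every binding.
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * BATCH_SIZE
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    dev_mem = cuda.mem_alloc(host_mem.nbytes)
    host_bufs.append(host_mem)
    dev_bufs.append(dev_mem)
    bindings.append(int(dev_mem))

# input_image: placeholder for a preprocessed CHW float32 array that matches
# the input binding shape of the engine.
input_image = np.zeros(host_bufs[0].shape, dtype=host_bufs[0].dtype)
np.copyto(host_bufs[0], input_image.ravel())

cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async(batch_size=BATCH_SIZE, bindings=bindings,
                      stream_handle=stream.handle)
for h, d in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(h, d, stream)
stream.synchronize()  # outputs in host_bufs[1:] are only valid after this
```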

Yes, that is true, and I am using execute_async(). But my issue is that the engine's output when running with this method is abnormal (random values that look like uninitialized memory), as shown above.
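
One thing worth ruling out when the output looks like uninitialized memory: make sure the device-to-host copies and the stream synchronization happen before the host buffers are read. As a cross-check, the synchronous execute() call can be compared against the async path (reusing the hypothetical host_bufs/dev_bufs/bindings/context names from the sketch above):

```python
import numpy as np
import pycuda.driver as cuda

# Blocking input copy, synchronous inference, blocking output copies.
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute(batch_size=1, bindings=bindings)
for h, d in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh(h, d)
print([float(np.abs(h).max()) for h in host_bufs[1:]])
# Plausible magnitudes here but garbage from the async path usually points to
# a missing memcpy_dtoh_async or stream.synchronize() before reading outputs.
```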

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

  1. Noticing that the master code works fine, you can use the master code.
  2. Noticing that there are big changes between the two versions, especially in the engine-generation function, you can check the code and consult the author, since it is custom code. Please also refer to the DeepStream YOLOv4 sample: yolo_deepstream, YoloV4 + dspreprocess.
