Description
Hi. The official YOLOv5 documentation describes how to export a PyTorch model (.pt) to different formats for deployment (e.g. Jetson inference).
For DeepStream it points to DeepStream-Yolo by Marcos Luciano, which converts the PyTorch weights (.pt) into .cfg and .wts files readable by DeepStream; the engine file to be generated is then defined in config_infer_primary.txt.
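For reference, with the DeepStream-Yolo flow the model-related part of config_infer_primary.txt looks roughly like this (file names are placeholders for my own files):

custom-network-config=yolov5n.cfg
model-file=yolov5n.wts
model-engine-file=model_b4_gpu0_int8.engine

As far as I understand, DeepStream builds the engine from the .cfg/.wts pair on the first run and serializes it to the model-engine-file path.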
Going back to the YOLOv5 export, it is also possible to export the model directly to a TensorRT engine file.
My question is: how should config_infer_primary.txt be configured in this case, given that there is no custom-network-config (.cfg path) and no model-file (.wts path), only an engine file?
I tried commenting out those lines, but it still does not work.
I also tried leaving model-engine-file uncommented and pointing it at my exported engine, but DeepStream overwrites it. (The full config I used is under Steps To Reproduce below.)
Environment
TensorRT Version: 8.0.1.6
GPU Type: Jetson Xavier NX
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04.6 LTS
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Relevant Files
Steps To Reproduce
Clone the YOLOv5 repo, install the requirements, and export a pretrained model to a TensorRT engine file:
git clone https://github.com/ultralytics/yolov5.git
cd yolov5; pip install -r requirements.txt
python export.py --weights yolov5n.pt --include engine --data data/coco.yaml --img 540 960 --batch-size 4 --device 0 --int8 --dynamic --nms --topk-all 100 --iou-thres 0.45 --conf-thres 0.25
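Assuming --include engine, export.py writes the serialized engine next to the weights (yolov5n.engine in my case), and I copy it into the DeepStream-Yolo folder next to the config (the paths below are just my local layout):

cp yolov5n.engine /path/to/DeepStream-Yolo/
cd /path/to/DeepStream-Yolo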
config_infer_primary.txt
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
#custom-network-config=yolov5s.cfg
#model-file=yolov5s.wts
model-engine-file=model_b4_gpu0_int8.engine
int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=4
network-mode=1
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
symmetric-padding=1
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
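For completeness, I then launch with the stock DeepStream-Yolo app config that references this file (the app config name is the repo default; adjust if yours differs):

deepstream-app -c deepstream_app_config.txt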