Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc): TAO on Ubuntu; Triton running from the Docker image triton 23.07-py3
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Yolo_v4
• TLT Version (Please run "tlt info --verbose" and share "docker_tag" here): TAO 5.0
• Training spec file (if you have one, please share it here): the Jupyter notebook example
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
Hi,
I trained the example model on the KITTI dataset, pruned it, and retrained it. I then followed a few more steps because I wanted to benchmark different approaches, so at the end I have:
- yolov4_resnet18_epoch_080.onnx
- trt.engine
- trt.engine.fp16
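
For reference, the model repository follows the standard Triton layout (config.pbtxt next to a numbered version folder):

models/
├── test1/
│   ├── config.pbtxt
│   └── 1/
│       └── trt.engine
└── test2/
    ├── config.pbtxt
    └── 1/
        └── model.onnx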
I am trying to load these models into an existing Triton Inference Server that already serves some other models, but each attempt fails with a different error.
Are these models supposed to be loadable directly by Triton? The config.pbtxt for each attempt:
name: "test1"
platform: "tensorrt_plan"
max_batch_size: 1
default_model_filename: "trt.engine"
The model sits in version folder 1 and the file is trt.engine. The error says:
UNAVAILABLE: Internal: unable to load plan file to auto complete config: /models/test1/1/trt.engine
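
As a sanity check, here is a minimal sketch of how the engine could be deserialized outside Triton, assuming the tensorrt Python wheel is available inside the Triton container (the path is the one from the error above). My understanding is that a plan file only loads with the exact TensorRT version that built it, so a failure here would point at a version mismatch between the TAO and Triton containers:

import tensorrt as trt

# Try to deserialize the plan file with the TensorRT runtime that Triton links against.
logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")  # register TensorRT plugins (YOLOv4 engines use BatchedNMS)
runtime = trt.Runtime(logger)
with open("/models/test1/1/trt.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
print("deserialized OK" if engine is not None else "failed to deserialize")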
name: "test2"
platform: "onnxruntime_onnx"
max_batch_size: 1
The model sits in version folder 1 and the file is model.onnx. The error says:
UNAVAILABLE: Internal: onnx runtime error 10: Load model from /models/test1/1/model.onnx failed:This is an invalid model. In Node, ("BatchedNMS_N", BatchedNMSDynamic_TRT, "", -1) : ("box": tensor(float),"cls": tensor(float),) -> ("BatchedNMS": tensor(int32),"BatchedNMS_1": tensor(float),"BatchedNMS_2": tensor(float),"BatchedNMS_3": tensor(float),) , Error No Op registered for BatchedNMSDynamic_TRT with domain_version of 12
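
To see what the error is complaining about, the exported ONNX can be inspected with the onnx Python package (a minimal sketch; the path is the one from the log above):

import onnx

# List op types that are not in the standard ONNX domain.
model = onnx.load("/models/test1/1/model.onnx")
custom_ops = {node.op_type for node in model.graph.node
              if node.domain not in ("", "ai.onnx")}
print(custom_ops)  # the log above suggests this includes BatchedNMSDynamic_TRT

As far as I understand, BatchedNMSDynamic_TRT is a TensorRT plugin op rather than a standard ONNX operator, which is why onnxruntime reports that no op is registered for it.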
What am I doing wrong?