Parameter check failed at: runtime/api/executionContext.cpp::enqueueV2::304, condition: !mEngine.hasImplicitBatchDimension()

Description

Unable to run inference with TensorRT using the FaceDetect model from NVIDIA NGC here.

Environment

TensorRT Version: 8.2.3.0
GPU Type: dGPU - GeForce RTX 3070
Nvidia Driver Version: 510.47.03
CUDA Version: 11.6
CUDNN Version: 8.3
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorrt:22.02-py3

Relevant Files

face_detector1.py (3.8 KB)

Steps To Reproduce

Command used to convert the FaceDetect .etlt model from NVIDIA NGC to a TensorRT engine:
./tao-converter assets/face_detect/model.etlt -d 1,3,416,736 -k nvidia_tlt -b 1 -e assets/face_detect/facedetect_fp16.engine -t fp16

The inference code is attached. When running inference on the generated engine, I encountered the issue below:

[04/01/2022-18:18:17] [TRT] [E] 3: [executionContext.cpp::enqueueV2::304] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueV2::304, condition: !mEngine.hasImplicitBatchDimension()

I don't fully understand this error. I set the batch size to 1 while converting the model to an engine, and the input dimensions passed during inference are 1x3x416x736. Requesting support on this.
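
From the failed condition, it seems execute_async_v2 (enqueueV2) refuses engines that were built with an implicit batch dimension, which would point to a dispatch like the sketch below. This is a rough illustration against the TensorRT 8.x Python API, not my attached script; `context`, `engine`, `bindings`, and `stream` stand in for a typical PyCUDA-based setup.

```python
def run_inference(context, engine, bindings, stream):
    """Dispatch to the enqueue call that matches how the engine was built."""
    if engine.has_implicit_batch_dimension:
        # Implicit-batch engine: the batch size is given in the call itself.
        context.execute_async(batch_size=1, bindings=bindings,
                              stream_handle=stream.handle)
    else:
        # Explicit-batch engine: the batch is baked into the binding shapes,
        # so the v2 call takes no batch_size argument.
        context.execute_async_v2(bindings=bindings,
                                 stream_handle=stream.handle)
```

But I'm not sure which mode the tao-converter engine is supposed to be in, given that I passed -b 1.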

Thanks,
Arivarasan

Hi,

This looks like a TAO Toolkit-related issue. We will move this post to the TAO Toolkit forum.

Thanks!

Any leads on this?

Refer to Run PeopleNet with tensorrt - #21 by carlos.alvarez


FaceNet is based on the NVIDIA detectnet_v2 network. Officially, detectnet_v2 inference is available in the Triton apps: GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton
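
While you set that up, one way to confirm which mode your engine is in is to check it right after deserializing. A minimal sketch against the TensorRT 8.x Python API; the engine path is just the one from your tao-converter command:

```python
import tensorrt as trt

# Load the serialized engine produced by tao-converter.
logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

with open("assets/face_detect/facedetect_fp16.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# True means the engine is implicit-batch: use execute/execute_async
# (which take a batch_size argument) rather than the *_v2 variants.
print("implicit batch:", engine.has_implicit_batch_dimension)

for i in range(engine.num_bindings):
    # For implicit-batch engines, get_binding_shape omits the batch dim,
    # e.g. (3, 416, 736) rather than (1, 3, 416, 736).
    print(engine.get_binding_name(i), engine.get_binding_shape(i))
```

If has_implicit_batch_dimension prints True, execute_async_v2 will fail with exactly the error above, and execute_async with batch_size=1 is the matching call.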


This worked. Thanks!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.