Here is the full error message:
[executionContext.cpp::enqueueV2::520] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueV2::520, condition: !mEngine.hasImplicitBatchDimension())
My project is based on the DeepStream-Yolo repo at commit 68f762d (GitHub - marcoslucianops/DeepStream-Yolo at 68f762d5bdeae7ac3458529bfe6fed72714336ca).
I generated a YOLOv4 INT8 engine and used TensorRT-For-YOLO-Series/trt.py (main · Linaom1214/TensorRT-For-YOLO-Series · GitHub) to evaluate the engine model, but I got this error:
[executionContext.cpp::enqueueV2::520] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueV2::520, condition: !mEngine.hasImplicitBatchDimension())
This error is related to TensorRT. I searched but could not find a solution. I am using TensorRT 184.108.40.206 via Docker.
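For reference, this is roughly how I confirm that the engine was built with an implicit batch dimension (a sketch; "model.engine" stands in for my generated YOLOv4 INT8 engine file):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# "model.engine" is a placeholder for my serialized YOLOv4 INT8 engine
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
    # enqueueV2()/execute_async_v2() only accept engines built in explicit batch
    # mode, so this prints True for my engine, which matches the failing check
    print("implicit batch:", engine.has_implicit_batch_dimension)
```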
How can I fix this? Thanks.
Please share a minimal repro ONNX model for better debugging.
I also encountered a similar situation. Problem with:
Error response from daemon: login attempt to https://nvcr.io/v2/ failed with status: 502 Bad Gateway
I have informed the team that manages the site.
Please stand by as we work to resolve this.
Thanks for your patience.
Thanks, it seems my engine was built with an implicit batch dimension (batch size = 1). When running inference, I used execute_async_v2() and it did not work.
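Roughly the call I was making (a sketch, not my exact code; `context`, `bindings` and `stream` are placeholders set up the same way as in trt.py):

```python
# execute_async_v2() maps to enqueueV2(), which rejects engines built with an
# implicit batch dimension - this line raises the error from my first post
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
```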
Are you still facing this issue?
Please share a minimal repro ONNX model for better debugging.
I solved the problem by using execute_async(); it works, but the output of the engine model does not seem correct.
I debugged my inference code and noticed that the output values are very small. The output has length 4, and the predictions should be the class indices of my model (from 0 to 79), but I get values like 4.2e-42 instead. These look like uninitialized values from the system.
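For reference, this is roughly the inference loop I am debugging (a sketch, not my exact code; the engine path, the preprocessed input, and the assumption that binding 0 is the input are placeholders). One thing I am checking is the device-to-host copy and the final synchronize, since reading the output buffers too early would explain values that look uninitialized:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def infer(engine_path, image):
    """Run one implicit-batch inference; engine_path and image are placeholders
    for my engine file and a preprocessed float32 input of the expected shape."""
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
        context = engine.create_execution_context()
        stream = cuda.Stream()

        # Allocate one host/device buffer pair per binding; for an implicit-batch
        # engine the binding shape does not include the batch dimension
        host_bufs, dev_bufs, bindings = [], [], []
        for i in range(engine.num_bindings):
            size = trt.volume(engine.get_binding_shape(i)) * engine.max_batch_size
            dtype = trt.nptype(engine.get_binding_dtype(i))
            host = cuda.pagelocked_empty(size, dtype)
            dev = cuda.mem_alloc(host.nbytes)
            host_bufs.append(host)
            dev_bufs.append(dev)
            bindings.append(int(dev))

        # Assumes binding 0 is the input and the remaining bindings are outputs
        np.copyto(host_bufs[0], image.ravel())
        cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
        context.execute_async(batch_size=1, bindings=bindings,
                              stream_handle=stream.handle)
        for i in range(1, engine.num_bindings):
            cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
        # Reading the host buffers before this synchronize returns whatever was
        # already in memory; tiny garbage values such as 4.2e-42 look like that
        stream.synchronize()
        return [h.copy() for h in host_bufs[1:]]
```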
Please share a minimal repro ONNX model, scripts/sample data, and output logs for better debugging.
Are you still facing the above issue?
If you still face the issue, please open a new post with complete error logs and a minimal repro (ONNX model and scripts) for better debugging.
Thanks. It wasn’t solved as such; I used a different commit and can now run inference.