Model Type: ONNX (converted from PyTorch)
Environment: TensorRT Docker image, Linux OS
TensorRT Version: v8.2.0.6
GPU Type: Quadro P2000
Nvidia Driver Version: 510.73.05
CUDA Version: 11.4
Operating System + Version: Ubuntu 18.04
I am working with the TensorRT Docker container on Ubuntu.
I am trying to implement the calibrator in a C++ API based application.
The GPU on my PC doesn't have DLA, so I am getting a segmentation fault with the API call below:
Try using IInt8EntropyCalibrator instead of IInt8EntropyCalibrator2. IInt8EntropyCalibrator2 is required for DLA, but I am not sure whether it works only on DLA. See here.
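For reference, implementing either calibrator means subclassing the interface and overriding the same four methods. Below is a minimal sketch of such a subclass; the class name, cache file name, and the `loadNextBatch` helper are hypothetical placeholders, and the batch-loading/preprocessing logic is omitted.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical minimal INT8 calibrator. To use IInt8EntropyCalibrator
// instead, only the base class name below needs to change; the overridden
// methods are identical, since both inherit from IInt8Calibrator.
class MyEntropyCalibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    MyEntropyCalibrator(int batchSize, size_t inputBytes, std::string cacheFile)
        : mBatchSize(batchSize), mInputBytes(inputBytes), mCacheFile(std::move(cacheFile))
    {
        cudaMalloc(&mDeviceInput, mInputBytes);
    }

    ~MyEntropyCalibrator() override { cudaFree(mDeviceInput); }

    int32_t getBatchSize() const noexcept override { return mBatchSize; }

    bool getBatch(void* bindings[], const char* names[], int32_t nbBindings) noexcept override
    {
        // Return false once the calibration data set is exhausted.
        if (!loadNextBatch())  // hypothetical helper: cudaMemcpy's one
            return false;      // preprocessed batch into mDeviceInput
        bindings[0] = mDeviceInput;
        return true;
    }

    const void* readCalibrationCache(size_t& length) noexcept override
    {
        // Reuse the scale factors from a previous run if a cache exists.
        mCache.clear();
        std::ifstream in(mCacheFile, std::ios::binary);
        if (in)
            mCache.assign(std::istreambuf_iterator<char>(in),
                          std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* cache, size_t length) noexcept override
    {
        std::ofstream out(mCacheFile, std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    bool loadNextBatch();  // hypothetical: fills mDeviceInput from your data set
    int mBatchSize;
    size_t mInputBytes;
    std::string mCacheFile;
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
};
```

Note that if the segmentation fault happens at the API call itself, it is worth checking that the calibrator object outlives the builder call and that the device buffer passed back in `getBatch` was actually allocated.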
Thanks !! 😊
By the way, I need help here too; if you have any ideas, that would be great. I am basically doing something similar to you in C++; please have a look.
I am working with a Docker setup on my laptop, which has a Quadro P2000 GPU (no DLA present).
Since both IInt8EntropyCalibrator and IInt8EntropyCalibrator2 inherit from the same base class and have the same member functions, I get a segmentation fault at the API call itself. The class for IInt8EntropyCalibrator is even missing from my header file, so I created a similar class modeled on IInt8EntropyCalibrator2, but it didn't succeed.
If you are running your implemented calibrator in Docker, let me know.
I am new to this stuff, but I think the calibrator classes are defined in NvInfer.h here, if I am not mistaken. I am working on a Jetson AGX Orin and would like to run an ONNX model and quantize it with INT8 calibration using the C++ API, but I don't know how to do it; it is my first time touching TensorRT. I am still searching the Internet. If you have figured out how to do it and would like to help me, that would be great 😊, and of course I will help you too if I can 😁.
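In case it helps to see the overall flow: building an INT8 engine from an ONNX file with the C++ API roughly means parsing the model, enabling the INT8 flag, attaching a calibrator, and serializing the result. The sketch below assumes TensorRT 8.x; "model.onnx", the output file name, and the `MyEntropyCalibrator` class are placeholders, and error handling is abbreviated.

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

// Minimal logger required by the TensorRT builder and parser.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main()
{
    Logger logger;

    // Create the builder and an explicit-batch network (required for ONNX).
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(
        nvinfer1::createInferBuilder(logger));
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(
        builder->createNetworkV2(1U << static_cast<uint32_t>(
            nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));

    // Parse the ONNX model into the network definition.
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return 1;

    // Enable INT8 and attach a user-implemented calibrator
    // (hypothetical IInt8EntropyCalibrator2 subclass, not shown here).
    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(
        builder->createBuilderConfig());
    config->setFlag(nvinfer1::BuilderFlag::kINT8);
    // MyEntropyCalibrator calibrator(...);
    // config->setInt8Calibrator(&calibrator);

    // Build and save the serialized engine.
    auto serialized = std::unique_ptr<nvinfer1::IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    if (!serialized)
        return 1;
    std::ofstream out("model_int8.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return 0;
}
```

The calibrator is the only part you have to write yourself; it feeds preprocessed batches of representative input data to the builder during calibration.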
Please create a new post with the issue repro script/model.
Which platform are you using? If you're using a Jetson platform, we recommend creating a post in the Jetson-related forum to get better help.