Please provide the following info (tick the boxes after creating this topic):
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
Target Operating System
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
SDK Manager Version
Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
I am creating a pipeline to crop+resize+normalize+quantize images.
I understood that DataConditioner is to be used to perform normalization on the image using the mean and stddev. My current mean and stddev values are the ImageNet statistics. I have multiplied the mean and stddev by 255 as required by the API. The output image is in floating point.
I failed to understand which API to use to quantize the floating-point data (the DataConditioner output) to int8 using a scale and zero point.
Could you let me know?
The DW DNN module has an API to load a TRT model. Could you check using the --int8 flag while generating the TRT model with the tensorRT_optimization tool?
You can choose the detection region in the image, and the input image can be resized automatically by DataConditioner. Please take a look at the object detector tracker sample.
Dear @SivaRamaKrishnaNV ,
Thanks for your message. I looked into the object detector tracker sample. I understand that the input to the network is float32 and the output from the network is also float32. So the model is a float32 model and hence there was no need for quantization.
I went through all the samples and documentation but couldn't find out how to perform quantization on images.
What are your thoughts? Is there any module you are developing to support quantization? If not, do you suggest writing a CUDA kernel to perform it?
When you generate the INT8 model using the --int8 flag with the tensorRT_optimization tool, it produces an INT8 model. Except for the input and output layers, all intermediate layers use INT8 data. Does that serve the purpose?
Firstly, please confirm whether you are able to run your model with the trtexec tool.
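A quick sanity check could look like the following; the model filename is a placeholder for your actual ONNX file (`--onnx` and `--int8` are standard trtexec flags):

```shell
# Build the engine from the ONNX model with INT8 kernels enabled and
# report timing; fails early if the model is incompatible with the GPU.
trtexec --onnx=model.onnx --int8
```

If this succeeds on the target, the model itself is fine and any remaining issue lies in the DriveWorks integration.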
Firstly, the compiled model is built for compute capability 8.6 while my Orin's compute capability is 8.7, hence the error.
Could you check flashing DRIVE OS 6.0.8 using Docker and then run the object detector sample directly on the target with default settings? I don't see any issue.
Please check the topic "Testing sample_object_detector_tracker on 6.0.8 fails". I could run the sample. Let me know if you face any issues.
I will try to figure out what the issue is in 6.0.6, as I have installed a lot of dependencies on my target Orin.
As of now, I see bounding boxes getting detected with very low confidence, not exceeding the threshold of 0.6. I will try to debug further.
Otherwise, this coming Monday I will install 6.0.8 and try the same.
I request you to please keep this ticket open until then and not close it.
In case you have the detector model built for compute 8.7, could you share it? I compiled weights.onnx using the tensorRT_optimization tool without passing any parameters. I am sure it compiled correctly, but I still wanted to compare against your 8.7-compatible model.
@SivaRamaKrishnaNV and @arkos, I am able to run the tensorRT_optimization tool and got it working for a YOLO model. Please let me know if you need some help.
@0xdeadbeef In DRIVE OS 6.0.6?
@SivaRamaKrishnaNV I am able to run YOLO on 6.0.8. You can close the ticket.