Please provide the following info (tick the boxes after creating this topic):

**Software Version**
- [ ] DRIVE OS 6.0.8.1
- [ ] DRIVE OS 6.0.6
- [ ] DRIVE OS 6.0.5
- [ ] DRIVE OS 6.0.4 (rev. 1)
- [ ] DRIVE OS 6.0.4 SDK
- [ ] other

**Target Operating System**
- [ ] Linux
- [ ] QNX
- [ ] other

**Hardware Platform**
- [ ] DRIVE AGX Orin Developer Kit (940-63710-0010-300)
- [ ] DRIVE AGX Orin Developer Kit (940-63710-0010-200)
- [ ] DRIVE AGX Orin Developer Kit (940-63710-0010-100)
- [ ] DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
- [ ] DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
- [ ] DRIVE AGX Orin Developer Kit (not sure of its number)
- [ ] other

**SDK Manager Version**
- [ ] 1.9.3.10904
- [ ] other

**Host Machine Version**
- [ ] native Ubuntu Linux 20.04 Host installed with SDK Manager
- [ ] native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
- [ ] native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
- [ ] other
I am creating a pipeline to crop+resize+normalize+quantize images.
I understand that DataConditioner is the module for normalizing an image using a mean and standard deviation. My current mean and stddev are the ImageNet values, which I multiplied by 255 as required by the API. The output image is in floating point.
What I could not figure out is which API to use to quantize the floating-point data (the DataConditioner output) to int8 using a scale and zero point.
Dear @arkos,
The DW DNN module has an API to load a TRT model. Could you try the --int8 flag when generating the TRT model with the tensorRT_optimization tool?
You can choose the detection region in the image, and the input image can be resized automatically by DataConditioner. Please take a look at the object detector tracker sample.
Thanks for your message. I looked into the object detector tracker sample. I understand that both the input to the network and its output are float32, so the model itself is a float32 model and there was no need for quantization there.
I went through all the samples and documentation but could not find out how to perform quantization on images.
What are your thoughts? Is there a module under development to support quantization? If not, do you suggest writing a CUDA kernel to do this myself?
Dear @arkos,
When you generate the model with the --int8 flag of the tensorRT_optimization tool, the result is an INT8 model: except for the input and output layers, all intermediate layers use INT8 data. Does that serve your purpose?
First, please confirm whether you are able to run your model with the trtexec tool.
Some observations after running the object_detector_tracker sample. First, the shipped model is compiled for compute capability 8.6 while my Orin's compute capability is 8.7, hence an error.
So I recompiled /data/samples/detector/weight.onnx with the tensorRT_optimization tool.
Once that was done, I set the detectionRegion to 700x1270 and passed it to dwDataConditioner_prepareDataRaw().
But surprisingly, no cars are detected. I currently have the DataConditioner mean and stddev set to 0 and 1, respectively.
> Firstly the compiled model is for compute 8.6 and my orin’s compute is 8.7, hence error.
Could you try flashing DRIVE OS 6.0.8.1 using Docker and then run the object detector sample directly on the target with default settings? I don't see any issue there.
I will try to figure out what the issue is on 6.0.6, as I have installed a lot of dependencies on my target Orin.
As of now I see bounding boxes being detected, but with very low confidence that does not exceed the threshold of 0.6. I will try to debug further.
Otherwise, this coming Monday I will install 6.0.8 and try the same.
Please keep this ticket open until then and do not close it.
In case you have the detector model built for compute capability 8.7, could you share it? I compiled weights.onnx using the tensorRT_optimization tool without passing any extra parameters. I am fairly sure it compiled correctly, but I still wanted to compare against your 8.7-compatible model.
@SivaRamaKrishnaNV and @arkos, I am able to run the tensorRT_optimization tool and got it working for a YOLO model. Please let me know if you need some help.