Tools used for generating the QAT-enabled yolov4_tiny model:
1. Hardware: GeForce RTX 4070 Ti
2. Network Type: yolov4_tiny
3. TLT Version: format_version: 2.0, toolkit_version: 4.0.1
The generated files are:
- .etlt file
- cal.json file
and the model was trained with QAT enabled.
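For context, this is roughly the export step that produced those two files. The paths, spec file, and `$KEY` are placeholders for my setup, and the flag names are my reading of the TAO Toolkit 4.0 docs, so treat this as a sketch rather than the exact command:

```shell
# Sketch: exporting the QAT-trained model with TAO Toolkit 4.0.
# Paths, the spec file, and $KEY are placeholders for this setup.
tao yolo_v4_tiny export \
  -m /workspace/results/weights/yolov4_tiny_qat.tlt \
  -e /workspace/specs/yolo_v4_tiny_retrain_qat.txt \
  -k $KEY \
  -o /workspace/export/yolov4_tiny_qat.etlt \
  --data_type int8 \
  --cal_json_file /workspace/export/cal.json
```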
Link to my previous question, for an overview:
Now I want to deploy the saved model on Jetson:
- Hardware Platform: Jetson Xavier NX
- DeepStream Version: 6.2
- JetPack Version: 5.1
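My understanding is that on the Jetson side the .etlt is converted to a TensorRT engine with tao-converter. Here is a hedged sketch; the input dimensions, output node name, and the use of `-c` to point at the calibration file are assumptions based on the YOLOv4 examples, and the aarch64 tao-converter binary has to match the JetPack/TensorRT version:

```shell
# Sketch: building the INT8 engine on the Xavier NX with tao-converter.
# -d (input dims), -o (output nodes), and $KEY are assumptions -- check
# your training spec and the key used at export time.
./tao-converter yolov4_tiny_qat.etlt \
  -k $KEY \
  -c cal.json \
  -t int8 \
  -d 3,416,416 \
  -o BatchedNMS \
  -e yolov4_tiny_qat.engine
```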
Since it is a QAT-trained model, I can only deploy it in INT8 mode (correct?).
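If INT8 is indeed the right mode, I would expect the Gst-nvinfer config to look roughly like this, with network-mode=1 selecting INT8 so the QAT scales are used. File names and the key are placeholders from my setup:

```
[property]
# Placeholders: adjust paths and the model key to your setup
tlt-encoded-model=yolov4_tiny_qat.etlt
tlt-model-key=<your-key>
int8-calib-file=cal.json
# network-mode: 0=FP32, 1=INT8, 2=FP16 -- INT8 to match the QAT training
network-mode=1
# Engine file, built ahead of time or generated by nvinfer on first run
model-engine-file=yolov4_tiny_qat.engine
```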
I cannot figure out how to do this. Please explain the procedure; I am not able to build TensorRT OSS with the docs provided.
Most probably this is because the TensorRT version I have is 22.214.171.124 while the build script downloads an older one. I am now trying the release 8.5.2 tarball, but it is still not clear to me.
Any further help would be appreciated.
Also, it says CMAKE_CUDA_COMPILER could not be found.
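For reference, this is what I have been attempting for the TensorRT OSS plugin build, including pointing CMake at nvcc explicitly to get past the CMAKE_CUDA_COMPILER error. The branch name and GPU_ARCHS=72 (Xavier NX is sm_72) are my assumptions from the docs, not verified:

```shell
# Sketch: building only nvinfer_plugin from TensorRT OSS on Jetson (aarch64).
# Assumptions: release/8.5 branch matches the JetPack 5.1 TensorRT,
# and CUDA is installed under /usr/local/cuda.
git clone -b release/8.5 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. \
  -DGPU_ARCHS=72 \
  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ \
  -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc  # works around "CMAKE_CUDA_COMPILER could not be found"
make nvinfer_plugin -j$(nproc)
```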