Creating an engine for yolov4_tiny on Jetson Xavier NX

Tools used for generating the QAT-enabled yolov4_tiny model:

  • Hardware: GeForce RTX 4070 Ti
  • Network type: yolov4_tiny
  • TLT version: format_version 2.0, toolkit_version 4.0.1

The generated files are:

  • .etlt file
  • cal.json file

Both were produced from training with QAT enabled.
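For reference, an engine is typically built from the exported files with tao-converter on the target device. This is only a sketch: the key, input tensor name, shapes, and calibration-cache filename below are assumptions (YOLOv4-tiny exports commonly use an input named `Input`, and the QAT cal.json is expected to have been converted to a TensorRT calibration cache first); adjust them to match your actual export.

```shell
# Hypothetical invocation of tao-converter on the Jetson.
# $NGC_KEY, the input shapes, and cal.bin are placeholders, not values
# taken from this thread.
./tao-converter \
  -k "$NGC_KEY" \
  -t int8 \
  -c cal.bin \
  -p Input,1x3x416x416,8x3x416x416,16x3x416x416 \
  -e yolov4_tiny_int8.engine \
  yolov4_tiny.etlt
```

Alternatively, DeepStream's nvinfer plugin can consume the .etlt directly and build the engine on first run, which avoids calling tao-converter by hand.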

A link to my previous question, for an overview:



Now I want to deploy the saved model on the Jetson.

  • Hardware Platform : Jetson xavier nx
  • DeepStream Version : 6.2
  • JetPack Version : 5.1

Since it is a QAT-trained model, I can only deploy it in INT8 mode (correct?).
I cannot figure out how to do this; please explain the procedure. I am unable to build TensorRT OSS with the docs provided, most likely because the TensorRT version I have is 8.5.2.2 and the instructions download an older one. I am now trying the release (8.5.2) branch, but it is still not clear to me.
Any further help would be appreciated.
Also, CMake reports that CMAKE_CUDA_COMPILER could not be found.
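For what it's worth, a TensorRT OSS build for 8.5.x on Jetson usually looks something like the sketch below. Passing `-DCMAKE_CUDA_COMPILER` explicitly is the common fix for the "CMAKE_CUDA_COMPILER could not be found" error when nvcc is not on PATH. The branch name, library path, and GPU arch are assumptions to verify against your setup (Xavier NX is compute capability 7.2, hence `GPU_ARCHS="72"`).

```shell
# Sketch of a TensorRT OSS build on JetPack 5.1; paths are assumptions.
git clone -b release/8.5 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive

mkdir -p build && cd build
cmake .. \
  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu \
  -DTRT_OUT_DIR="$(pwd)/out" \
  -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc \
  -DGPU_ARCHS="72"
make nvinfer_plugin -j"$(nproc)"
```

The important points are matching the repo branch to the installed TensorRT (release/8.5 rather than a TensorRT 7-era checkout) and giving CMake the nvcc path explicitly.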

What do you mean below?

Sorry, I was trying to generate TensorRT OSS for TensorRT 8.5.2 from an old repo (the TensorRT 7 repo). After making the necessary changes, I successfully generated the file.
Thanks anyway.
