• Hardware Platform: Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
Hi, we have deployed the TLT PeopleNet model on a site with 11 cameras and it is running a bit slow. We have tried lowering the camera resolution, decreasing the FPS to 8-10, and setting the interval in the pgie config. As expected, a larger interval costs accuracy; we observed a noticeable drop above an interval of 2. Our next step is to look into using INT8 and the DLA, since we are on a Xavier.
So, how do we create an INT8 calibration file for the PeopleNet TLT model? I also know you can specify enable-dla in the DeepStream app config, but how do I use the DLA and GPU together? Do I have to create multiple pgies and assign cameras to each?
I know you can specify enable-dla in the DeepStream app config, but how do I use the DLA and GPU together? Do I have to create multiple pgies and assign cameras to each?
Yes, to use both the DLA and the GPU you need separate TensorRT engines, one built for each device.
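In practice that means two nvinfer config files that differ only in their device settings and engine files. Note that deepstream-app supports a single [primary-gie], so splitting cameras between the DLA and the GPU generally means a custom pipeline or two app instances, each pointed at its own config. A sketch with hypothetical filenames:

```ini
# config_infer_dla0.txt -- uses an engine built for DLA core 0
[property]
enable-dla=1
use-dla-core=0
network-mode=1
model-engine-file=peoplenet_int8_dla0.engine

# config_infer_gpu.txt -- uses an engine built for the GPU
[property]
enable-dla=0
network-mode=1
model-engine-file=peoplenet_int8_gpu.engine
```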
How do we create an INT8 calibration file for the PeopleNet TLT model?
Is it possible to use the pruned PeopleNet ResNet34 .etlt instead of the generic detectnet_v2 .tlt? I need to run PeopleNet on the DLA cores (for that I need to export it using the --force-ptq flag), but NGC does not seem to host the pruned .tlt, and PeopleNet was trained on a proprietary dataset.
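For the calibration-file question: if you have a .tlt (for example a retrained detectnet_v2 model), tlt-export can generate the INT8 calibration cache from a directory of representative images. A hedged sketch run inside the TLT container, with all paths and the key being placeholders (note the exported flag is spelled --force_ptq):

```shell
# Hypothetical paths/key; run inside the TLT (Transfer Learning Toolkit) container
tlt-export detectnet_v2 \
  -m /workspace/models/peoplenet_retrained.tlt \
  -k $YOUR_NGC_KEY \
  -o /workspace/export/peoplenet.etlt \
  --data_type int8 \
  --cal_image_dir /workspace/data/calib_images \
  --batches 10 \
  --batch_size 8 \
  --cal_cache_file /workspace/export/peoplenet_int8.txt \
  --force_ptq
```

The resulting cal_cache_file is what the int8-calib-file key in the nvinfer config points to. For the stock PeopleNet .etlt from NGC (where no .tlt is published), check the model card for a pre-generated INT8 calibration file rather than exporting one yourself.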