Increase inference speed on Jetson Nano using TensorRT

Device: Jetson Nano Devkit (4GB)
OS: JetPack 4.5.1
CUDA: 10.2

I am trying to run a use case with a Caffe model for person detection.
I have the following two files:

  1. MobileNetSSD_deploy.caffemodel
  2. MobileNetSSD_deploy.prototxt.txt

I ran the following command as suggested by @AastaLLL:
/usr/src/tensorrt/bin/trtexec --deploy=MobileNetSSD_deploy.prototxt.txt --model=MobileNetSSD_deploy.caffemodel --output=detection_out --output=keep_count
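As a side note, trtexec can also serialize the parsed engine with --saveEngine so it is not rebuilt on every run (--saveEngine is a real trtexec flag, but the engine file name below is an assumption). A minimal sketch that assembles the command line in Python:

```python
# Sketch: build the trtexec command line for the Caffe model pair from
# this thread, optionally adding --saveEngine to cache the optimized
# TensorRT engine on disk. The engine path is an assumption.

TRTEXEC = "/usr/src/tensorrt/bin/trtexec"

def build_trtexec_cmd(prototxt, caffemodel, outputs, engine_path=None):
    """Return the trtexec argument list for a Caffe deploy/model pair."""
    cmd = [TRTEXEC,
           "--deploy=" + prototxt,
           "--model=" + caffemodel]
    for out in outputs:                 # one --output flag per network output
        cmd.append("--output=" + out)
    if engine_path:                     # optional: serialize the built engine
        cmd.append("--saveEngine=" + engine_path)
    return cmd

cmd = build_trtexec_cmd("MobileNetSSD_deploy.prototxt.txt",
                        "MobileNetSSD_deploy.caffemodel",
                        ["detection_out", "keep_count"],
                        engine_path="mobilenet_ssd.engine")
print(" ".join(cmd))
```

On the device this list can be run with subprocess.run(cmd, check=True); keeping the flags in one helper avoids typos when switching models.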

The following is the result:

After this, when I ran the use case, I got the following error:

I tried to find a solution online but was not able to fix it.
Thanks.

Hi,

The backend inference frameworks are different:
trtexec uses our inference library, TensorRT, while OpenCV has its own DNN implementation.

Based on your error log, the app cannot open the camera correctly.
You can find some examples on this forum.

For example, search for OpenCV camera:
https://forums.developer.nvidia.com/search?q=opencv%20camera%20tags:gstreamer%20%23agx-autonomous-machines:jetson-embedded-systems
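On Jetson, OpenCV typically needs an explicit GStreamer pipeline to open the CSI camera (a plain cv2.VideoCapture(0) often fails with nvarguscamerasrc devices). A minimal sketch, assuming the standard JetPack nvarguscamerasrc pipeline and an OpenCV build with GStreamer support; the resolution and framerate values are assumptions to adjust for your sensor mode:

```python
# Sketch: build the GStreamer pipeline string that OpenCV on Jetson
# usually needs for the CSI camera. Element names (nvarguscamerasrc,
# nvvidconv, appsink) are the standard JetPack ones; width/height/fps
# defaults below are assumptions.

def gst_csi_pipeline(width=1280, height=720, fps=30, flip=0):
    """GStreamer pipeline delivering BGR frames from a Jetson CSI camera."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "   # hardware convert + optional rotate
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"  # hand BGR frames to OpenCV
    )

pipeline = gst_csi_pipeline()
print(pipeline)

# On the device (requires OpenCV built with GStreamer):
# import cv2
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
# ok, frame = cap.read()
```

For a USB camera, a v4l2src-based pipeline (or the plain device index) is used instead of nvarguscamerasrc.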

Thanks.
