Run object detection from a CSI camera and classification (batches of 50 images, 100*100*3) using TensorRT

Hello,
I have been working with TensorRT for a few weeks now and I am using a Jetson TX2. I have run all the samples (classification and detection) from /usr/src/tensorrt/samples for both C++ and Python. I have also built jetson-inference (GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.) and run the imagenet and detectnet samples, and everything works.
I have trained an SSD model and an image classifier for a custom object using TensorFlow. I have the UFF file for the SSD, and for the classifier I have a frozen.pb which I converted to frozen.uff. I tested both with the sample programs on still images, and everything works there as well.

But the next part of my application involves using the video feed from the CSI camera for detection and processing approximately 50 images in one go for classification, and I am having difficulty doing so.

  1. Detection: How can I run the SSD (the base model is from TensorFlow, converted to a .uff file) on the feed from the CSI camera with TensorRT, using C++ and Python?

  2. Classification: I need to process 50 images in one go with my own model (also built using TensorFlow), using C++ and Python. A rough sketch of what I have in mind is at the end of this post.

As I mentioned before, the models work fine with a single image, but I have no clue how to set up the TensorRT engine for a video stream or for a batch of several images.
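For the batched classification, this is the direction I am considering in Python (a minimal sketch only; the input/output node names, the binding order, the workspace size, and the output size are assumptions that would need to match my actual frozen graph):

import numpy as np
import pycuda.autoinit  # creates a CUDA context for us
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

BATCH_SIZE = 50
# Assumed node names -- these have to match the real input/output
# nodes of my frozen TensorFlow graph.
INPUT_NAME = "input"
OUTPUT_NAME = "scores"
INPUT_SHAPE = (3, 100, 100)  # CHW order expected by the UFF parser

def build_engine(uff_path):
    # Build an engine whose maximum batch size is 50.
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        builder.max_batch_size = BATCH_SIZE
        builder.max_workspace_size = 1 << 28
        parser.register_input(INPUT_NAME, INPUT_SHAPE)
        parser.register_output(OUTPUT_NAME)
        parser.parse(uff_path, network)
        return builder.build_cuda_engine(network)

def infer_batch(engine, images):
    # images: contiguous float32 array of shape (BATCH_SIZE, 3, 100, 100).
    context = engine.create_execution_context()
    out_size = trt.volume(engine.get_binding_shape(1))  # assumes binding 1 is the output
    output = np.empty((BATCH_SIZE, out_size), dtype=np.float32)
    d_input = cuda.mem_alloc(images.nbytes)
    d_output = cuda.mem_alloc(output.nbytes)
    cuda.memcpy_htod(d_input, np.ascontiguousarray(images))
    # One synchronous call processes the whole batch.
    context.execute(BATCH_SIZE, [int(d_input), int(d_output)])
    cuda.memcpy_dtoh(output, d_output)
    return output

engine = build_engine("frozen.uff")
batch = np.random.rand(BATCH_SIZE, 3, 100, 100).astype(np.float32)
print(infer_batch(engine, batch).shape)

Would the C++ path be the same idea, i.e. setMaxBatchSize() on the builder and one execute() call with all 50 images packed into a single contiguous buffer?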

Hi,

We don't have an exact sample for your use case, but there are multiple ways to try.
Here are some C++-based tutorials for your reference:

1. jetson_inference:
It supports the CSI camera, and you can try to update the input model to UFF.

2. DeepStream 3.0:
It supports the CSI camera + UFF models, but it is only available for Xavier now.
We are going to release a new version for JetPack 4.2 in Q2, and it should also support the TX2 at that time.

3. tf_to_trt_image_classification:
It supports UFF models with OpenCV image input.
You can update the sample to take video input through a GStreamer pipeline; see the sketch after this list.
tf_to_trt_image_classification/classify_image.cu at master · NVIDIA-AI-IOT/tf_to_trt_image_classification · GitHub
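For example, here is a rough Python sketch of replacing the still-image input with the CSI camera. It is only an illustration: it assumes an OpenCV build with GStreamer support and the nvarguscamerasrc element from the newer JetPack releases (older releases use nvcamerasrc instead), and the resolution/framerate values are placeholders. The same pipeline string can also be passed to cv::VideoCapture in C++.

import cv2

# CSI camera -> BGR frames on the CPU, via the Jetson GStreamer elements.
GST_PIPELINE = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(GST_PIPELINE, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Failed to open the CSI camera via GStreamer")

while True:
    ok, frame = cap.read()  # frame is a BGR numpy array
    if not ok:
        break
    # Preprocess the frame and feed it to the TensorRT engine here,
    # exactly as the sample does for a single still image.
    cv2.imshow("CSI camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()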

Thanks.

The link shows the benchmarks on the Jetson Nano for various detectors and classifiers. I want to re-implement the results for SSD 300*300, but on video, to compare the FPS.
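My plan is to time a plain decode-and-infer loop like the sketch below (run_ssd is just a placeholder for whatever ends up wrapping the TensorRT engine, not a real API):

import time
import cv2

def run_ssd(frame):
    # Placeholder for the TensorRT SSD inference call.
    pass

cap = cv2.VideoCapture("test.mp4")  # or the CSI GStreamer pipeline
frames = 0
start = time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    run_ssd(cv2.resize(frame, (300, 300)))  # SSD expects 300x300 input
    frames += 1
elapsed = time.time() - start
print("%d frames in %.2f s -> %.1f fps" % (frames, elapsed, frames / elapsed))
cap.release()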

The samples in /usr/src/tensorrt work on images. Aren't the Jetson embedded devices mostly meant to be used for applications that involve cameras for inference? So why are there no samples that use a camera feed instead of images?

Hi,

We do have some camera-based examples, e.g. jetson_inference and MMAPI.
But they do not use UFF models.

We are not able to cover every use case since there are too many possible combinations.
Maybe you can wait for DeepStream 3.0, which can fulfill your requirement directly.

Thanks.