How to use a UFF model for live inference with a Raspberry Pi camera?

I have followed all the steps in [How to use ssd_mobilenet_v2 - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums] and everything works. My question is: how do I use it for live inference with the Pi camera, and how can I extract the labels and the bounding-box x and y coordinates from it?
Or is there another way to do what I want with SampleUffSSD instead of DeepStream?

Hi,

You can update the configuration directly to run it with a live camera source.
Below are example configurations for USB and CSI cameras for your reference:

/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt
/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source1_csi_dec_infer_resnet_int8.txt
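For a USB camera (type=1), the [source0] section would look roughly like the sketch below. The resolution and the V4L2 device node (camera-v4l2-dev-node=0, i.e. /dev/video0) are assumptions and depend on your camera:

```
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
```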

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=5
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
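Regarding extracting the labels and bounding boxes: one common approach is to attach a GStreamer buffer probe and walk the DeepStream metadata. Below is a minimal sketch assuming the DeepStream Python bindings (pyds); the names follow the official deepstream_python_apps samples, but this exact probe is untested on your setup:

```python
# Sketch: read labels and bounding boxes from DeepStream metadata in a pad probe.
# Assumes the pyds bindings are installed on the device; the guarded import
# lets the formatting helper below be used standalone.
try:
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    import pyds
except ImportError:            # bindings only exist on the target device
    Gst = pyds = None

def describe_object(label, confidence, left, top, width, height):
    """Format one detection as 'label confidence [x, y, w, h]'."""
    return "%s %.2f [%d, %d, %d, %d]" % (label, confidence, left, top, width, height)

def osd_sink_pad_probe(pad, info, u_data):
    """Pad probe: walk batch -> frame -> object metadata and print each box."""
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj.rect_params          # box in pixels, relative to the frame
            print(describe_object(obj.obj_label, obj.confidence,
                                  int(r.left), int(r.top),
                                  int(r.width), int(r.height)))
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

You would attach this to (for example) the OSD element's sink pad with osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_probe, 0) inside a custom pipeline, since deepstream-app itself does not expose Python probes.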

Thanks.
