Some questions about DeepStream 5

Hi all,
I have some questions:
1- What's the difference between Triton and the TensorRT Inference Server?
2- Is it possible to run the TLT PeopleNet and FaceDetection models in the DeepStream Python apps? If not, do these models only run in the closed-source DeepStream version? Is there no way to run these models from Python code free of cost?
3- What's the difference between DeepStream's decoder in Python and OpenCV + GStreamer in Python? How optimal is the method below compared with the DeepStream method?

> import cv2
>
> gstream_elements = (
>     'rtspsrc location=rtsp latency=300 ! '
>     'rtph264depay ! h264parse ! '
>     'omxh264dec ! '
>     'video/x-raw(memory:NVMM), format=(string)NV12 ! '
>     'nvvidconv ! video/x-raw, format=(string)BGRx ! '
>     'videoconvert ! video/x-raw, format=(string)BGR ! '
>     'appsink'
> )
> cap = cv2.VideoCapture(gstream_elements, cv2.CAP_GSTREAMER)

4- In this link, is it possible to use the sample codes for the TLT YOLOv3/FaceDetection/PeopleNet models?

1- What's the difference between Triton and the TensorRT Inference Server?

https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html : NVIDIA Triton Inference Server (formerly TensorRT Inference Server)

2- Is it possible to run the TLT PeopleNet and FaceDetection models in the DeepStream Python apps?

Yes, you can.

3- What's the difference between DeepStream's decoder in Python and OpenCV + GStreamer in Python?

OpenCV decoding does not use the NVIDIA HW decoder
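To illustrate the point (a minimal sketch; the URL is a placeholder): opening the stream directly lets OpenCV's FFmpeg backend decode on the CPU, whereas the NVIDIA HW decoder is only reached through a GStreamer pipeline opened with cv2.CAP_GSTREAMER, as in the question above.

    import cv2

    # Plain OpenCV open: the FFmpeg backend decodes the H.264 stream on the CPU.
    cap_cpu = cv2.VideoCapture("rtsp://example.com/stream")  # placeholder URL

    # The NVIDIA HW decoder is only used when a GStreamer pipeline containing a
    # HW decoder element is passed with the GStreamer backend, e.g.
    # cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)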

4- In this link, is it possible to use the sample codes for the TLT YOLOv3/FaceDetection/PeopleNet models?

Yes, a similar pipeline is https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps, but its source is a local file instead of RTSP.

Thanks.

OpenCV decoding does not use the NVIDIA HW decoder

Of course, OpenCV alone doesn't use the NVIDIA HW decoder, but OpenCV + GStreamer with omxh264dec or nvv4l2decoder does use the NVIDIA HW decoder.

Yes, a similar pipeline is https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps, but its source is a local file instead of RTSP.

Is it possible to change the local file source to RTSP in this sample code?

https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html : NVIDIA Triton Inference Server (formerly TensorRT Inference Server)

NVIDIA Triton Inference Server (formerly TensorRT Inference Server) provides a cloud inferencing solution optimized for NVIDIA GPUs.

So this is not suitable for local inference on edge devices like Jetson, right?

Of course, OpenCV alone doesn't use the NVIDIA HW decoder, but OpenCV + GStreamer with omxh264dec or nvv4l2decoder does use the NVIDIA HW decoder.

omxh264dec is an old GStreamer decoder plugin and will be deprecated, so we recommend using nvv4l2decoder instead.
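For reference, a minimal sketch of the pipeline from the question with omxh264dec replaced by nvv4l2decoder (the RTSP URL is a placeholder):

    import cv2

    pipeline = (
        'rtspsrc location=rtsp://example.com/stream latency=300 ! '  # placeholder URL
        'rtph264depay ! h264parse ! '
        'nvv4l2decoder ! '                                 # HW decoder, replaces omxh264dec
        'nvvidconv ! video/x-raw, format=(string)BGRx ! '  # NVMM NV12 -> system-memory BGRx
        'videoconvert ! video/x-raw, format=(string)BGR ! '
        'appsink'
    )
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)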

Is it possible to change the local file source to RTSP in this sample code?

Yes, you can make the change yourself.
Also, RTSP is a well-supported source in DeepStream; you can simply edit the deepstream-app configuration file to enable an RTSP source.
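For example, a hypothetical snippet for the deepstream-app configuration file (source type 4 is RTSP; the URI is a placeholder):

    [source0]
    enable=1
    # type 4 = RTSP source
    type=4
    uri=rtsp://example.com/stream
    num-sources=1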

So this is not suitable for local inference on edge devices like Jetson, right?

DS 5.0 supports Triton inference; you can refer to its documentation.

Thanks!

Thanks,
Is the nvv4l2decoder plugin more efficient than omxh264dec?

Hi,

We are deprecating the gst-omx plugins. Please check the release notes.