How to use nvdsinferserver as PGIE and as SGIE without DtoH + HtoD copies on Jetson?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Multiarch
• DeepStream Version 6.4
• JetPack Version (valid for Jetson only) 6.0
• TensorRT Version 8.6.2
• NVIDIA GPU Driver Version (valid for GPU only) 535.183.01
• Issue Type( questions, new requirements, bugs) questions

Hi community,

I have custom PyTorch detection models for x64 and Jetson that serve inference requests through pyds as the PGIE using nvdsinferserver. Now I need to add a custom PyTorch classifier as an SGIE, also with nvdsinferserver, that operates on the objects cropped by the PGIE without paying for costly DtoH (device-to-host) copies.

I’ve been browsing the custom YOLO example and the custom SSD example. Both run OK, but both copy the output tensors to host memory.

Is my pipeline nvdsmultiurisrcbin -> pgie -> sgie -> tensors feasible without any C++ or Python coding beyond my PyTorch models?
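For context, this is roughly what I have in mind, sketched with gst-launch-1.0. The config file names are placeholders, and I still need to verify the exact element and property names with gst-inspect-1.0 on DeepStream 6.4:

    gst-launch-1.0 \
        nvmultiurisrcbin uri-list=file:///opt/videos/sample.mp4 width=1920 height=1080 ! \
        nvinferserver config-file-path=config_triton_pgie_detector.txt ! \
        nvinferserver config-file-path=config_triton_sgie_classifier.txt ! \
        fakesink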

If I do need to write code, how can I avoid DtoH copies during the tensor processing?
Is this tensor processing doable in Python?

Which sample are you testing or referring to? How do you know “those examples are copying tensors to host memory”?

If you don't do the DtoH copy, how do you do the postprocessing?

Which sample are you testing or referring to? How do you know “those examples are copying tensors to host memory”?

I’m referring to

  1. /opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/deepstream-ssd-parser/deepstream_ssd_parser.py
  2. /opt/nvidia/deepstream/deepstream-6.4/sources/TritonOnnxYolo/nvdsinferserver_custom_impl_yolo/nvdsinferserver_custom_process_yolo.cpp

Because no CUDA compiler or CUDA library is used in this Makefile: /opt/nvidia/deepstream/deepstream-6.4/sources/TritonOnnxYolo/nvdsinferserver_custom_impl_yolo/Makefile

Maybe by coding a custom parser library that uses the CUDA runtime API?

You can set output_mem_type: MEMORY_TYPE_GPU in the nvinferserver configuration file. Then, in inferenceDone, you will find that desc.memType becomes 1 (kGpuCuda).
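As a minimal sketch of where that setting lives in the protobuf-style nvinferserver config, modeled on the TritonOnnxYolo sample (the model name, repository path, and batch size below are hypothetical placeholders for your classifier):

    infer_config {
      unique_id: 2
      gpu_ids: [0]
      max_batch_size: 16
      backend {
        triton {
          model_name: "my_classifier"     # hypothetical Triton model name
          version: -1
          model_repo {
            root: "./triton_model_repo"   # hypothetical repository path
            strict_model_config: true
          }
        }
        # Keep output tensors in device memory so the custom processor
        # receives GPU buffers instead of host copies.
        output_mem_type: MEMORY_TYPE_GPU
      }
    }

And a sketch of how a custom processor could consume those GPU buffers in inferenceDone without a DtoH copy, based on the interfaces used by nvdsinferserver_custom_process_yolo.cpp. The exact class and accessor names should be checked against infer_custom_process.h / infer_datatypes.h in your DeepStream install, launchMyCudaPostproc is a hypothetical CUDA kernel wrapper, and the library still needs the exported creation function that the config's custom processing settings point to (see the YOLO sample):

    #include <vector>
    #include "infer_custom_process.h"
    #include "infer_datatypes.h"

    namespace dsis = nvdsinferserver;

    class MyGpuClassifierProcessor : public dsis::IInferCustomProcessor {
    public:
        // No extra model inputs are prepared in this sketch.
        NvDsInferStatus extraInputProcess(
            const std::vector<dsis::IBatchBuffer*>& primaryInputs,
            std::vector<dsis::IBatchBuffer*>& extraInputs,
            const dsis::IOptions* options) override
        {
            return NVDSINFER_SUCCESS;
        }

        // Called with the raw output tensors of each inference.
        NvDsInferStatus inferenceDone(
            const dsis::IBatchArray* outputs, const dsis::IOptions* inOptions) override
        {
            for (uint32_t i = 0; i < outputs->getSize(); ++i) {
                const dsis::IBatchBuffer* buf = outputs->getBuffer(i);
                const dsis::InferBufferDescription& desc = buf->getBufDesc();
                if (desc.memType == dsis::InferMemType::kGpuCuda) {
                    // Device pointer: postprocess directly with a CUDA kernel,
                    // no cudaMemcpy(..., cudaMemcpyDeviceToHost) needed.
                    void* devPtr = buf->getBufPtr(0);
                    (void)devPtr;  // silence unused-variable warning in this sketch
                    // launchMyCudaPostproc(devPtr, desc.dims, desc.dataType);  // hypothetical
                }
            }
            return NVDSINFER_SUCCESS;
        }

        void notifyError(NvDsInferStatus status) override {}
    };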

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.