Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Multiarch (x86_64 and Jetson)
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version: 8.6.2
• NVIDIA GPU Driver Version (valid for GPU only): 535.183.01
• Issue Type (questions, new requirements, bugs): questions
Hi community,
I have custom PyTorch detection models for x86_64 and Jetson that serve inference requests as the PGIE through nvdsinferserver, with pyds for metadata handling. Now I need to add a custom PyTorch classifier as an SGIE, also through nvdsinferserver, operating on the objects clipped out by the PGIE, without paying for costly DtoH (device-to-host) copies.
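Here is a sketch of the SGIE nvdsinferserver config I have in mind. Hedging heavily: `my_classifier`, the repo root, and the unique_id values are placeholders for my setup, and I am only assuming that `output_mem_type: MEMORY_TYPE_GPU` is the right knob for keeping the attached output tensors in device memory:

```
infer_config {
  unique_id: 2
  gpu_ids: [0]
  max_batch_size: 16
  backend {
    triton {
      model_name: "my_classifier"     # placeholder for my model
      version: -1
      model_repo {
        root: "./triton_model_repo"   # placeholder path
        strict_model_config: true
      }
    }
    # Assumption: this keeps output tensors in device memory (no DtoH).
    output_mem_type: MEMORY_TYPE_GPU
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    normalize {
      scale_factor: 0.0078125
      channel_offsets: [127.5, 127.5, 127.5]
    }
  }
  postprocess {
    other {}  # raw tensor output; I parse it myself
  }
}
input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS  # run on objects clipped by the PGIE
  operate_on_gie_id: 1                     # PGIE unique_id
}
output_control {
  output_tensor_meta: true  # attach raw tensors as NvDsInferTensorMeta
}
```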
I've been browsing the custom YOLO example and the custom SSD example. Both run OK, but they copy the output tensors to host memory before parsing them.
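To be concrete about the copying I mean, the host-side access in those samples boils down to something like the helper below (`read_layer_on_host` is just my name for the pattern; `layer` would be an NvDsInferLayerInfo from pyds):

```python
import ctypes

import numpy as np
import pyds


def read_layer_on_host(layer, num_elements):
    # The samples' pattern: cast the layer buffer to a host pointer and view
    # it with NumPy. This works because the default output_mem_type leaves
    # output tensors in CPU memory, i.e. a DtoH copy has already happened.
    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                      ctypes.POINTER(ctypes.c_float))
    return np.ctypeslib.as_array(ptr, shape=(num_elements,))
```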
My questions:

1. Is my pipeline nvdsmultiurisrcbin -> pgie -> sgie -> tensors feasible without any C++ or Python coding beyond my PyTorch models?
2. If I do need to write code, how can I avoid DtoH copies in the tensor processing?
3. Is that tensor processing doable in Python, e.g. along the lines of the probe sketch below?
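For question 3, this is the kind of pad probe I would attempt. It is only a sketch under two assumptions: that `output_mem_type: MEMORY_TYPE_GPU` really leaves `layer.buffer` pointing at device memory, and that the output layers are FP32; CuPy's `UnownedMemory` is used to wrap the raw pointer without a copy.

```python
import ctypes

import cupy as cp
import gi
import pyds

gi.require_version("Gst", "1.0")
from gi.repository import Gst


def sgie_src_pad_buffer_probe(pad, info, u_data):
    """Read SGIE output tensors without copying them to the host.

    Assumes the SGIE config requested device-resident outputs
    (output_mem_type: MEMORY_TYPE_GPU) and FP32 layers.
    """
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if (user_meta.base_meta.meta_type
                        == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META):
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(
                        user_meta.user_meta_data)
                    for i in range(tensor_meta.num_output_layers):
                        layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                        n = layer.inferDims.numElements
                        # Wrap the raw device pointer with CuPy; no DtoH copy.
                        ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                          ctypes.c_void_p).value
                        mem = cp.cuda.UnownedMemory(ptr, n * 4, owner=None)
                        scores = cp.ndarray(
                            (n,), dtype=cp.float32,
                            memptr=cp.cuda.MemoryPointer(mem, 0))
                        # argmax runs on the GPU; only one scalar comes back.
                        obj_meta.class_id = int(cp.argmax(scores))
                l_user = l_user.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

If this is roughly the right direction, I would attach the probe to the SGIE src pad; if there is a supported way that needs no coding at all, even better.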