Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and the function description.)
Dear NVIDIA Support Team,
I am trying to integrate an emotion detection ONNX model into DeepStream 7.1 for use with the Clara Guardian platform, but I am facing several critical issues in the process:
- TensorRT Engine Generation Errors:
  - During engine generation with `trtexec`, I encountered dimension-mismatch errors (`profile has min=3, opt=3, max=3 but tensor has 160`) even after adjusting the input shapes manually.
  - I had to disable `--explicitBatch` and correct the input dimensions before the `.engine` file would build successfully.
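For reference, the kind of `trtexec` invocation I would expect to need for the dynamic batch dimension is sketched below — an explicit optimization profile over the batch axis. The file names and the batch range (1/8/16) are placeholders, not my actual values; `input_5` is the ONNX input name from my model:

```shell
# Build a TensorRT engine with an optimization profile covering the
# dynamic batch dimension (input_5 is NHWC: batch x 160 x 160 x 3).
trtexec \
  --onnx=emotionnet.onnx \
  --saveEngine=emotionnet_b16.engine \
  --minShapes=input_5:1x160x160x3 \
  --optShapes=input_5:8x160x160x3 \
  --maxShapes=input_5:16x160x160x3 \
  --fp16
```

Please confirm whether this is the intended way to handle the dynamic batch axis for DeepStream 7.1, or whether the model should instead be re-exported with a fixed batch size.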
- DeepStream Pipeline Failure:
  - After setting up the DeepStream configuration (`config_infer_primary_emotionnet.txt`), running `deepstream-app` fails with the errors:
    `"Failed to set pipeline to PAUSED"`
    `"Output width not set"` (from `src_bin_muxer`).
  - This appears to be caused by incomplete or incompatible configuration parameters, although I followed the standard setup guidelines (setting `input-dims`, `output-blob-names`, and linking the generated `.engine` file).
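My understanding is that the `Output width not set` error from `src_bin_muxer` indicates that the `[streammux]` group in the top-level `deepstream-app` config is missing its `width`/`height` keys. A minimal sketch of the group I believe is required is below (the resolution and timeout values are placeholders, not my actual stream parameters):

```ini
[streammux]
gpu-id=0
batch-size=1
# Output resolution must be set explicitly; "Output width not set"
# is raised when these keys are missing.
width=1280
height=720
batched-push-timeout=40000
```

Please correct me if the error has a different cause in DeepStream 7.1.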
- ONNX Model Format Concerns:
  - The model input (`input_5`) has a dynamic batch dimension (`unk__1418`) and static dimensions `(160, 160, 3)`. Adjusting `input-dims` in the DeepStream config did not fully resolve the integration problems.
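For an NHWC input with a dynamic batch axis, my understanding from the Gst-nvinfer documentation is that the config should declare the per-frame dimensions with `infer-dims` (batch axis excluded) and the tensor layout with `network-input-order`. A sketch of the relevant `[property]` keys is below; the file paths and the output blob name are placeholders, and I am not certain `network-type=1` (classifier) is the right choice for this model:

```ini
[property]
onnx-file=emotionnet.onnx                # placeholder path
model-engine-file=emotionnet_b16.engine  # placeholder path
batch-size=16                            # should match the engine's max profile
# Per-frame dimensions without the batch axis; model input is 160x160x3 (HWC)
infer-dims=160;160;3
# 0 = NCHW (default), 1 = NHWC; this model is NHWC
network-input-order=1
network-type=1                           # classifier (assumption)
output-blob-names=dense_1                # placeholder output name
```

Guidance on whether this is the correct way to describe an NHWC, dynamic-batch ONNX input to nvinfer would be appreciated.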
- Impact:
  - These issues block real-time emotion detection integration with Clara Guardian and are affecting project timelines.
Request:
- Support with properly preparing ONNX models and building TensorRT engines compatible with DeepStream 7.1.
- Guidance on the correct DeepStream configuration for custom emotion detection models with dynamic-batch ONNX inputs.
- A recommendation on whether Triton Inference Server should be preferred over DeepStream for emotion detection with Clara Guardian integration.
I can provide model files, configuration files, logs, and detailed reproduction steps if needed.
Thank you for your assistance.