YOLOv7 ONNX Triton inference

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) T4
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.2
• NVIDIA GPU Driver Version (valid for GPU only) 11.6
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

During inference I get the error below:

```
WARNING: Overriding infer-config batch-size 0 with number of sources 1
Creating nvtracker
Adding elements to Pipeline
Linking elements in the Pipeline
yolov7.py:953: PyGIDeprecationWarning: GObject.MainLoop is deprecated; use GLib.MainLoop instead
  loop = GObject.MainLoop()
test---------
Now playing...: inputimages/horses.jpg
Starting pipeline

0:00:00.390670756 24187      0x3f42a30 WARN           nvinferserver gstnvinferserver_impl.cpp:287:validatePluginConfig:<primary-inference> warning: Configuration file batch-size reset to: 1
INFO: infer_grpc_backend.cpp:164 TritonGrpcBackend id:5 initialized for model: yolov7
python3: infer_cuda_utils.cpp:86: nvdsinferserver::CudaTensorBuf::CudaTensorBuf(const nvdsinferserver::InferDims&, nvdsinferserver::InferDataType, int, const string&, nvdsinferserver::InferMemType, int, bool): Assertion `!hasWildcard(dims)' failed.
Aborted (core dumped)
```
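The `!hasWildcard(dims)` assertion means nvinferserver is trying to allocate a CUDA tensor buffer for a tensor whose dimensions still contain wildcards (dynamic, i.e. -1, axes), which it cannot do; this typically happens when the Triton model config or the ONNX export leaves an input/output dimension dynamic. A quick way to check whether the exported ONNX model carries dynamic shapes is to inspect it with the onnx Python package. This is a minimal sketch, assuming the model file is called yolov7.onnx (adjust the path to your actual export):

```
import onnx

# A minimal check for dynamic (wildcard) axes in the exported model.
# "yolov7.onnx" is an assumed file name; point it at your actual export.
model = onnx.load("yolov7.onnx")

def shape_of(value_info):
    dims = []
    for d in value_info.type.tensor_type.shape.dim:
        # Fixed axes carry dim_value; dynamic axes carry dim_param instead.
        dims.append(d.dim_value if d.HasField("dim_value") else -1)
    return dims

for vi in list(model.graph.input) + list(model.graph.output):
    print(vi.name, shape_of(vi))
```

If any axis other than the batch dimension prints as -1, the Triton config.pbtxt will likely need explicit dims for that tensor (or the model re-exported with fixed shapes) before nvinferserver can create its buffers.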

Please provide complete information as applicable to your setup.
• DeepStream Version

  1. Here is a sample: GitHub - thanhlnbka/yolov7-triton-deepstream
  2. Which sample is your application based on?
  3. How do you generate the yolov7 model? Could you share the Triton model configuration file and the DeepStream configuration file? (See the sketch after this list for pulling the served config from a running server.)
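If the Triton server is already running, the configuration it is actually serving for the model can also be retrieved over gRPC with the tritonclient package. A minimal sketch, assuming the default gRPC endpoint at localhost:8001 and the model name yolov7 seen in the log:

```
import tritonclient.grpc as grpcclient

# Connect to the Triton gRPC endpoint (8001 is Triton's default gRPC port).
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Tensor names, datatypes and shapes as Triton reports them.
print(client.get_model_metadata("yolov7"))

# The model configuration Triton resolved from config.pbtxt.
print(client.get_model_config("yolov7"))
```

Any output shape reported here with a -1 in a non-batch position is the likely source of the wildcard-dims assertion above.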

I will check the above repo and get back to you.

DeepStream 6.1

I generally use deepstream_python_apps/apps/deepstream-ssd-parser at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub as the base code to work from.
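Unrelated to the crash, the PyGIDeprecationWarning in the log comes from that base app still creating the loop with GObject.MainLoop; the warning itself points at the replacement. A minimal sketch of the change in yolov7.py:

```
from gi.repository import GLib

# GLib.MainLoop replaces the deprecated GObject.MainLoop()
loop = GLib.MainLoop()
# ... build, link and start the pipeline as before ...
loop.run()
```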

Is this still an issue that needs support? Thanks

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.