Nvinferserver with TensorRT OSS version

• Hardware: NVIDIA Quadro P4000 GPU
• Software: DeepStream 5.1 container

Hello,
I have a face recognition model, glintr100.onnx, from the InsightFace-REST repo. I want to create a TensorRT engine from the ONNX file; however, I get this error: “Assertion failed: dims.nbDims == 4 || dims.nbDims == 5”. I solved this by building TensorRT OSS 7.2.1 (changing some code using this pull request), which corresponds to DeepStream 5.1. However, I cannot use this engine with nvinferserver.
I think I need to build Triton from source (with the new TensorRT); however, I don’t know how to do that.
Please help me solve this issue.

What error are you getting, and could you share the configs with us?

@bcao, as I described above, I tried to convert glintr100.onnx (converted to dynamic batch using change_batch_onnx.py) to TensorRT using this command:

/usr/src/tensorrt/bin/trtexec --explicitBatch \
  --shapes=input.1:6x3x112x112 \
  --optShapes=input.1:2x3x112x112 \
  --minShapes=input.1:1x3x112x112 \
  --maxShapes=input.1:12x3x112x112 \
  --verbose \
  --onnx=/opt/nvidia/deepstream/deepstream-5.1/samples/deepstream/tris_repo/glint/glintr100_change_batch.onnx \
  --saveEngine=/opt/nvidia/deepstream/deepstream-5.1/samples/deepstream/tris_repo/glint/1/glintr100_change_batch.trt

And I received the error “Assertion failed: dims.nbDims == 4 || dims.nbDims == 5”.

I fixed this by building TensorRT OSS 7.2.1 (I changed some code based on the pull request that fixes the issue) and replaced libnvonnxparser.so.7.2.1 and libnvinfer_plugin.so.7.2.1 in /usr/lib/x86_64-linux-gnu with the files built from TensorRT OSS.
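For anyone else hitting this, the rebuild followed the usual TensorRT OSS recipe. A rough sketch (the branch, the GPU_ARCHS value of 61 for the Quadro P4000, and the output paths are my assumptions and may need adjusting for other setups):

# build the ONNX parser and plugin libraries from TensorRT OSS 7.2
git clone -b release/7.2 https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS="61" -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc) nvonnxparser nvinfer_plugin

# back up the stock libraries, then replace them with the rebuilt ones
cp /usr/lib/x86_64-linux-gnu/libnvonnxparser.so.7.2.1 /usr/lib/x86_64-linux-gnu/libnvonnxparser.so.7.2.1.bak
cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.1 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.1.bak
cp out/libnvonnxparser.so.7.2.* out/libnvinfer_plugin.so.7.2.* /usr/lib/x86_64-linux-gnu/
ldconfig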

I was able to build the TensorRT engine and run it with nvinfer; however, when I run this engine using nvinferserver I get the following error:

python3: gstnvinferserver_impl.cpp:1056: NvDsInferStatus gstnvinferserver::GstNvInferServerImpl::handleOutputTensors(gstnvinferserver::RequestBuffer*): Assertion `isCpuMem(desc.memType)' failed.
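The assertion suggests nvinferserver expected the output tensor in CPU memory but received a different memory type. Since the rebuilt parser/plugin libraries also have to be the copies the process actually loads, one sanity check (assuming the pipeline runs under python3, as the assertion above indicates) is:

# check which TensorRT libraries the loader resolves to
ldconfig -p | grep -E 'libnvonnxparser|libnvinfer_plugin'

# check which copies the running pipeline process has actually mapped
awk '/libnvonnxparser|libnvinfer_plugin/ {print $NF}' /proc/$(pidof -s python3)/maps | sort -u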

change_batch_onnx.py (3.4 KB)
config.pbtxt (636 Bytes)
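For completeness, the attached config.pbtxt follows the standard Triton layout for a TensorRT plan. A sketch of its shape (the output tensor name and the 512-dim embedding size are my assumptions here; the attached file is authoritative):

name: "glint"
platform: "tensorrt_plan"
default_model_filename: "glintr100_change_batch.trt"
max_batch_size: 12
input [
  {
    name: "input.1"
    data_type: TYPE_FP32
    dims: [ 3, 112, 112 ]
  }
]
output [
  {
    name: "embedding"  # hypothetical; use the engine's actual output tensor name
    data_type: TYPE_FP32
    dims: [ 512 ]
  }
]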

Could you share all the configs with us?

@bcao, I work on a closed-source project, so I cannot provide more detailed information. Currently, I run face detection (fd) with nvinferserver and face recognition (fr) with nvinfer, and I have given up on building nvinferserver from source in the DeepStream container. Anyway, thanks for your time!

Cool, please feel free to create a new topic if you need further help.