How to use original Re-ID .pth models in DeepSort

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Is there any way to use a Re-ID model (.pth) trained in PyTorch with DeepSort?
Converting mars-small128.pb to mars-small128.uff has worked well.
However, the same flow does not work for my own model: I convert .pth → .onnx → .pb, and then get an error when converting the .pb to .uff.
The script I used for the .pb → .uff conversion is below.

/opt/nvidia/deepstream/deepstream-6.0/sources/tracker_DeepSORT/convert.py
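
For reference, the .pth → .onnx step looks roughly like this. This is only a sketch: MyReIDNet, the checkpoint name, and the 1x3x128x64 input shape are placeholders for my setup, not something from the DeepStream sample.

import torch

# Placeholder: your own Re-ID network class and trained weights
model = MyReIDNet()
model.load_state_dict(torch.load("my_reid.pth", map_location="cpu"))
model.eval()

# Dummy input in NCHW; adjust to your model's expected crop size
dummy = torch.randn(1, 3, 128, 64)

torch.onnx.export(
    model, dummy, "my_reid.onnx",
    input_names=["images"],    # should match inputBlobName in the tracker config
    output_names=["features"], # should match outputBlobName in the tracker config
    opset_version=11,
)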

When I do the conversion, I get the following error.

root@56df9a915406:/home/develop/DetectorDeepStream/models# python3 /opt/nvidia/deepstream/deepstream-6.0/sources/tracker_DeepSORT/convert.py saved_model.pb
Traceback (most recent call last):
File "/opt/nvidia/deepstream/deepstream-6.0/sources/tracker_DeepSORT/convert.py", line 27, in <module>
dynamic_graph = gs.DynamicGraph(filename_pb)
File "/usr/lib/python3.6/dist-packages/graphsurgeon/StaticGraph.py", line 79, in __init__
self.read(graphdef)
File "/usr/lib/python3.6/dist-packages/graphsurgeon/StaticGraph.py", line 173, in read
self._internal_graphdef.ParseFromString(frozen_pb.read())
google.protobuf.message.DecodeError: Error parsing message with type 'tensorflow.GraphDef'

Am I doing the conversion the wrong way?
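
One thing I suspect from the message: graphsurgeon tries to parse the file as a frozen tensorflow.GraphDef, but a saved_model.pb written by TF2 is a SavedModel protobuf, which would explain the DecodeError. If that is the cause, the graph would need to be frozen first. A minimal sketch, assuming the model was exported with a standard serving_default signature (untested on my model):

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Load the SavedModel directory (not the saved_model.pb file itself)
loaded = tf.saved_model.load("saved_model_dir")
concrete = loaded.signatures["serving_default"]

# Inline all variables as constants to obtain a frozen GraphDef
frozen = convert_variables_to_constants_v2(concrete)
tf.io.write_graph(frozen.graph, ".", "frozen.pb", as_text=False)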

Hi @yuma.oyama ,
We are checking this and will get back to you later.

Hi @yuma.oyama ,

Could you just build a TensorRT engine with /usr/src/tensorrt/bin/trtexec inside the DeepStream docker container, and then point the DeepSort tracker config at that TensorRT engine?
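
Roughly like this, inside the container (file names and shapes are only examples for a 128x64 Re-ID input; adjust them to your model):

/usr/src/tensorrt/bin/trtexec --onnx=my_reid.onnx --saveEngine=my_reid.engine --minShapes=images:1x3x128x64 --optShapes=images:100x3x128x64 --maxShapes=images:100x3x128x64

The max batch (100 here) should line up with the batchSize in the ReID section of the tracker config; the --minShapes/--optShapes/--maxShapes options are only needed if the ONNX model has a dynamic batch dimension.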

@mchi
Thank you for the reply.

Could you just build a TensorRT engine with /usr/src/tensorrt/bin/trtexec inside the DeepStream docker container, and then point the DeepSort tracker config at that TensorRT engine?

Does this mean I can set a .trt engine file path in modelEngineFile?

ReID:
reidType: 1 # the type of reid among { DUMMY=0, DEEP=1 }
batchSize: 100 # batch size of reid network
workspaceSize: 1000 # workspace size to be used by reid engine, in MB
reidFeatureSize: 128 # size of reid feature
reidHistorySize: 100 # max number of reid features kept for one object
inferDims: [128, 64, 3] # reid network input dimension CHW or HWC based on inputOrder
inputOrder: 1 # reid network input order among { NCHW=0, NHWC=1 }
colorFormat: 1 # reid network input color format among {RGB=0, BGR=1 }
networkMode: 0 # reid network inference precision mode among {fp32=0, fp16=1, int8=2 }
offsets: [0.0, 0.0, 0.0] # array of values to be subtracted from each input channel, with length equal to number of channels
netScaleFactor: 1.0 # scaling factor for reid network input after subtracting offsets
inputBlobName: "images" # reid network input layer name
outputBlobName: "features" # reid network output layer name
uffFile: "models/mars-small128.uff" # absolute path to reid network uff model
modelEngineFile: "models/mars-small128.uff_b100_gpu0_fp32.engine" # engine file path
keepAspc: 1 # whether to keep aspect ratio when resizing input objects for reid

Yes, that is what I mean. Thanks!
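
For example, point modelEngineFile at the engine built by trtexec (the file name below is just an example):

modelEngineFile: "models/my_reid.engine" # engine file path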
