Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
Is there any way to use a Re-ID model (.pth) trained in PyTorch with DeepSort?
Converting mars-small128.pb to mars-small128.uff has worked well.
However, the same conversion fails for my .pth model.
I get an error when I convert .pth → .onnx → .pb and then try to convert the .pb to .uff.
The script I used for the conversion is below.
/opt/nvidia/deepstream/deepstream-6.0/sources/tracker_DeepSORT/convert.py
When I do the conversion, I get the following error.
root@56df9a915406:/home/develop/DetectorDeepStream/models# python3 /opt/nvidia/deepstream/deepstream-6.0/sources/tracker_DeepSORT/convert.py saved_model.pb
Traceback (most recent call last):
File "/opt/nvidia/deepstream/deepstream-6.0/sources/tracker_DeepSORT/convert.py", line 27, in <module>
dynamic_graph = gs.DynamicGraph(filename_pb)
File "/usr/lib/python3.6/dist-packages/graphsurgeon/StaticGraph.py", line 79, in __init__
self.read(graphdef)
File "/usr/lib/python3.6/dist-packages/graphsurgeon/StaticGraph.py", line 173, in read
self._internal_graphdef.ParseFromString(frozen_pb.read())
google.protobuf.message.DecodeError: Error parsing message with type 'tensorflow.GraphDef'
Am I doing the conversion incorrectly?