Re-identification

I am new to DeepStream. I am using the devel server with a container and I want to use the re-identification model, so I downloaded it, unzipped it, and tried to use the ONNX file in a Python script, but:

Traceback (most recent call last):
  File "reid.py", line 9, in <module>
    onnx_session = onnxruntime.InferenceSession(model_path)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 452, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from resnet50_market1501_aicity156.onnx failed:Protobuf parsing failed.

The log you posted seems unrelated to DeepStream. Can you share more context about the issue?

I'm having the same issue, and you're right that it's not directly related to DeepStream.

I’m simply trying to run inference using the model provided here https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/reidentificationnet (same model as in this post) using the command

onnx_session = onnxruntime.InferenceSession(model_path)

The docs about the model mention a model load key

"The model is encrypted and can be decrypted with the following key:

  • Model load key: nvidia_tao

Please make sure to use this as the key for all TAO commands that require a model load key."

Is this related somehow?

"Model load key" is for etlt/tlt models, which are encrypted; the key is needed when using such a model. It's not needed for an ONNX model.
Can you share your code to reproduce the issue?
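For reference, if you do want to use the .etlt directly in DeepStream, the key goes into the Gst-nvinfer config file rather than into Python code. A rough sketch (the file name here is only an assumption based on this thread):

tlt-encoded-model=resnet50_market1501_aicity156.etlt
tlt-model-key=nvidia_tao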

I tried to use the etlt, but I have an issue with the engine file. I tried generating it from the ONNX, but it still causes an issue.

I have a problem in general with models in DeepStream. I download them from NGC, but there is always an issue, whether ONNX or etlt. Is there a guide for this specific part, using models in DeepStream? Thank you in advance.

Normally you can use either an ONNX model or an etlt model; you may check the usage of the samples in the folder /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps.
Maybe you can provide the full log and the steps to reproduce the issue you met, and let's check what the problem is.
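As a rough illustration (file names are placeholders), an ONNX model is referenced in the nvinfer config with the onnx-file key; the engine path is optional, since nvinfer serializes an engine on first run if the file does not exist yet:

onnx-file=resnet50_market1501_aicity156.onnx
model-engine-file=resnet50_market1501_aicity156.onnx_b1_gpu0_fp16.engine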

This is what happens when I try to run any Python binding app or even the sample apps: python3 deepstream_test_1.py reta.mp4
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file reta.mp4
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

and it's stuck like this.


In the config file it's:
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin

But in the repo, /resnet10.caffemodel_b1_gpu0_int8.engine doesn't exist. I know you can build it, but I want to know the best practices here.

The engine file is created from the model file if the program can run correctly.
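If you prefer to build the engine yourself instead of letting nvinfer generate it, trtexec can serialize one from an ONNX model, for example (paths are placeholders):

trtexec --onnx=resnet50_market1501_aicity156.onnx --saveEngine=resnet50_market1501_aicity156.engine --fp16

and then point model-engine-file at the result.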

deepstream_test1.py can only accept an h264 file; you may refer to the description in the README.

And I don't see the relation between "python3 deepstream_test1.py reta.mp4" and the code below. Can you clarify?

Here is the code to reproduce the issue

import onnxruntime
model_path = "resnet50_market1501_aicity156.onnx"
onnx_session = onnxruntime.InferenceSession(model_path)

The model was downloaded via

wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/tao/reidentificationnet/versions/deployable_v1.1/zip -O reidentificationnet_deployable_v1.1.zip

the model’s md5sum is af1c079ac1e8e068a43492d4d1dc5854
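To rule out a corrupted download, the checksum can be recomputed with a short Python snippet (file name assumed to match the download above):

import hashlib

with open("resnet50_market1501_aicity156.onnx", "rb") as f:
    print(hashlib.md5(f.read()).hexdigest())  # expected: af1c079ac1e8e068a43492d4d1dc5854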

And the full log

---------------------------------------------------------------------------
InvalidProtobuf                           Traceback (most recent call last)
Cell In[11], line 1
----> 1 session = ort.InferenceSession(model_path)

File ~/.local/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:419, in InferenceSession.__init__(self, path_or_bytes, sess_options, providers, provider_options, **kwargs)
    416 disabled_optimizers = kwargs["disabled_optimizers"] if "disabled_optimizers" in kwargs else None
    418 try:
--> 419     self._create_inference_session(providers, provider_options, disabled_optimizers)
    420 except (ValueError, RuntimeError) as e:
    421     if self._enable_fallback:

File ~/.local/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:452, in InferenceSession._create_inference_session(self, providers, provider_options, disabled_optimizers)
    450 session_options = self._sess_options if self._sess_options else C.get_default_session_options()
    451 if self._model_path:
--> 452     sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    453 else:
    454     sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)

InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from resnet50_market1501_aicity156.onnx failed:Protobuf parsing failed.

Moving to TAO forum for better support.

Hi,
After checking, unfortunately this ONNX file is actually an .etlt file. The naming is not correct on the NGC model card page.
I decrypted it and uploaded it here.
resnet50_market1501_aicity156_decode.onnx (91.9 MB)
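As a side note, a quick way to check whether a file is a parseable ONNX model at all (and not, say, an encrypted .etlt renamed to .onnx) is to load it with the onnx package; anything that is not a valid ONNX protobuf fails at this step. A minimal sketch:

import onnx

try:
    model = onnx.load("resnet50_market1501_aicity156.onnx")
    onnx.checker.check_model(model)
    print("valid ONNX model")
except Exception as e:
    print("not a valid ONNX file:", e)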

@yingliu Thanks a lot! Now I can load the model without any errors. One more question if you don't mind: what steps did you take to decode the etlt file and convert it to ONNX?

Please refer to Fpenet retraining output file onnx but deepstream is using tlt - #12 by Morganh.

I still have the same issue even with 'sample_720p.h264'; it stops at creating the pipeline.
About this: it is not directly about the Python test1, but I'm trying to build a re-identification application using DeepStream and I had issues building the model from the ONNX file:
onnx_session = onnxruntime.InferenceSession(model_path)

Please follow official github https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/apps/tao_others/README.md to get started for running re-id network with deepstream.

Thank you. About the Python binding test apps that stop after "Starting pipeline": is there something missing? I am using the Triton devel container and cloned the Python apps repo into 'deepstream-6.1/sources/apps/deepstream_python_apps/apps/deepstream-test2'.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please try to run with https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps (sample apps to demonstrate how to deploy models trained with TAO on DeepStream). For this re-id model, as mentioned in the TAO user guide (ReIdentificationNet - NVIDIA Docs), you can also run with https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps (sample app code for deploying TAO Toolkit trained models to Triton).

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.