Hi Nvidia team,
I am using:
Ubuntu 16.04 with RTX-2080-TI.
Driver Version: 410.78
CUDA version: V10.0.130
CUDNN version: 7.5.0
Python: 2.7.12
Tensorflow version: 1.13.1
(from "$ dpkg -l | grep TensorRT")
TensorRT version: 5.1.2.2-1+cuda10.0
uff-converter-tf: 5.1.2-1+cuda10.0
I followed the steps shared in your TensorRT samples document:
(link preview: "Samples Support Guide — an overview of the supported NVIDIA TensorRT samples included on GitHub and in the product package")
When I try to run the "yolov3_to_onnx.py" script,
I encounter this error:
Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 760, in <module>
    main()
  File "yolov3_to_onnx.py", line 753, in main
    onnx.checker.check_model(yolov3_model_def)
  File "/usr/local/lib/python2.7/dist-packages/onnx/checker.py", line 86, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Op registered for Upsample is deprecated in domain_version of 10
==> Context: Bad node spec: input: "085_convolutional_lrelu" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING } attribute { name: "scales" floats: 1 floats: 1 floats: 2 floats: 2 type: FLOATS }
The full output log file can be found here:
https://drive.google.com/open?id=1sl1RY7Q4ExMnhdZYow_sZFCVlSM5HY3I
Any idea why this is happening?
Thanks,
Bental
I have the same problem, but I did not use TensorFlow.
I think the problem is caused by ONNX, so I opened an issue:
(GitHub issue, opened 07:35 AM and closed 08:57 AM, 25 May 19 UTC)
onnx.onnx_cpp2py_export.checker.ValidationError: Op registered for Upsample is deprecated in domain_version of 1
Can anybody help me?
Thanks!
But that topic didn't get any help.
I solved the problem by installing an older version of onnx:
$ pip uninstall onnx; pip install onnx==1.3
On a Xavier NX on JetPack 4.5.1 I needed:
$ pip uninstall onnx; pip install onnx==1.6
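For context: ONNX deprecated the standalone Upsample op at opset 10 (its functionality was later folded into Resize), so onnx packages that enforce newer opsets reject the graph the sample emits. Below is a minimal sketch of a pre-flight version guard; the pins (1.3 on the desktop setup, 1.6 on the Xavier NX) come from this thread, not from any official requirements, and the helper name is made up for illustration:

```python
def onnx_pin_ok(installed, pinned):
    """Return True if the installed onnx version is no newer than the pin.

    Compares only major.minor, matching how the pins in this thread
    (onnx==1.3, onnx==1.6) were specified.
    """
    def major_minor(v):
        parts = v.split(".")
        return (int(parts[0]), int(parts[1]) if len(parts) > 1 else 0)
    return major_minor(installed) <= major_minor(pinned)

# The versions reported to work in this thread:
print(onnx_pin_ok("1.3.0", "1.3"))   # desktop setup -> True
print(onnx_pin_ok("1.6.0", "1.6"))   # Xavier NX / JetPack 4.5.1 -> True
print(onnx_pin_ok("1.9.0", "1.3"))   # too new: checker rejects Upsample -> False
```

You could call such a guard at the top of yolov3_to_onnx.py with `onnx.__version__` and print a pointer to the pip pin instead of letting the checker fail with the opaque ValidationError above.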
NVES
June 29, 2021, 5:21am
Hi,
Please refer to the installation steps from the link below in case you are missing anything.
However, the suggested approach is to use TRT NGC containers to avoid any system-dependency issues.
To run the Python samples, make sure the TRT Python packages are installed when using the NGC container:
/opt/tensorrt/python/python_setup.sh
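As a sketch, a typical way to start such a container and run that setup script looks like the following; the release tag is an assumption for illustration, not one confirmed in this thread, so pick a tag matching your driver and CUDA version:

```shell
# Pull and start a TensorRT NGC container (the tag here is an assumption).
docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:21.06-py3

# Inside the container, install the TRT Python packages needed by the samples:
/opt/tensorrt/python/python_setup.sh
```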
If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
Thanks!