Hardware Platform (Jetson / GPU) = Jetson Nano
DeepStream Version = 6.0.1
JetPack Version (valid for Jetson only) = 4.6.1
TensorRT Version = 8.2.1-1+cuda10.2
Python version = 3.6.9
The standard Python DeepStream app runs fine with the standard models,
but when I try to use it with a custom model I get an error.
terminalOutput.txt (5.0 KB)
Please check the config file as well
dstest_segmentation_config_custom.txt (3.3 KB)
In the log there is an error: Assertion failed: creator && "Plugin not found, are the plugin name and version correct?". Did you use a TensorRT plugin in your model? If so, please set the LD_PRELOAD value.
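If a custom TensorRT plugin is involved, one way to set LD_PRELOAD is to export it before launching the app. A minimal sketch, assuming a hypothetical plugin library path (replace it with the .so that actually implements the missing plugin):

```shell
# Hypothetical path: point this at your custom TensorRT plugin library.
export LD_PRELOAD=/opt/custom/libnvds_roialign_plugin.so
# Then launch the DeepStream Python app as usual, e.g.:
#   python3 deepstream_segmentation.py dstest_segmentation_config_custom.txt <stream>
```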
No, I didn’t use it.
I am also getting the same error with the model that you provided.
Is there a problem in my config file?
Is there any sample app in the DeepStream Python bindings that uses an ONNX model?
For an ONNX model sample, please refer to this link.
I want an ONNX sample for deepstream-python-apps. I tried changing the config file
I shared with you earlier in this post, but I am not able to run the ONNX file. The comments in the config file mention that an ONNX file can be used as well, so I need a sample app in deepstream-python-apps for reference.
If you are using an ONNX model, you need to apply the following settings.
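For reference, a minimal sketch of the relevant gst-nvinfer [property] group for an ONNX segmentation model; the file names below are placeholders, not the actual files from this thread:

```ini
[property]
gpu-id=0
# Point nvinfer at the ONNX file; TensorRT builds an engine from it on first run.
onnx-file=model.onnx
# Serialized engine written after the first build (placeholder name).
model-engine-file=model.onnx_b1_gpu0_fp16.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# 2 = segmentation network
network-type=2
num-detected-classes=4
```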
Please check the config file I shared at the start of my question; I have already implemented those settings, but it is not running. That is why I posted the question in the forum, asking whether I have done something wrong.
There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks
please refer to this topic.
The error “While parsing node number 340 [RoiAlign → “/1/RoiAlign_output_0”]” occurs because TensorRT 8.2 does not support the RoiAlign op.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.