Unable to import torch inside deepstream:5.1-21.02-triton container

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: deepstream:5.1-21.02-triton
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only): Driver Version: 460.32.03
• Issue Type (questions, new requirements, bugs): Issue
• How to reproduce the issue? (This is for bugs. Including which sample app is used, the configuration file content, the command line used, and other details for reproducing): Install torch>=1.7.0 inside the deepstream:5.1-21.02-triton container and try `import torch`.

Getting:

Traceback (most recent call last):
  File "deepstream/deepstream_yolov5/yolov5_main.py", line 535, in <module>
    sys.exit(main(sys.argv))
  File "deepstream/deepstream_yolov5/yolov5_main.py", line 424, in main
    import torch
  File "/usr/local/lib/python3.6/dist-packages/torch/__init__.py", line 196, in <module>
    from torch._C import *
ImportError: /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so: undefined symbol: _ZNK3c104Type14isSubtypeOfExtERKSt10shared_ptrIS0_EPSo

To solve this I ran `export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH`, but I am still getting the same error.

When I overwrote LD_LIBRARY_PATH entirely with `export LD_LIBRARY_PATH=/usr/local/cuda/lib64`, I was able to import torch but got the following error:

Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating Pgie 
 
0:00:00.053752633   151      0x1b7a200 WARN      GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: libtritonserver.so: cannot open shared object file: No such file or directory

(python3:151): GStreamer-WARNING **: 12:19:34.160: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
0:00:00.053921445   151      0x1b7a200 WARN      GST_PLUGIN_LOADING gstplugin.c:1329:gst_plugin_load_by_name: load_plugin error: Opening module failed: libtritonserver.so: cannot open shared object file: No such file or directory
0:00:00.053950290   151      0x1b7a200 WARN      GST_PLUGIN_LOADING gstpluginfeature.c:133:gst_plugin_feature_load: Failed to load plugin containing feature 'nvinferserver'.
0:00:00.053998831   151      0x1b7a200 WARN     GST_ELEMENT_FACTORY gstelementfactory.c:397:gst_element_factory_create:<nvinferserver> loading plugin containing feature primary-inference returned NULL!
 Unable to create pgie 
Creating tiler 
 
Creating nvvidconv 

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
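For anyone hitting the same trade-off: overwriting LD_LIBRARY_PATH drops the directory containing libtritonserver.so from the loader's search path, which is why the plugin then fails to load. A sketch of what I would try instead — the /opt/tritonserver/lib location is my assumption, so locate the real directory first:

```shell
# Locate where libtritonserver.so actually lives inside the container
find /opt /usr -name 'libtritonserver.so' 2>/dev/null || true

# Put the CUDA libs first without dropping the Triton libs
# (/opt/tritonserver/lib is an assumed location -- use the find result above)
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/opt/tritonserver/lib
```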

Hi @abhishekdaryan14,
does this help you?

@mchi I did see this solution, but I want to run the Triton server along with DeepStream. That's why I chose the deepstream:5.1-21.02-triton docker image.

Does nvcr.io/nvidia/deepstream:5.1-21.02-devel have all the required triton-server installations along with deepstream? If so, that would be wonderful.

I am trying to use an ONNX model in the Triton server and use nvinferserver inside DeepStream to run the pipeline.

DeepStream does not include all the Triton server components; the nvinferserver plugin in DeepStream uses Triton as its backend to run inference on the data coming from other GStreamer plugins.
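To illustrate: nvinferserver is pointed at a Triton model repository through its protobuf-text config file. A rough sketch of the DS 5.x format — the model name and repository path are placeholders, not values from your setup:

```
# Rough sketch of an nvinferserver config (DS 5.x protobuf-text format).
# "yolov5" and the repo path below are placeholders.
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    trt_is {
      model_name: "yolov5"
      version: -1
      model_repo {
        root: "/opt/models/triton_model_repo"
        log_level: 2
      }
    }
  }
}
```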

Can current triton in DS meet your requirement?

When I say running the Triton server along with DeepStream, I meant DeepStream using the Triton backend only. Sorry for the miscommunication.

Yes, the current Triton in DS meets my requirements. I am able to run DeepStream with the Triton backend as well. It's just that I need to use PyTorch to do some post-processing on top of the results obtained from the pipeline.

So, there is no other issue for this ticket, right?

@mchi even though the current Triton in DS meets my requirements, I was not able to import and use torch inside deepstream:5.1-21.02-triton. Since I can do that inside nvcr.io/nvidia/deepstream:5.1-21.02-devel, I would still like to know whether nvcr.io/nvidia/deepstream:5.1-21.02-devel comes with the same required Triton environment as the deepstream:5.1-21.02-triton container. If not, what can be done to replicate that environment? I would really appreciate it if you could suggest a way to make this work.

I have provided the solution above.

What do you mean by "required triton environment"? Could you elaborate?

When I say required Triton environment, I mean the reason we use the deepstream:5.1-21.02-triton container in the first place: to use the Triton backend in DS. Does nvcr.io/nvidia/deepstream:5.1-21.02-devel also support running DS with the Triton backend?

As you know, I cannot use the deepstream:5.1-21.02-triton container to run the Triton backend in DS, since I am unable to import torch there. I would like to run DS with the Triton backend inside nvcr.io/nvidia/deepstream:5.1-21.02-devel. Is that possible?

As stated in the docker containers section of NVIDIA DeepStream's documentation, nvcr.io/nvidia/deepstream:5.1-21.02-triton comes with the Triton and DeepStream dependencies preinstalled. That's why I am able to run the Triton backend with DS.

So, the question is can it be done in nvcr.io/nvidia/deepstream:5.1-21.02-devel container as well?

Sorry! I don’t follow your point…
To put it another way: what failure do you hit with the current DS docker? Is it "can't import torch"? What's the problem with the solution in How can i solve pytorch import in DeepStream 5.1(docker) - #7 by mchi?

@mchi, I tried running my pipeline inside the nvcr.io/nvidia/deepstream:5.1-21.02-devel container. I am able to import torch, but I get the following error:

/opt/nvidia/yolov5
Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating Pgie 
 
0:00:00.041191480    50      0x16d4590 WARN     GST_ELEMENT_FACTORY gstelementfactory.c:456:gst_element_factory_make: no such element factory "nvinferserver"!
 Unable to create pgie 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating EGLSink 

Atleast one of the sources is live
Traceback (most recent call last):
  File "deepstream/deepstream_yolov5/yolov5_main.py", line 535, in <module>
    sys.exit(main(sys.argv))
  File "deepstream/deepstream_yolov5/yolov5_main.py", line 458, in main
    pgie.set_property('config-file-path', "deepstream/deepstream_yolov5/deepstream_yolov5_config.txt")
AttributeError: 'NoneType' object has no attribute 'set_property'

It seems I don't have the /lib/gst-plugins/libnvdsgst_inferserver.so plugin inside the nvcr.io/nvidia/deepstream:5.1-21.02-devel container. This is what I was talking about regarding running DS with the Triton backend.
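Incidentally, the AttributeError above is only a secondary symptom: Gst.ElementFactory.make() returned None for "nvinferserver", and the script only failed later at set_property. A small guard in the pipeline code would surface the real failure immediately — a sketch; `make_element` and the message wording are mine, not from the sample app:

```python
def make_element(factory_make, factory_name, element_name):
    """Wrap element creation so a missing plugin raises a clear error
    instead of an AttributeError on the first set_property() call.

    factory_make is normally Gst.ElementFactory.make; it is passed in
    so this guard can be exercised without a GStreamer install."""
    elem = factory_make(factory_name, element_name)
    if elem is None:
        raise RuntimeError(
            "Unable to create '%s': check `gst-inspect-1.0 %s` and that its "
            "shared-library dependencies are on LD_LIBRARY_PATH"
            % (factory_name, factory_name)
        )
    return elem
```

In the sample app this would wrap the existing Gst.ElementFactory.make("nvinferserver", "primary-inference") call.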

Checked this issue: Problem with running centerface model. · Issue #4 · NVIDIA-AI-IOT/deepstream_triton_model_deploy · GitHub

confirmed nvcr.io/nvidia/deepstream:5.1-21.02-devel includes /opt/nvidia/deepstream/deepstream-5.1/lib/gst-plugins/libnvdsgst_inferserver.so

Not sure then why I am getting the above-mentioned error. Any idea?

There is no update from you for a period, assuming this is not an issue any more.
Hence we are closing this topic. If need further support, please open a new one.
Thanks

Do you find /opt/nvidia/deepstream/deepstream-5.1/lib/gst-plugins/libnvdsgst_inferserver.so in your docker?

Could you try this: Troubleshooting — DeepStream 6.1.1 Release documentation?
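i.e. something like the following inside the container — a sketch; the cache directory is GStreamer's default location, and a stale plugin registry cache is a common cause of "no such element factory" even when the .so exists:

```shell
# Verify the plugin library is present, then delete the GStreamer registry
# cache so the plugin list is rebuilt on the next run.
PLUGIN=/opt/nvidia/deepstream/deepstream-5.1/lib/gst-plugins/libnvdsgst_inferserver.so
if [ -e "$PLUGIN" ]; then
  rm -rf ~/.cache/gstreamer-1.0   # default cache location (assumption)
  gst-inspect-1.0 nvinferserver   # should now show the plugin details
else
  echo "plugin not found: $PLUGIN"
fi
```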