TensorRT: I cannot run the Python samples, but the C++ samples work

  1. When I run the sample named “network_api_pytorch_mnist” with python sample.py -d /home/myusername/TensorRT-5.0.2.6/python/data, it returns “Segmentation fault”. I traced the problem to the line ‘import model’.

  2. When I run other Python samples such as ‘end_to_end_tensorflow_mnist’, they fail with the following error:

[TensorRT] ERROR: runtime.cpp(24) - Cuda Error in allocate: 2
[TensorRT] ERROR: runtime.cpp(24) - Cuda Error in allocate: 2
Traceback(most recent call last):
  File "sample.py", line 110, in <module>
    main()
  File "sample.py", line 96, in main
    with build_engine(model_file) as engine:
AttributeError: __exit__

I checked the problem and found that the error occurs in the “builder.build_cuda_engine()” method.
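
If I understand it correctly, build_cuda_engine() returns None when the build fails (here, after the CUDA allocation error), so the with statement has no context manager to work with, which would explain the AttributeError. A rough sketch of a guard; build_engine and model_file are the names from the sample itself:

engine = build_engine(model_file)  # the sample's own helper around builder.build_cuda_engine()
if engine is None:
    # build_cuda_engine() returns None on failure, e.g. after the CUDA
    # out-of-memory error above; using None in a with statement raises AttributeError.
    raise RuntimeError("Engine build failed; see the [TensorRT] ERROR messages above")
with engine:
    pass  # run inference as in the original sample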
How can I solve these two problems?
Please help. TAT

Hello,

CUDA Error 2 indicates the API call failed because it was unable to allocate enough memory to perform the requested operation. What type of GPU are you using?
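
If the device is low on memory, one thing to try is lowering the amount of scratch memory the builder is allowed to allocate. Below is a minimal sketch against the TensorRT 5 Python API; the input/output tensor names are the ones used by the end_to_end_tensorflow_mnist sample and may need adjusting for your model:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(model_file):
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
        # Cap the scratch memory the builder may allocate during optimization.
        builder.max_workspace_size = 1 << 20  # 1 MiB, much lower than the sample's default
        builder.max_batch_size = 1
        parser.register_input("Placeholder", (1, 28, 28))
        parser.register_output("fc2/Relu")
        parser.parse(model_file, network)
        engine = builder.build_cuda_engine(network)
        if engine is None:
            raise RuntimeError("build_cuda_engine returned None; check the [TensorRT] ERROR log")
        return engine

You can also check how much free device memory you have with nvidia-smi before running the sample.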

I also recommend the TensorRT containers from NVIDIA GPU Cloud https://www.nvidia.com/en-us/gpu-cloud/ , which are designed to remove many of the library and configuration dependencies on the host.