Module 'tensorrt' has no attribute 'Logger'

Description

Running the same code in the interactive Python interpreter works:
[screenshot: import tensorrt and trt.Logger succeed in the interactive interpreter]
but when running it from a Python script:

import tensorrt as trt
logger = trt.Logger(trt.Logger.WARNING)

AttributeError: module 'tensorrt' has no attribute 'Logger'

Environment

TensorRT Version: 8.4.1-1+cuda11.4
GPU Type: Jetson Orin AGX
Nvidia Driver Version:
CUDA Version: 11.4
CUDNN Version:
Operating System + Version: Jetpack 5.0.2
Python Version (if applicable): 3.8
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.13-py3

Steps To Reproduce

Run the following in Python:

import tensorrt as trt
import pycuda
import pycuda.autoinit
import pycuda.driver as cuda
import numpy as np
import cv2

if __name__ == "__main__":
    logger = trt.Logger(trt.Logger.WARNING)
    trt.init_libnvinfer_plugins(logger,'')

Hi,
Please refer to the installation steps from the link below in case you are missing anything.

Also, we suggest using the TensorRT NGC containers to avoid any system-dependency issues.

Thanks!

This is the output when I run the TensorRT container:

=====================
== NVIDIA TensorRT ==
=====================

NVIDIA Release 22.07 (build 40077977)
NVIDIA TensorRT Version 8.4.1
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Container image Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

https://developer.nvidia.com/tensorrt

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh.  To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.
WARNING: Detected NVIDIA Orin GPU, which is not yet supported in this version of the container
ERROR: No supported GPU(s) detected to run this container

Failed to detect NVIDIA driver version.

print(dir(trt)) returns:

On the CLI:

['ActivationType', 'AllocatorFlag', 'Builder', 'BuilderFlag', 'CURDIR', 'CaffeParser', 'CalibrationAlgoType', 'DataType', 'DeviceType', 'Dims', 'Dims2', 'Dims3', 'Dims4', 'DimsHW', 'ElementWiseOperation', 'EngineCapability', 'EngineInspector', 'ErrorCode', 'ErrorCodeTRT', 'FallbackString', 'FieldCollection', 'FieldMap', 'FieldType', 'FillOperation', 'GatherMode', 'IActivationLayer', 'IAlgorithm', 'IAlgorithmContext', 'IAlgorithmIOInfo', 'IAlgorithmSelector', 'IAlgorithmVariant', 'IAssertionLayer', 'IBlobNameToTensor', 'IBuilderConfig', 'ICaffePluginFactoryV2', 'IConcatenationLayer', 'IConditionLayer', 'IConstantLayer', 'IConvolutionLayer', 'ICudaEngine', 'IDeconvolutionLayer', 'IDequantizeLayer', 'IEinsumLayer', 'IElementWiseLayer', 'IErrorRecorder', 'IExecutionContext', 'IFillLayer', 'IFullyConnectedLayer', 'IGatherLayer', 'IGpuAllocator', 'IHostMemory', 'IIdentityLayer', 'IIfConditional', 'IIfConditionalBoundaryLayer', 'IIfConditionalInputLayer', 'IIfConditionalOutputLayer', 'IInt8Calibrator', 'IInt8EntropyCalibrator', 'IInt8EntropyCalibrator2', 'IInt8LegacyCalibrator', 'IInt8MinMaxCalibrator', 'IIteratorLayer', 'ILRNLayer', 'ILayer', 'ILogger', 'ILoop', 'ILoopBoundaryLayer', 'ILoopOutputLayer', 'IMatrixMultiplyLayer', 'INetworkDefinition', 'IOptimizationProfile', 'IPaddingLayer', 'IParametricReLULayer', 'IPluginCreator', 'IPluginRegistry', 'IPluginV2', 'IPluginV2Ext', 'IPluginV2Layer', 'IPoolingLayer', 'IProfiler', 'IQuantizeLayer', 'IRNNv2Layer', 'IRaggedSoftMaxLayer', 'IRecurrenceLayer', 'IReduceLayer', 'IResizeLayer', 'IScaleLayer', 'IScatterLayer', 'ISelectLayer', 'IShapeLayer', 'IShuffleLayer', 'ISliceLayer', 'ISoftMaxLayer', 'ITensor', 'ITimingCache', 'ITopKLayer', 'ITripLimitLayer', 'IUnaryLayer', 'LayerInformationFormat', 'LayerType', 'Logger', 'LoopOutput', 'MatrixOperation', 'MemoryPoolType', 'NetworkDefinitionCreationFlag', 'NodeIndices', 'OnnxParser', 'PaddingMode', 'ParserError', 'Permutation', 'PluginField', 'PluginFieldCollection', 
'PluginFieldCollection_', 'PluginFieldType', 'PoolingType', 'Profiler', 'ProfilingVerbosity', 'QuantizationFlag', 'RNNDirection', 'RNNGateType', 'RNNInputMode', 'RNNOperation', 'ReduceOperation', 'Refitter', 'ResizeCoordinateTransformation', 'ResizeMode', 'ResizeRoundMode', 'ResizeSelector', 'Runtime', 'ScaleMode', 'ScatterMode', 'SliceMode', 'SubGraphCollection', 'TacticSource', 'TensorFormat', 'TensorLocation', 'TopKOperation', 'TripLimit', 'UffInputOrder', 'UffParser', 'UnaryOperation', 'Weights', 'WeightsRole', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', '_itemsize', 'attr', 'bool', 'common_enter', 'common_exit', 'ctypes', 'find_lib', 'float16', 'float32', 'get_builder_plugin_registry', 'get_nv_onnx_parser_version', 'get_plugin_registry', 'glob', 'init_libnvinfer_plugins', 'int32', 'int8', 'lib', 'nptype', 'os', 'shutdown_protobuf_library', 'sys', 'tensorrt', 'try_load', 'value', 'volume', 'warnings']

In the Python script:
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__']

It seems that if I create the Python script in /
it works as intended, but when I create it in the host directory that I mounted into the container, the module does not import correctly…
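The near-empty dir() output (only dunder attributes) is typical of Python importing the wrong module, for example a stray tensorrt.py file or tensorrt/ folder in the mounted host directory shadowing the installed package, since the script's own directory sits first on sys.path. A small hedged helper for checking this (the locate_module name is mine, not part of TensorRT; run it with "tensorrt" inside the container):

```python
import importlib.util
import sys


def locate_module(name):
    """Return the path Python would load `name` from, or None if not found.

    A result inside the script's directory (sys.path[0]) rather than
    site-packages/dist-packages suggests a local file or folder is
    shadowing the installed package.
    """
    spec = importlib.util.find_spec(name)
    if spec is None:
        return None
    # Regular modules report their file via `origin`; namespace packages
    # expose their directories via `submodule_search_locations` instead.
    if spec.origin is not None:
        return spec.origin
    locations = spec.submodule_search_locations
    return list(locations)[0] if locations else None


if __name__ == "__main__":
    # Inside the container, check locate_module("tensorrt"); a path under
    # the mounted host directory would confirm the shadowing hypothesis.
    print("json resolves to:", locate_module("json"))  # stdlib example
    print("first sys.path entry (script dir):", sys.path[0])
```

If the reported path points into the mounted directory, renaming or removing the conflicting file/folder should let the real package import again.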

Hi,

Could you please try the latest TensorRT NGC container and let us know if you still face this issue?
Please also share the command you're using to run the container.

Thank you.