PyTorch & torchvision compatibility issue on L4T 35.5.0

L4T 35.5.0
Jetpack 5.1.3

Any idea about this error? How can I fix it?

daniel@daniel-nvidia:~/Work$ yolo track model=yolov8n.engine source=../Videos/Worlds_longest_drone_fpv_one_shot.mp4
WARNING ⚠️ Python>=3.10 is required, but Python==3.8.10 is currently installed
WARNING ⚠️ Unable to automatically guess model task, assuming 'task=detect'. Explicitly define task for your model, i.e. 'task=detect', 'segment', 'classify','pose' or 'obb'.
Ultralytics 8.3.20 🚀 Python-3.8.10 torch-2.1.0a0+41361538.nv23.06 CUDA:0 (Orin, 7451MiB)
Loading yolov8n.engine for TensorRT inference...
[10/24/2024-08:05:05] [TRT] [I] Loaded engine size: 13 MiB
[10/24/2024-08:05:05] [TRT] [W] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[10/24/2024-08:05:07] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +616, GPU +450, now: CPU 1003, GPU 4725 (MiB)
[10/24/2024-08:05:07] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +12, now: CPU 0, GPU 12 (MiB)
[10/24/2024-08:05:07] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 990, GPU 4713 (MiB)
[10/24/2024-08:05:07] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +18, now: CPU 0, GPU 30 (MiB)

/home/daniel/.local/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/daniel/.local/lib/python3.8/site-packages/torchvision/image.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSsb'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
Traceback (most recent call last):
  File "/home/daniel/.local/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/cfg/__init__.py", line 824, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/engine/model.py", line 601, in track
    return self.predict(source=source, stream=stream, **kwargs)
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/engine/model.py", line 554, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/engine/predictor.py", line 183, in predict_cli
    for _ in gen:  # sourcery skip: remove-empty-nested-block, noqa
  File "/home/daniel/.local/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/engine/predictor.py", line 261, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/models/yolo/detect/predict.py", line 25, in postprocess
    preds = ops.non_max_suppression(
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/utils/ops.py", line 292, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "/home/daniel/.local/lib/python3.8/site-packages/torchvision/ops/boxes.py", line 40, in nms
    _assert_has_ops()
  File "/home/daniel/.local/lib/python3.8/site-packages/torchvision/extension.py", line 46, in _assert_has_ops
    raise RuntimeError(
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.

PyTorch & torchvision versions:

daniel@daniel-nvidia:~/Work$ python -c "import torch; print(torch.__version__)"
2.1.0a0+41361538.nv23.06
daniel@daniel-nvidia:~/Work$ python -c "import torchvision; print(torchvision.__version__)"
/home/daniel/.local/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/daniel/.local/lib/python3.8/site-packages/torchvision/image.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSsb'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
0.16.2+c6f3977

PS: the guide I followed is here: NVIDIA Jetson - Ultralytics YOLO Docs
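A quick way to confirm whether the compiled C++ ops load at all (a minimal check with dummy boxes, separate from YOLO):

python3 -c "import torch, torchvision; b = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.]]); s = torch.tensor([0.9, 0.8]); print(torchvision.ops.nms(b, s, 0.5)); print(torchvision.ops.nms(b.cuda(), s.cuda(), 0.5))"

If the extension was built correctly this prints the kept box indices twice (CPU and CUDA). On my setup it raises the same "Couldn't load custom C++ ops" error; if only the second (CUDA) call were to fail, that would suggest the extension loaded but was built without GPU support.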

Hi,

Based on our recommendation, please try TorchVision v0.16.1 for PyTorch v2.1.

Thanks.

I hope it can work on JetPack 5.1.3 (L4T 35.5.0), which runs Ubuntu 20.04 and supports ROS, not ROS 2.

BTW, the above versions are from NVIDIA's compatibility list: PyTorch for Jetson

So I don't understand why it's not working now.

I have tried the versions below for JetPack 5.1.3, but they are not working.

Hi,

How did you install TorchVision? Did you build it from source on Jetson?
Also, please note that you will need to use v0.16.1 instead of v0.16.2.

Thanks.

I just followed the link: NVIDIA Jetson - Ultralytics YOLO Docs

sudo apt install -y libjpeg-dev zlib1g-dev
git clone https://github.com/pytorch/vision torchvision
cd torchvision
git checkout v0.16.2
python3 setup.py install --user

But I hit the error below, so I switched to pip install . instead.

$ python3 setup.py install --user
setup.py:10: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  from pkg_resources import DistributionNotFound, get_distribution, parse_version
Building wheel torchvision-0.16.2+c6f3977
Compiling extensions with following flags:
  FORCE_CUDA: False
  FORCE_MPS: False
  DEBUG: False
  TORCHVISION_USE_PNG: True
  TORCHVISION_USE_JPEG: True
  TORCHVISION_USE_NVJPEG: True
  TORCHVISION_USE_FFMPEG: True
  TORCHVISION_USE_VIDEO_CODEC: True
  NVCC_FLAGS:
Compiling with debug mode OFF
Found PNG library
Building torchvision with PNG image support
  libpng version: 1.6.37
  libpng include path: /usr/include/libpng16
Running build on conda-build: False
Running build on conda: False
Building torchvision with JPEG image support
  libjpeg include path: None
  libjpeg lib path: None
Building torchvision without NVJPEG image support
Building torchvision without ffmpeg support
Building torchvision without video codec support
running install
/home/daniel/.local/lib/python3.8/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!

        ********************************************************************************
        Please avoid running ``setup.py`` directly.
        Instead, use pypa/build, pypa/installer or other
        standards-based tools.

        See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
        ********************************************************************************

!!
  self.initialize_options()
Traceback (most recent call last):
  File "setup.py", line 542, in <module>
    setup(
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/__init__.py", line 117, in setup
    return distutils.core.setup(**attrs)
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 183, in setup
    return run_commands(dist)
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 199, in run_commands
    dist.run_commands()
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 954, in run_commands
    self.run_command(cmd)
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/dist.py", line 991, in run_command
    super().run_command(command)
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 972, in run_command
    cmd_obj.ensure_finalized()
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 111, in ensure_finalized
    self.finalize_options()
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/command/install.py", line 67, in finalize_options
    super().finalize_options()
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/_distutils/command/install.py", line 408, in finalize_options
    'dist_fullname': self.distribution.get_fullname(),
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/_core_metadata.py", line 267, in get_fullname
    return _distribution_fullname(self.get_name(), self.get_version())
  File "/home/daniel/.local/lib/python3.8/site-packages/setuptools/_core_metadata.py", line 285, in _distribution_fullname
    canonicalize_version(version, strip_trailing_zero=False),
TypeError: canonicalize_version() got an unexpected keyword argument 'strip_trailing_zero'
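As a side note, this TypeError looks like a setuptools/packaging mismatch rather than a torchvision problem. My guess (an assumption, not verified on this system) is that upgrading packaging, or pinning setuptools, before rerunning the build would get past it:

pip install -U packaging    # or, alternatively: pip install "setuptools<70"
python3 setup.py install --user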

The YOLO instructions say 0.16.2. I'll try 0.16.1.

0.16.1 doesn’t work either.

daniel@daniel-nvidia:~/Work$ yolo track model=yolov8n.engine source=../Videos/Worlds_longest_drone_fpv_one_shot.mp4
WARNING ⚠️ Python>=3.10 is required, but Python==3.8.10 is currently installed
WARNING ⚠️ Unable to automatically guess model task, assuming 'task=detect'. Explicitly define task for your model, i.e. 'task=detect', 'segment', 'classify','pose' or 'obb'.
Ultralytics 8.3.21 🚀 Python-3.8.10 torch-2.1.0a0+41361538.nv23.06 CUDA:0 (Orin, 7451MiB)
Loading yolov8n.engine for TensorRT inference...
[10/25/2024-13:32:49] [TRT] [I] Loaded engine size: 13 MiB
[10/25/2024-13:32:51] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +616, GPU +769, now: CPU 1003, GPU 3677 (MiB)
[10/25/2024-13:32:51] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +12, now: CPU 0, GPU 12 (MiB)
[10/25/2024-13:32:51] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +27, now: CPU 990, GPU 3692 (MiB)
[10/25/2024-13:32:52] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +18, now: CPU 0, GPU 30 (MiB)

/home/daniel/.local/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/daniel/.local/lib/python3.8/site-packages/torchvision/image.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSsb'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
Traceback (most recent call last):
  File "/home/daniel/.local/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/cfg/__init__.py", line 824, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/engine/model.py", line 601, in track
    return self.predict(source=source, stream=stream, **kwargs)
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/engine/model.py", line 554, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/engine/predictor.py", line 183, in predict_cli
    for _ in gen:  # sourcery skip: remove-empty-nested-block, noqa
  File "/home/daniel/.local/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/engine/predictor.py", line 261, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/models/yolo/detect/predict.py", line 25, in postprocess
    preds = ops.non_max_suppression(
  File "/home/daniel/.local/lib/python3.8/site-packages/ultralytics/utils/ops.py", line 292, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "/home/daniel/.local/lib/python3.8/site-packages/torchvision/ops/boxes.py", line 40, in nms
    _assert_has_ops()
  File "/home/daniel/.local/lib/python3.8/site-packages/torchvision/extension.py", line 46, in _assert_has_ops
    raise RuntimeError(
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.

torch/torchvision versions:

daniel@daniel-nvidia:~/Work$ python -c "import torch; import torchvision; print(f'PyTorch version: {torch.__version__}'); print(f'Torchvision version: {torchvision.__version__}')"

/home/daniel/.local/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/daniel/.local/lib/python3.8/site-packages/torchvision/image.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSsb'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
PyTorch version: 2.1.0a0+41361538.nv23.06
Torchvision version: 0.16.1+fdea156

Hi,

Do you have CUDA preinstalled in your environment?
TorchVision was built without CUDA somehow. Could you install CUDA and try it again?

Compiling extensions with following flags:
  FORCE_CUDA: False

One of our users met a similar issue and fixed it by rebuilding TorchVision 0.16.1.
So the version should be compatible, but you may need a build with GPU support enabled.
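For example, a rebuild that forces the CUDA extensions on could look like this (a sketch only; it assumes CUDA is installed at /usr/local/cuda):

$ cd torchvision
$ export BUILD_VERSION=0.16.1
$ export CUDA_HOME=/usr/local/cuda
$ export FORCE_CUDA=1
$ python3 setup.py install --user

The build log should then report FORCE_CUDA: True instead of False.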

Thanks.

CUDA is installed, I think.

daniel@daniel-nvidia:~$ dpkg-query --show nvidia-jetpack
nvidia-jetpack  5.1.3-b29
daniel@daniel-nvidia:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Sun_Oct_23_22:16:07_PDT_2022
Cuda compilation tools, release 11.4, V11.4.315
Build cuda_11.4.r11.4/compiler.31964100_0
daniel@daniel-nvidia:~$ cd /usr/local/cuda/samples
daniel@daniel-nvidia:/usr/local/cuda/samples$ ./deviceQuery
-bash: ./deviceQuery: No such file or directory
daniel@daniel-nvidia:/usr/local/cuda/samples$ cd bin/aarch64/linux/release
daniel@daniel-nvidia:/usr/local/cuda/samples/bin/aarch64/linux/release$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Orin"
  CUDA Driver Version / Runtime Version          11.4 / 11.4
  CUDA Capability Major/Minor version number:    8.7
  Total amount of global memory:                 7451 MBytes (7813234688 bytes)
  (008) Multiprocessors, (128) CUDA Cores/MP:    1024 CUDA Cores
  GPU Max Clock rate:                            624 MHz (0.62 GHz)
  Memory Clock rate:                             624 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.4, NumDevs = 1
Result = PASS
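deviceQuery passes, so system-level CUDA looks fine. To also confirm that the installed PyTorch wheel itself sees the GPU (a quick sanity check, not from the guide):

python3 -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"

With the NVIDIA wheel this should print True together with the CUDA version it was built against.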

Installing v0.16.0 still doesn't work.

git checkout v0.16.0

Here is the YOLO developers' feedback on this issue: "Jetson Orin Nano JetPack 5.1.3 install latest yolov5, RuntimeError: Couldn't load custom C++ ops · Issue #13392 · ultralytics/yolov5 · GitHub".

And I don't know which step I got wrong; see below:

The only thing I changed is using pip install . instead of python3 setup.py install --user, since that command from the guide failed during installation (see the note after the commands below).

pip uninstall torch torchvision

wget https://developer.download.nvidia.com/compute/redist/jp/v512/pytorch/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl -O torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
pip install torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl


sudo apt install -y libjpeg-dev zlib1g-dev
git clone git@github.com:pytorch/vision.git torchvision
cd torchvision
git checkout v0.16.2
pip install .

pip install numpy==1.23.5
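One thing I am not sure about (just a guess on my side): pip install . builds in an isolated environment by default, so it may compile torchvision against a stock torch pulled from PyPI rather than the NVIDIA wheel, which would explain a CPU-only extension. If that is the cause, reusing the already-installed torch during the build should help:

pip install . --no-build-isolation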

Hi,

If the issue persists, are you able to upgrade your environment to JetPack 6.1?
Our users don't hit such errors when using YOLO on JetPack 6.

Thanks.

No, as I have said, anything newer than JetPack 5.1.3 does NOT support ROS.
So I need YOLO to work on JetPack 5.1.3.

Can you try the steps I did (installing YOLO or YOLOv5 from source), following the guide on the production version of JetPack 5.1.3, and check whether you can replicate the issues I encountered?

The "custom C++ ops" or "torch._custom_ops" refers to custom operations implemented in C++ for PyTorch. Please check here for details: Jetson Orin Nano JetPack 5.1.3 install latest yolov5, RuntimeError: Couldn't load custom C++ ops · Issue #13392 · ultralytics/yolov5 · GitHub.

So is there an incompatibility inside PyTorch itself, or is it something I have missed?
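One more diagnostic I can think of (my own idea, not from the guide): the undefined symbol in the warning comes from libtorch, so listing which torch libraries the extension actually links against might narrow it down:

ldd /home/daniel/.local/lib/python3.8/site-packages/torchvision/image.so | grep -i torch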

@AastaLLL

It appears to me that it's an NVIDIA custom-build issue on JetPack 5 (Production Release), but I don't know how to fix it.

Did you replicate the issues?
Or have all the developers moved from JetPack 5 to JetPack 6, so there is no further support for JetPack 5 problems?

Hi,

Could you check if you are using the correct Ultralytics software?
There are some warnings in your log that seem related to incompatible software:

daniel@daniel-nvidia:~/Work$ yolo track model=yolov8n.engine source=../Videos/Worlds_longest_drone_fpv_one_shot.mp4
WARNING ⚠️ Python>=3.10 is required, but Python==3.8.10 is currently installed

We tested YOLO with JetPack 5 and it can work correctly (yolo predict with yolo11n).
Here are the detailed steps for your reference:

$ wget https://developer.download.nvidia.cn/compute/redist/jp/v512/pytorch/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
$ pip3 install torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl 
$ git clone --branch v0.16.1 https://github.com/pytorch/vision torchvision
$ cd torchvision/
$ export BUILD_VERSION=0.16.1
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libopenblas-dev libavcodec-dev libavformat-dev libswscale-dev
$ sudo apt-get install python3-pip libopenblas-base libopenmpi-dev libomp-dev
$ python3 setup.py install --user
$ pip3 install ultralytics
$ yolo export model=yolo11n.pt format=engine  # creates 'yolo11n.engine'
WARNING ⚠️ TensorRT requires GPU export, automatically assigning device=0
Ultralytics 8.3.27 🚀 Python-3.8.10 torch-2.1.0a0+41361538.nv23.06 CUDA:0 (Xavier, 30991MiB)
YOLO11n summary (fused): 238 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs

PyTorch: starting from 'yolo11n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (5.4 MB)
...
[11/04/2024-07:28:05] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1597, GPU 10616 (MiB)
[11/04/2024-07:28:05] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +0, GPU +17, now: CPU 0, GPU 17 (MiB)
TensorRT: export success ✅ 239.6s, saved as 'yolo11n.engine' (13.5 MB)

Export complete (245.1s)
Results saved to /home/nvidia/topic_310929
Predict:         yolo predict task=detect model=yolo11n.engine imgsz=640  
Validate:        yolo val task=detect model=yolo11n.engine imgsz=640 data=/usr/src/ultralytics/ultralytics/cfg/datasets/coco.yaml  
Visualize:       https://netron.app
💡 Learn more at https://docs.ultralytics.com/modes/export
$ yolo predict model=yolo11n.engine source='https://ultralytics.com/images/bus.jpg'
WARNING ⚠️ Unable to automatically guess model task, assuming 'task=detect'. Explicitly define task for your model, i.e. 'task=detect', 'segment', 'classify','pose' or 'obb'.
Ultralytics 8.3.27 🚀 Python-3.8.10 torch-2.1.0a0+41361538.nv23.06 CUDA:0 (Xavier, 30991MiB)
Loading yolo11n.engine for TensorRT inference...
[11/04/2024-07:28:54] [TRT] [I] Loaded engine size: 13 MiB
[11/04/2024-07:28:56] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +343, GPU +324, now: CPU 690, GPU 8532 (MiB)
[11/04/2024-07:28:56] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +14, now: CPU 0, GPU 14 (MiB)
[11/04/2024-07:28:56] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 677, GPU 8532 (MiB)
[11/04/2024-07:28:56] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +20, now: CPU 0, GPU 34 (MiB)

Downloading https://ultralytics.com/images/bus.jpg to 'bus.jpg'...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 134k/134k [00:00<00:00, 954kB/s]
image 1/1 /home/nvidia/topic_310929/bus.jpg: 640x640 4 persons, 1 bus, 10.3ms
Speed: 9.0ms preprocess, 10.3ms inference, 8.0ms postprocess per image at shape (1, 3, 640, 640)
Results saved to runs/detect/predict
💡 Learn more at https://docs.ultralytics.com/modes/predict

Thanks.

@AastaLLL Which version are you using?

PS: I'm using 5.1.3, and I have reinstalled the system and fixed some dependency issues, but I still can't get it working.

  • Attached are logs for the console commands I have entered.

jetpack5.13-yolo.txt (294.6 KB)

EDIT: Maybe it was 5.1.4; I don't know exactly which version it is. I'll try the latest 5.1.4 again. But the log above is indeed from a freshly installed 5.1.3 or 5.1.4 system.

OK, I got 5.1.4 working.

$ pip install numpy==1.23.5