ComfyUI error float8_e4m3fn with PyTorch

Hello, I have a Jetson AGX Xavier.
I installed torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
and built torchvision from source:
git clone --branch v0.16.1 https://github.com/pytorch/vision torchvision
python3 setup.py install --user

But when I run a ComfyUI workflow (Flux),
I always get this error: AttributeError: module 'torch' has no attribute 'float8_e4m3fn'

I've tried several things (even installing CUDA 11.8), but I can't find how to fix it; the error stays the same.
Any idea how to fix it?
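
A quick way to check which torch build is active and whether it has the fp8 dtype and CUDA (run it in the same Python environment that ComfyUI uses):

python3 -c "import torch; print(torch.__version__, torch.version.cuda)"
python3 -c "import torch; print(hasattr(torch, 'float8_e4m3fn'), torch.cuda.is_available())"

If the hasattr check prints False, the installed wheel simply doesn't include the fp8 dtypes, which is what ComfyUI is complaining about.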

Hi,

Based on the link below, this seems to be a compatibility issue with the transformers package.
Could you try the suggestion there and see if it helps?

Thanks.

Thanks, I tried
transformers-4.43.2
transformers-4.42.4
but I still get: module 'torch' has no attribute 'float8_e4m3fn'.
Any idea how to fix it?

Can we use a newer version of PyTorch?
It looks like 2.1 is not well supported.

Hi,

The latest PyTorch prebuilt we provide for Jetson is v2.1, but you can build it from source to get a newer version.

The topic below contains the build instructions for your reference:
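
In rough outline, a CUDA-enabled source build on Jetson typically looks like the following (the exact variables and version number here are assumptions; defer to the linked topic for the full steps):

export USE_CUDA=1
export USE_CUDNN=1
export USE_NCCL=0                     # not needed on a single board
export USE_DISTRIBUTED=0
export TORCH_CUDA_ARCH_LIST="7.2"     # AGX Xavier is compute capability 7.2
export PYTORCH_BUILD_VERSION=2.4.0    # example only; match the tag you check out
export PYTORCH_BUILD_NUMBER=1
cd pytorch
pip3 install -r requirements.txt
python3 setup.py bdist_wheel          # the wheel ends up in dist/
pip3 install dist/torch-*.whl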

Thanks.

Thanks. I tried to compile PyTorch 2.4.0 manually, but it still comes out without CUDA support… any idea what's wrong?
Here is the end of the build output; it looks like CUDA is detected, but in Python it's not working.

running build_ext
-- Building with NumPy bindings
-- Detected cuDNN at ,
-- Detected CUDA at /usr/local/cuda
-- Not using MKLDNN
-- Building NCCL library
-- Building with distributed package:
  -- USE_TENSORPIPE=True
  -- USE_GLOO=True
  -- USE_MPI=False
-- Building Executorch
-- Not using ITT
Copying functorch._C from functorch/functorch.so to /home/jan/pytorch/build/lib.linux-aarch64-cpython-38/functorch/_C.cpython-38-aarch64-linux-gnu.so
running install_lib
copying build/lib.linux-aarch64-cpython-38/torch/_C.cpython-38-aarch64-linux-gnu.so -> /usr/lib/python3.8/site-packages/torch
copying build/lib.linux-aarch64-cpython-38/torch/_export/db/examples/user_input_mutation.py -> /usr/lib/python3.8/site-packages/torch/_export/db/examples
copying build/lib.linux-aarch64-cpython-38/functorch/dim/magic_trace.py -> /usr/lib/python3.8/site-packages/functorch/dim
copying build/lib.linux-aarch64-cpython-38/torchgen/packaged/ATen/native/native_functions.yaml -> /usr/lib/python3.8/site-packages/torchgen/packaged/ATen/native
copying build/lib.linux-aarch64-cpython-38/torchgen/packaged/ATen/native/tags.yaml -> /usr/lib/python3.8/site-packages/torchgen/packaged/ATen/native
running install_egg_info
running egg_info
writing torch.egg-info/PKG-INFO
writing dependency_links to torch.egg-info/dependency_links.txt
writing entry points to torch.egg-info/entry_points.txt
writing requirements to torch.egg-info/requires.txt
writing top-level names to torch.egg-info/top_level.txt
reading manifest file 'torch.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*.o' found anywhere in distribution
warning: no previously-included files matching '*.dylib' found anywhere in distribution
warning: no previously-included files matching '*.swp' found anywhere in distribution
adding license file 'LICENSE'
adding license file 'NOTICE'
writing manifest file 'torch.egg-info/SOURCES.txt'
removing '/usr/lib/python3.8/site-packages/torch-2.2.0a0+git8ac9b20-py3.8.egg-info' (and everything under it)
Copying torch.egg-info to /usr/lib/python3.8/site-packages/torch-2.2.0a0+git8ac9b20-py3.8.egg-info
running install_scripts
Installing convert-caffe2-to-onnx script to /usr/bin
Installing convert-onnx-to-caffe2 script to /usr/bin
Installing torchrun script to /usr/bin
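
For reference, a quick way to confirm from Python whether the freshly built wheel actually has CUDA support:

python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python3 -c "import torch; print(torch.cuda.get_device_name(0))"    # only works if CUDA is available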

OK, found a solution:
with this PyTorch 2.2 wheel from the jp5/cu114 index the error is gone
(it's compiled with CUDA 11.4).
