I successfully ran the Qwen2.5-VL-3B-Instruct model using the steps from Jetson AI Stack – elinux.org.
However, when trying to load Qwen3-VL-2B-Instruct, I encountered the following error:
ValueError: The checkpoint you are trying to load has model type `qwen3_vl`
but Transformers does not recognize this architecture.
This could be because of an issue with the checkpoint,
or because your version of Transformers is out of date.
*** Please note that this reply is generated by LLM automatically ***
The error you’re encountering is not a PyTorch or CUDA compatibility problem: the message itself states that your installed Transformers library does not recognize the `qwen3_vl` model type. Qwen3-VL is a newer architecture than Qwen2.5-VL, and support for it was added to Transformers after the release currently installed in your environment, so the library cannot map the checkpoint’s `model_type` to a model class. Your JetPack 6.2.1 setup on the Jetson AGX Orin 64GB is not the cause.
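For context, the unrecognized-architecture error comes from Transformers’ auto-class dispatch: the checkpoint’s config.json carries a `model_type` string, and the library looks it up in a registry of known architectures. Below is a minimal sketch of that mechanism (not the real Transformers code; the registry contents are illustrative):

```python
# Illustrative sketch of AutoConfig-style dispatch: the checkpoint's
# config.json declares a "model_type", and the library looks it up in a
# registry of architectures it knows how to build.
KNOWN_MODEL_TYPES = {"qwen2_vl", "qwen2_5_vl"}  # an older release: no "qwen3_vl"

def load_config(config_json: dict) -> str:
    model_type = config_json["model_type"]
    if model_type not in KNOWN_MODEL_TYPES:
        # This is the failure path you are hitting with Qwen3-VL.
        raise ValueError(
            f"The checkpoint you are trying to load has model type `{model_type}` "
            "but Transformers does not recognize this architecture."
        )
    return model_type

print(load_config({"model_type": "qwen2_5_vl"}))  # works, as Qwen2.5-VL did for you
# load_config({"model_type": "qwen3_vl"})         # raises the ValueError above
```

Upgrading Transformers simply means installing a release whose registry includes `qwen3_vl`.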
There are a few possible solutions to this issue:
Upgrade Transformers: install a release that includes the `qwen3_vl` architecture, e.g. `pip install --upgrade transformers`. If support has not yet reached a tagged release, you can install from the GitHub main branch with `pip install git+https://github.com/huggingface/transformers.git`. Check the Qwen3-VL model card for the minimum required version.
Use a different model: if upgrading is not an option right now, stay with a model your current environment already supports, such as the Qwen2.5-VL-3B-Instruct setup that is working for you.
Upgrade in an isolated environment: if a system-wide Transformers upgrade conflicts with the Jetson-specific PyTorch wheels, create a Python virtual environment (or use a jetson-containers image that ships a newer Transformers) so the upgrade does not disturb your working Qwen2.5-VL setup.
Use a different runtime: if the model is published in a format that another inference framework supports on JetPack 6.2.1, running it there sidesteps the Transformers version requirement entirely.
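If you go the upgrade route, a quick way to sanity-check the installed version against a required minimum is a plain version comparison. The 4.57.0 threshold below is an assumption for illustration only; verify the actual minimum in the Qwen3-VL model card or the Transformers release notes.

```python
# Sketch: compare an installed Transformers version string against an
# assumed minimum for `qwen3_vl` support (4.57.0 is illustrative only).
ASSUMED_MIN_VERSION = "4.57.0"

def parse(version: str) -> tuple:
    # Keep only the leading numeric dotted parts (e.g. "4.57.1" from "4.57.1.dev0").
    parts = []
    for piece in version.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)

def new_enough(installed: str, required: str = ASSUMED_MIN_VERSION) -> bool:
    return parse(installed) >= parse(required)

print(new_enough("4.49.0"))  # False: predates Qwen3-VL support
print(new_enough("4.57.1"))  # True
```

You can get the installed version to feed into this check with `python -c "import transformers; print(transformers.__version__)"`.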
In comparison, the Jetson AI Stack on elinux.org provides a more flexible and customizable environment for running AI models. However, it may require more technical expertise to set up and configure.
In summary, this is a software-version issue rather than a hardware or JetPack compatibility issue: upgrading Transformers to a release that recognizes the `qwen3_vl` architecture is the most direct fix.