Which nano_llm:ros docker for JetPack 6.1 (L4T 36.4.0)?

I have tried to implement ros2_jetbot_tools. I used the docker image dustynv/nanollm:humble-36.3.0 (no 36.4.0 image is available) together with the jetbot_nano_llm docker.

When I launch ros2 run jetbot_tools llm_vision_agent,
I get this error:
the provided PTX was compiled with an unsupported toolchain.
Fatal Python error: Aborted
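This error typically means the CUDA code inside the container was built with a newer toolkit than the host driver supports. A minimal sketch of the version check that matters here (the function and the compatibility rule are my assumption for illustration, not NVIDIA's official support matrix):

```python
def ptx_compatible(driver_cuda: str, container_cuda: str) -> bool:
    """PTX compiled with a toolkit newer than what the host driver
    supports aborts with 'unsupported toolchain'. Assumption: a
    container toolkit version <= the driver's CUDA version is safe."""
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(container_cuda) <= to_tuple(driver_cuda)

# e.g. a container built against CUDA 12.2 should load on a 12.6 driver,
# but one built against 12.8 would not
print(ptx_compatible("12.6", "12.2"))  # True
print(ptx_compatible("12.6", "12.8"))  # False
```

This is why matching the container tag to the host L4T release (36.4.0 vs 36.3.0) matters on Jetson.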


Hi @flogarcia999

Can you provide detailed information about the following?

  • your ros2_jetbot_tools
  • jetbot_tools
  • llm_vision_agent

Thanks for your reply

I have cloned this repository: GitHub - Jen-Hung-Ho/ros2_jetbot_voice (Jetbot Voice to Action Tools, a set of ROS2 nodes that utilize the Automatic Speech Recognition (ASR) deep learning interface library for NVIDIA Jetson), and I have followed the instructions to build the jetbot_riva_voice:latest docker with the .init.sh program.

In the docker I execute:

ros2 run jetbot_riva_voice jetbot_ASR --ros-args --params-file /ros2_ws/src/param/jetbot_voice_params.yaml
it is OK


ros2 run jetbot_riva_voice jetbot_TTS --ros-args --params-file /ros2_ws/src/param/jetbot_voice_params.yaml -p index:=11
I get the error 'NoneType' object has no attribute 'items' in this part of the Python program:

responses = self.tts_service.synthesize_online(
    msg_str, None, "en-US", sample_rate_hz=self.sample_rate,
    audio_prompt_file=None, quality=20
)
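A "'NoneType' object has no attribute 'items'" error usually means an argument the library iterates as a dict arrived as None. A minimal reproduction of that failure mode (the function name is hypothetical, not the actual Riva client internals):

```python
def merge_custom_dictionary(custom_dictionary):
    # Library-side code like this expects a dict; passing None fails
    # with exactly the AttributeError seen in the traceback above.
    return {k: v for k, v in custom_dictionary.items()}

print(merge_custom_dictionary({"read": "red"}))  # fine: {'read': 'red'}

try:
    merge_custom_dictionary(None)
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'items'
```

So one of the None arguments passed to synthesize_online is likely reaching a spot where the client expects a dict; checking which optional parameter changed between Riva client versions would be the next step.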

Without using the docker client and ROS, Riva ASR and TTS work perfectly.

I have cloned ros2_jetbot_tools from the repository GitHub - Jen-Hung-Ho/ros2_jetbot_tools (Jetbot tools, a set of ROS2 nodes that utilize the Jetson inference DNN vision library for NVIDIA Jetson)
and have built the jetbot_nano_llm:latest docker with the init.sh program.

When I execute in the docker
ros2 launch jetbot_tools detect_copilot.launch.py param_file:=./jetbot_tools/param/detect_toys_copilot_params.yaml it is OK

ros2 run jetbot_tools llm_chat_agent it is OK

ros2 run jetbot_tools llm_vision_agent not OK.
If I use the dustynv/nanollm docker, the program works OK with the command:

```
jetson-containers run $(autotag nano_llm) \
  python3 -m nano_llm.agents.video_query --api=mlc \
    --model Efficient-Large-Model/VILA1.5-3b \
    --max-context-len 256 \
    --max-new-tokens 32 \
    --video-input /dev/video0 \
    --video-output webrtc://@:8554/output
```

In the docker
ros2 launch jetbot_tools jetbot_tools_voice.launch.py param_file:=./jetbot_tools/param/ it is OK

When I say "go forward" or "turn left", the robot goes to the left side, but it does not talk because TTS does not work, and it does not answer questions such as "What do you see in the image?" because nanollm vision is down.