Issue with launching web server: Tutorial - text-generation-webui

I need troubleshooting assistance, please, with Tutorial - text-generation-webui.

Steps performed:

  1. Performed RAM optimization:
      - Disabled the desktop GUI temporarily
      - Set up a swap file on the SSD (enabled at boot)
  2. Ran jetson-containers as follows:
$ jetson-containers run $(autotag text-generation-webui)
Namespace(packages=['text-generation-webui'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False)
-- L4T_VERSION=36.4.3  JETPACK_VERSION=6.2  CUDA_VERSION=12.6
-- Finding compatible container image for ['text-generation-webui']
text-generation-webui:r36.4.3-transformers
V4L2_DEVICES:  --device /dev/video0  --device /dev/video1 
+ docker run --runtime nvidia -it --rm --network host --shm-size=8g --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /ssd/projects/jetson-containers/data:/data -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro --device /dev/snd -e PULSE_SERVER=unix:/run/user/1000/pulse/native -v /run/user/1000/pulse:/run/user/1000/pulse --device /dev/bus/usb --device /dev/video0 --device /dev/video1 --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-7 --device /dev/i2c-9 --name jetson_container_20250318_091132 text-generation-webui:r36.4.3-transformers
root@jetson2:/#
  3. Ran the web-server launch as follows:
# cd /opt/text-generation-webui && python3 server.py \
  --model-dir=/data/models/text-generation-webui \
  --chat \
  --listen
bash: cd: /opt/text-generation-webui: No such file or directory
root@jetson2:/#

Since steps 4 & 5 were run by cut-and-pasting from the referenced tutorial, I assume I didn’t mess up any keystrokes.

The /opt folder has only one sub-folder, nvidia, which itself contains only nsight-compute.
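In case the webui landed somewhere other than /opt inside the container, one option is to search the container's filesystem for the entry point before changing anything in step 5. A sketch (the `find_webui` helper name is mine, not from the tutorial; run it inside the container):

```shell
# find_webui: search a filesystem root for the webui entry point, server.py
# (skips /proc to avoid pseudo-filesystem noise)
find_webui() {
  find "${1:-/}" -name server.py -not -path '*/proc/*' 2>/dev/null
}

# inside the container you would run:
#   find_webui /
```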

I can get text-generation-webui up and running by following the documentation at the official site, but Chat is totally unresponsive even with the smaller models.

What should I change in step 5 to fix the issue with the web server using the Jetson tutorial instructions? Thanks.

Regards.

Hi,

It looks like the image is not fully built.
The image name should be text-generation-webui:r36.4.3.
text-generation-webui:r36.4.3-transformers is one of the dependencies for building the text-generation-webui image.
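One way to check which text-generation-webui tags actually exist locally is to filter the `docker images` listing. Sketched below against a sample listing so the filter itself is visible; on the device you would feed it `docker images --format '{{.Repository}}:{{.Tag}}'` instead (the sample tags are illustrative):

```shell
# sample "repo:tag" listing, standing in for:
#   docker images --format '{{.Repository}}:{{.Tag}}'
images='text-generation-webui:r36.4.3-transformers
text-generation-webui:r36.4.3
ubuntu:22.04'

# keep only the webui entries; the tag WITHOUT the -transformers suffix
# is the fully built image that jetson-containers should launch
printf '%s\n' "$images" | grep 'text-generation-webui'
```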

Could you run the command below to build the container first?

$ jetson-containers build $(autotag text-generation-webui)

Thanks.

Here is the output from the command line you suggested:

$ jetson-containers build $(autotag text-generation-webui)
Namespace(packages=['text-generation-webui'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False)
-- L4T_VERSION=36.4.3  JETPACK_VERSION=6.2  CUDA_VERSION=12.6
-- Finding compatible container image for ['text-generation-webui']
text-generation-webui:r36.4.3-transformers

Namespace(packages=['text-generation-webui:r36.4.3-transformers'], name='', base='', multiple=False, build_flags='', build_args='', use_proxy=False, package_dirs=[''], list_packages=False, show_packages=False, skip_packages=[''], skip_errors=False, skip_tests=[''], test_only=[''], simulate=False, push='', logs='', verbose=False, no_github_api=False)

-- L4T_VERSION=36.4.3 JETPACK_VERSION=6.2 CUDA_VERSION=12.6 PYTHON_VERSION=3.10 LSB_RELEASE=22.04 (jammy)
-- jetson-containers text-generation-webui:r36.4.3-transformers

-- Copying /etc/nv_tegra_release to /ssd/projects/jetson-containers/packages/llm/ollama/nv_tegra_release
Failed to fetch version information. Status code: 404
Failed to fetch version information. Status code: 404
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/ssd/projects/jetson-containers/jetson_containers/build.py", line 120, in <module>
    build_container(args.name, args.packages, args.base, args.build_flags, args.build_args, args.simulate, args.skip_tests, args.test_only, args.push, args.no_github_api, args.skip_packages)
  File "/ssd/projects/jetson-containers/jetson_containers/container.py", line 70, in build_container
    packages = resolve_dependencies(packages, skip_packages=skip_packages)
  File "/ssd/projects/jetson-containers/jetson_containers/packages.py", line 346, in resolve_dependencies
    packages, changed = add_depends(packages)
  File "/ssd/projects/jetson-containers/jetson_containers/packages.py", line 317, in add_depends
    for dependency in find_package(package).get('depends', []):
  File "/ssd/projects/jetson-containers/jetson_containers/packages.py", line 201, in find_package
    raise KeyError(f"couldn't find package:  {package}")
KeyError: "couldn't find package:  text-generation-webui:r36.4.3-transformers"
$

There was an error displayed in the terminal window as shown above.

I cloned jetson-containers on Mar 10 and built it locally. A few minutes ago I pulled from origin and ran the install.sh script, then ran jetson-containers with the build option per your suggestion. The error remained. Awaiting your kind suggestions on what to do next. Thanks.

Regards.

Hi,

It looks like jetson-containers is still picking up the text-generation-webui:r36.4.3-transformers image.
Could you try removing the autotag to see if it helps?

$ jetson-containers build text-generation-webui

If the tool still doesn’t start building the container, could you remove all the text-generation-webui:* images and try again?
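For the removal step, a sketch of one way to do it (the `select_webui_refs` helper is mine; do the dry run first and check the list before piping it into `docker rmi`):

```shell
# select_webui_refs: filter "repo:tag" lines down to text-generation-webui ones
select_webui_refs() {
  grep 'text-generation-webui' || true
}

# dry run - list what would be removed:
#   docker images --format '{{.Repository}}:{{.Tag}}' | select_webui_refs
# then remove for real:
#   docker images --format '{{.Repository}}:{{.Tag}}' | select_webui_refs | xargs -r docker rmi
```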

Thanks.

Thanks again.

The jetson-containers command without the autotag didn’t work:

KeyError: "couldn't find package:  llama_cpp"

So I tried your second suggestion with:

docker images | grep "text-generation-webui*" | awk '{print $3}' | xargs docker rmi

A whole lot of the images were deleted, but I couldn’t delete a few more, owing to the following message:

Error response from daemon: conflict: unable to delete b298da4822a7 (cannot be forced) - image has dependent child images

So I used the ID b298da4822a7 as an example (there were more than a dozen similar entries, possibly owing to my attempts with the original jetson-containers command using autotag) with the following command:

$ docker inspect --format='{{.Id}} {{.Parent}}' $(docker images -a -q) | grep b298da4822a7
sha256:f2ed955fb610991cfa529e5bb627094863d9417e3f70952f804439a65f1ed346 sha256:b298da4822a733aa94757b4a4fa41a9f84f0c28f084a2c9299b8d4170931fcee
sha256:b298da4822a733aa94757b4a4fa41a9f84f0c28f084a2c9299b8d4170931fcee sha256:1616b473caea1d8dcced3a1c3c4a520a055692ced8819bf62bd7bf5b59c877ba

How do I determine the image IDs of the dependent child images from the inspect output above? (Sorry for a newbie question, but you can understand my level!) Thanks.
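For reference, each record in that `docker inspect` output is a "child-ID parent-ID" pair, so the children of b298da4822a7 are the first fields of the records whose second field contains it. A text-only sketch over the pasted output:

```shell
# two records of "childId parentId", as printed by
#   docker inspect --format='{{.Id}} {{.Parent}}' $(docker images -a -q)
records='sha256:f2ed955fb610991cfa529e5bb627094863d9417e3f70952f804439a65f1ed346 sha256:b298da4822a733aa94757b4a4fa41a9f84f0c28f084a2c9299b8d4170931fcee
sha256:b298da4822a733aa94757b4a4fa41a9f84f0c28f084a2c9299b8d4170931fcee sha256:1616b473caea1d8dcced3a1c3c4a520a055692ced8819bf62bd7bf5b59c877ba'

# print the child IDs whose parent field contains the short ID b298da4822a7
printf '%s\n' "$records" | awk '$2 ~ /b298da4822a7/ {print $1}'
```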

Regards.

Hello @AastaLLL,

I did a brute-force removal of all the Docker containers and then ran the jetson-containers command with the build parameter and without autotag.

Two error messages (both previously encountered) cropped up:

 -- Copying /etc/nv_tegra_release to /ssd/projects/jetson-containers/packages/llm/ollama/nv_tegra_release
Failed to fetch version information. Status code: 404

and

  File "/ssd/projects/jetson-containers/jetson_containers/packages.py", line 210, in find_package
    raise KeyError(f"couldn't find package:  {package}")
KeyError: "couldn't find package:  llama_cpp"
-- Error:  return code 1

Perhaps I am running an incorrect version of the script? Thanks.

Regards.