txt2kg Playbook: ./start.sh --complete does not start Additional Services (Complete Stack)

The following services will not start:

Additional Services (Complete Stack):
• Local Pinecone: http://localhost:5081
• Sentence Transformers: http://localhost:8000
• vLLM API: http://localhost:8001

remzi@sparkai:~/dgx-spark-playbooks/nvidia/txt2kg/assets$ ./start.sh --complete
Checking for GPU support…
✓ NVIDIA GPU detected
GPU: NVIDIA GB10, [N/A]
Using Docker Compose V2
Using complete stack (Ollama, vLLM, Pinecone, Sentence Transformers)…

Starting services…
Running: docker compose -f /home/remzi/dgx-spark-playbooks/nvidia/txt2kg/assets/deploy/compose/docker-compose.complete.yml up -d
[+] Running 7/7
✔ Container ollama-compose Running 0.0s
✔ Container vllm-service Started 0.0s
✔ Container compose-arangodb-1 Started 0.1s
✔ Container entity-embeddings Started 0.0s
✔ Container compose-sentence-transformers-1 Started 0.1s
✔ Container compose-arangodb-init-1 Started 0.1s
✔ Container compose-app-1 Started 0.2s

==========================================

txt2kg is now running!

Core Services:
• Web UI: http://localhost:3001
• ArangoDB: http://localhost:8529
• Ollama API: http://localhost:11434

Additional Services (Complete Stack):
• Local Pinecone: http://localhost:5081
• Sentence Transformers: http://localhost:8000
• vLLM API: http://localhost:8001

Next steps:

  1. Pull an Ollama model (if not already done):
    docker exec ollama-compose ollama pull llama3.1:8b

  2. Open http://localhost:3001 in your browser

  3. Upload documents and start building your knowledge graph!

Other options:
• Run frontend in dev mode: ./start.sh --dev-frontend
• Use complete stack: ./start.sh --complete
• View logs: docker compose logs -f


If you run docker ps in the command line can you see the containers running?
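For reference, a quick way to surface only crash-looping containers (a sketch; it assumes the Docker CLI is on PATH and exits quietly if it is not):

```shell
# Exit quietly if Docker is not installed (keeps the snippet portable).
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }

# Show only containers whose status is "Restarting", with name and status.
docker ps --filter "status=restarting" --format 'table {{.Names}}\t{{.Status}}'
```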


remzi@sparkai:~/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/txt2kg/assets$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d513d9ffa0ea compose-app "docker-entrypoint.s…" 6 hours ago Up 3 hours 0.0.0.0:3001->3000/tcp, [::]:3001->3000/tcp compose-app-1
e398d0a7150b compose-vllm "/opt/nvidia/nvidia_…" 6 hours ago Restarting (1) 1 second ago vllm-service
37567d7e61a1 compose-sentence-transformers "gunicorn --bind 0.0…" 6 hours ago Up 3 hours 0.0.0.0:8000->80/tcp, [::]:8000->80/tcp compose-sentence-transformers-1
209d2704645c pinecone-index "/engine" 6 hours ago Restarting (255) 8 seconds ago entity-embeddings
47af05ab9967 ollama-custom:latest "/entrypoint.sh" 5 days ago Up 3 hours (unhealthy) 0.0.0.0:11434->11434/tcp, [::]:11434->11434/tcp ollama-compose
8b17c523f2c4 arangodb:latest "/entrypoint.sh aran…" 5 days ago Up 3 hours 0.0.0.0:8529->8529/tcp, [::]:8529->8529/tcp compose-arangodb-1

That is all I have running.


The complete stack is currently not supported, but it is planned for a future release.


Thank you

I had better stop them; they are continuously restarting:

209d2704645c pinecone-index "/engine" 6 hours ago Restarting (255) 8 seconds ago

e398d0a7150b compose-vllm "/opt/nvidia/nvidia_…" 6 hours ago Restarting (1) 1 second ago
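To stop the two crash-looping containers cleanly, something like the following should work (a sketch; the container names come from the docker ps output above, and the snippet skips itself if Docker is absent):

```shell
# Exit quietly if Docker is not installed.
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }

# Stop the crash-looping containers by the names shown in `docker ps`.
docker stop vllm-service entity-embeddings || true

# Clear their restart policy so they stay down until explicitly restarted.
docker update --restart=no vllm-service entity-embeddings || true
```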


What about using Pinecone DB on AWS? Thank you.

Yes that should work

Hi @cirit, did you get Pinecone to play nice? I just ran the startup; most of it comes up, but Pinecone keeps rebooting. The most I have found so far is "exec /engine: exec format error", so it appears the Pinecone container is not built for ARM. It would be weird to have it in the DGX/ARM playbook, right?
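"exec format error" almost always means an architecture mismatch: the image's /engine binary was built for a different CPU architecture than the host (aarch64 on the GB10). One way to confirm (a sketch; the IMAGE value is a placeholder to substitute with the image reference shown by docker ps):

```shell
# Host CPU architecture: expect aarch64 on DGX Spark (GB10).
uname -m

# Exit quietly if Docker is not installed.
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }

# Placeholder: substitute the Pinecone image name from `docker ps`.
IMAGE="pinecone-index"

# Architecture the image was built for; "linux/amd64" here would
# explain the "exec format error" on an ARM host.
docker image inspect --format '{{.Os}}/{{.Architecture}}' "$IMAGE" || true
```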

Please post a reply if you have a fix for this.

No luck; even though all the values are correct, it still fails.

OK, thanks. Never mind; there just is no Pinecone Docker image for ARM, so no progress.

The txt2kg playbook is not compatible with DGX Spark; it is a shame it is in the DGX playbooks repo. Maybe it can work with another vector database; I will look into it.
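As a stopgap, some vector databases do publish multi-arch images. Qdrant, for example, ships linux/arm64 images, so it at least starts natively on DGX Spark (illustrative only; pointing txt2kg at it would still require code changes, and the image tag here is an assumption):

```shell
# Exit quietly if Docker is not installed.
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }

# Qdrant publishes multi-arch images (including linux/arm64), so this
# container runs natively on ARM hosts such as DGX Spark.
docker run -d --name qdrant -p 6333:6333 qdrant/qdrant:latest
```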


Yes, and a week ago NVIDIA put out a YouTube video showing it off. I wonder how they did it?