NVIDIA VLM Workflow Issues

We’re having an issue while running the VLM workflow

We’re able to add a camera and generate alerts on the mobile app, but when adding a second camera we get an error saying ‘Max WebRTC connections reached’

The ROI feature also fails, saying ‘Failed to set ROI’

Are there any settings we could try for this?

Can you update "max_webrtc_out_connections": 1 in /opt/nvidia/jetson-1.1.0/services/vst/config/vst_config.json to the number you prefer?
Here is the guide on JPS VLM. Why do you need to set an ROI for the VLM workflow?
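For example, something like this (a hedged sketch: 4 is just an example value, so set it to the number of simultaneous streams you need, and the VST container will likely need a restart afterwards):

    # Raise the WebRTC out-connection limit in place; back up the file first.
    # "4" is an example value, not a recommended setting.
    sudo sed -i 's/"max_webrtc_out_connections": 1/"max_webrtc_out_connections": 4/' \
        /opt/nvidia/jetson-1.1.0/services/vst/config/vst_config.json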

Thanks, we are updating "max_webrtc_out_connections" in /opt/nvidia/jetson-1.1.0/services/vst/config/vst_config.json

But on the web page 0.0.0.0:30080/vst we get a Bad Gateway error

Any idea why? The VST service on the Jetson Orin is running without issues


Please ignore the above “Bad Gateway” error

We want to set up AI NVR with Tripwire; can you point us to the steps for this, please?

We set "analytic_server_address": "http://21.6.45.236:30080/emdx",

which is the Jetson IP and port… but the tripwire still fails
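For reference, this is roughly how we can sanity-check it (a hedged sketch: the grep just locates the setting, since the file holding analytic_server_address may differ per install, and the curl is only a basic reachability probe, not a specific emdx API call):

    # Locate the setting and confirm the emdx endpoint answers at all.
    grep -rn "analytic_server_address" /opt/nvidia/jetson-1.1.0/services/
    curl -sI http://21.6.45.236:30080/emdx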

Here is the setup guide for AI NVR: AI NVR — Metropolis on Jetson documentation
Tripwire and ROI work fine without any changes on the JPS release on my side.

Thanks. We will have a look at this

One more issue we are facing: after adding 4 cameras in VST, the mobile app shows an error saying “Invalid VLM stream ID” whenever we use the chat option to interact with the VLM

And in the Docker logs for the VLM, we see an error saying “max retries exceeded for URL: /api/v1/chat-completion”

Any idea why this happens ?

Can you share the log? You can refer to the VLM source code here: jetson-platform-services/inference/vlm at main · NVIDIA-AI-IOT/jetson-platform-services · GitHub
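For example, something like this should capture it (assuming the VLM container is named jps_vlm; adjust to whatever sudo docker ps shows):

    # Dump the last 200 log lines from the VLM container and filter for errors.
    sudo docker logs --tail 200 jps_vlm 2>&1 | grep -i error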

This is the error we get when using the chat feature on the mobile app

Hi @kesong, do you know why this could be happening?

Can the VLM work if you add only one camera?

It fails with 1 stream as well

Error in the jps_vlm container: HTTPConnectionPool(host='0.0.0.0', port=5015): Max retries exceeded with url: /v1/chat/completions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff8a5c7700>: Failed to establish a new connection: [Errno 111] Connection refused'))

Can you chat with curl instead of the mobile app? Please refer here for chatting with curl: Visual Language Models (VLM) with Jetson Platform Services — Metropolis on Jetson documentation
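A rough sketch of the shape such a request takes (the host, port, and path below are copied from your error message; the JSON body is a guess at an OpenAI-style payload, so check the linked page for the exact endpoint and schema):

    # Probe the endpoint from the error directly; -v shows whether the TCP
    # connection is even accepted. The payload is an assumed OpenAI-style body.
    curl -v -X POST http://0.0.0.0:5015/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{"messages": [{"role": "user", "content": "Describe the scene."}]}'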

Yes, I found the problem.

It works when I stop the AI-NVR containers using sudo docker compose -f compose_nano.yaml down --remove-orphans

But the VLM gives an error saying: HTTPConnectionPool(host='0.0.0.0', port=5015): Max retries exceeded with url: /v1/chat/completions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff8a5c7700>: Failed to establish a new connection: [Errno 111] Connection refused'))

even with curl, when the AI-NVR containers are running

Do you think there is some port-forwarding or port-blocking issue?
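For example, would this be a reasonable way to check (the port number comes from the error above)?

    # Show whether anything is listening on 5015, and which process owns it.
    sudo ss -tlnp | grep 5015
    # Show each container's published ports to spot a conflict.
    sudo docker ps --format '{{.Names}}\t{{.Ports}}'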

Also, the Analytics button on the mobile app is not clickable, but on the web UI it is working

Any idea why? Screenshots below


@kesong can you please help with the above screenshots?

It will be a great help to us. Thank you for all the support

Here is the guide for the mobile app. Can you refer to the Analytics section here: AI-NVR Mobile Application User Guide — Metropolis on Jetson documentation?

Yes, we saw this, but we are not able to click the Analytics button. We can see all the cameras, but the Analytics button is greyed out.

Which workflow are you running? Analytics is for AI NVR, since AI NVR outputs analytics such as tripwire/ROI counting. It seems you are running VLM, so only the Chat and Alert buttons are green. You can refer to the images here: AI-NVR Mobile Application User Guide — Metropolis on Jetson documentation

Thank you @kesong

How do I enable AI NVR on the mobile app? On the web UI it is working; I am able to add a tripwire.

And the AI NVR containers are running on the Jetson device as well. Should I stop the VLM container?
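For example, would switching like this be the right approach (a hedged sketch: the VLM compose file name is a guess, since only compose_nano.yaml appears above, and each command runs from its own workflow directory)?

    # Stop the VLM stack (file name is an assumption; check your VLM directory).
    sudo docker compose -f compose.yaml down --remove-orphans
    # Start the AI-NVR stack with the compose file used earlier in this thread.
    sudo docker compose -f compose_nano.yaml up -d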