Error: 'No displayable error message found for error code 3237093926.'

Hi there,

I’m working on streaming the USD Composer app using the latest kit-app-template (v.106) and the web-viewer-sample code.

The client is able to connect to the initial page, but when trying to move to the next one, it seems to get stuck in a retry loop.


Here is the error message:

Is there something I might have missed or a setting I should check to fix this? Any advice would be appreciated!

Thank you

It looks like you are having a security permissions problem connecting to one of our JavaScript files. Are you on a corporate network with corporate security? The next page has to establish a live connection to that code. Maybe an internal firewall, or network or browser security policy, is preventing you from running the required script.

I’m having the same problem.

I’m testing by deploying on an Azure VM. I’ve opened the firewall there, as well as the one on my computer, and I still get the same error.

I also deployed a pod in k8s along with a Service to set up a load balancer, and I get the same error.

Have a look at the Network tab in Chrome’s developer console while you navigate this menu and see if and where the HTTP calls fail.

Here is some more information.

Here are my two pods (web and kit-app):

This is my Kubernetes Service; it brings up a load balancer:

This is the load balancer in Azure:

Chrome error:

Dev tools:

Dev tools console:

Thanks!

Please check Chrome’s developer console and the Network tab there, not the k8s service.

Sorry, I have already updated the message.

A few questions:

  1. Did you deploy OVAS? If not, how do you run your k8s pods with Kit containers on demand?
  2. If you deployed OVAS, did you go past “Include Web UI dialog” and select the application, version and profile?
  3. Do you have communication from the machine you open the Web dialog from to http://<the URL/IP you masked>:49100/sign_in as in can you curl it?
  1. Containers on demand
  2. No
  3. I don’t get any dialog; it just returns what you see (empty)
    curl
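A quick way to sanity-check question 3 is to probe the sign_in endpoint directly from the client machine. The snippet below only builds the URL the streaming handshake uses, based on the shape already posted in this thread; the host IP is a documentation placeholder, and you would then curl or wscat the resulting URL:

```python
from urllib.parse import urlencode

def sign_in_url(host: str, peer_id: str, port: int = 49100, version: int = 2) -> str:
    # Build the signaling endpoint URL used by the streaming sample's
    # sign_in handshake (scheme, port and path as shown in this thread).
    query = urlencode({"peer_id": peer_id, "version": version})
    return f"http://{host}:{port}/sign_in?{query}"

# 203.0.113.10 is a placeholder IP, not a value from this deployment.
print(sign_in_url("203.0.113.10", "peer-272772"))
```

If a plain GET to that URL times out from the client machine, the web viewer will never get past its retry loop, no matter how healthy the pods look.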

I created it from the project, followed all the steps, and then containerized it with the containerize option.

I created the k8s manifests manually in both.

The request works (from my machine to the load balancer IP). Note the quotes, so the shell does not treat the & as a background operator:
wscat -c "ws://ip-server:49100/sign_in?peer_id=peer-272772&version=2"

note:

Ahhh… wait a minute here. You are trying to stream from an Azure VM and then view that stream on your local desktop, through the public internet? No, that will not work. You cannot cross domains like that, ports open or not. The streaming sample is NOT designed to cross domain policies. Nowhere in your previous posts did you mention that you are trying to stream through the open public internet.

The streaming “sample” is designed as a “local only” sample: your laptop to your desktop, your desktop to your laptop, your enterprise workstation down to the boardroom monitor, a central RTX super cluster to a lab workstation. Anywhere you are in a closed-loop network. Even if you run it on K8s, you are only streaming to a machine WITHIN the closed network; for viewing remotely, across the open internet, you would need standard remote viewing software that IS designed to cross public domains, like Remote Desktop.

In your case, you are using an Azure VM, correct? So you would simply use “Azure Monitor”, which is free with Microsoft Azure, to view your “local” stream running in the VM, coming into the VM web browser as you show it. That is why it works with Remote Desktop. It is working, but it is not designed to “get out” of the VM loop.

If you want a full remote streaming solution, you will need to deploy an app to our GDN (Game Developer Network) for proper wide remote viewing from a central RTX server.


Thank you for your response.

Currently, I am behind a firewall but using wildcard permissions. In my case, the entire environment is deployed locally, without VMs or cloud services involved. I have both the web-service-kit and the kit-app service running within the same local network, and I am attempting to use only an HTTP connection to allow a public client to access it.

As mentioned earlier, the sample code is intended for use within a local network. Does this mean the NodeGroup (GPU) on the right side of the diagram needs to be in the same local network? Please feel free to correct me if I’m mistaken.

Thanks again for your help

Could you guide me on how I could deploy the product in a Kubernetes (AKS) environment with public access?

Could I achieve this with OVAS (Overview — Kit App Streaming API)? And where would I host the web application for the stream? In the diagram (https://docs.omniverse.nvidia.com/ovas/latest/_images/ovas_arch_azure.png) I see where it would be deployed.

@andresfelipecatanog

Azure Monitor - Modern Observability Tools | Microsoft Azure

@ziro I would assume that ALL network components need to be in the “local” group yes. Especially the GPU.

If we set it to “stream” instead of “local” in the webview-sample code, would public clients still be unable to access it?

We have deployed the Kit app streaming on Azure Kubernetes Service (AKS) using OVAS .

We’ve tried everything, but we are encountering the same error as @andresfelipecatanog.

OVAS installation on AKS in Azure was completed.

The installation and components were successfully deployed.

The available applications are queried from the endpoint, and the list of applications is displayed.

However, when starting the stream, it remains in a loop waiting for the session to start:

Waiting for session dc75697d-059d-4473-b853-a3b31f7eaa5f to be ready… Last checked at
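For context, the client-side wait can be reproduced (and bounded) with a simple poll-with-timeout loop. This is a hedged sketch of that behavior, not the actual sample code; check_ready is a stand-in for the real session-status request:

```python
import time

def wait_for_session(check_ready, timeout_s: float = 120.0, interval_s: float = 2.0) -> bool:
    """Poll check_ready() until it returns True or the timeout expires.

    check_ready is a placeholder for the real status call (e.g. a GET
    on the session endpoint that reports whether the stream is ready).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check_ready():
            return True
        time.sleep(interval_s)
    return False  # session never became ready within the timeout

# Stubbed demo: the session becomes "ready" on the third poll.
state = {"polls": 0}
def fake_check():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_for_session(fake_check, timeout_s=5.0, interval_s=0.01))
```

A loop like this makes the failure mode explicit: if the status call never flips to ready, the client gives up after the timeout instead of repeating the same message indefinitely.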

Below I show some evidence.

Here we can see how invoking the endpoint shows the deployed application:

The sample application is run from a local environment for streaming:

API and streaming endpoints are configured:

The application is selected (usd-viewer):

The application is selected

The profile is selected

Then we start the stream. At this point the application is stuck in a waiting loop; the last message repeats indefinitely.

Here we see resource consumption, and no errors are detected:

However, in the requests, even though a 202 is returned, the status comes back as false.

If I check the streams, I can see that they have been created.

The pods look healthy:

Hi there,
is the Kit Application Pod getting scheduled on Kubernetes and running?
I do not see it in your screenshot.

You can check with “kubectl get helmrelease” and “kubectl get pod” if the actual Kit Application Pod is actually starting up and running.

Hi Jathavan,
thanks for your feedback, I work with Andres on that one…
We had a problem with the app pod due to a misconfiguration of the nvcr repo; the pod is now OK.

We are now working on the Azure load balancing configuration. We saw there is a specific component for AWS (the NLB manager) to re-bind the balancer. Is there something equivalent, or do you recommend something, for Azure?

Thanks again !
Yoann

There is no equivalent for Azure or a recommendation.
What are you looking for exactly? The stack should work with Azure and without the NLB Manager (it was written for AWS only).

We see that when we create the stream, it creates the Deployment and the Service in Kubernetes; however, the Service is of type NodePort.
Is it correct that it is created as NodePort?

The diagram shows that it should be a LoadBalancer. How can I make sure a “LoadBalancer” type is configured on the Service?
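In case it helps, this is roughly what a hand-written Service of type LoadBalancer would look like for ports like the ones used here (49100/TCP for signaling, 1024/UDP for media). All names and labels below are illustrative assumptions, not the manifest OVAS actually generates:

```yaml
# Illustrative sketch only: names, labels and ports are assumptions
# based on this thread, not the OVAS-generated manifest.
apiVersion: v1
kind: Service
metadata:
  name: kit-stream-lb
spec:
  type: LoadBalancer      # instead of the generated NodePort
  selector:
    app: kit-app          # must match the Kit application pod's labels
  ports:
    - name: signaling
      protocol: TCP
      port: 49100
      targetPort: 49100
    - name: media
      protocol: UDP
      port: 1024
      targetPort: 1024
```

Note that mixing TCP and UDP ports in a single LoadBalancer Service is only supported on recent Kubernetes versions and not by every cloud provider; if it is rejected, split the ports into two Services.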

Here I show the response with the stream data: the host it assigns me, the external ports (49100 TCP, 1024 UDP and 8080 TCP), and additionally the ports that are opened internally on the worker node.

Therefore we have created an additional load balancer, which performs the “manual” mapping of the ports opened on the worker node.

I hope you can help us, since we have not been able to get the stream working.