I’m wondering what the stereo_dummy sample application is supposed to look like once it is running. Is there a video of what it should look like? At the moment, all I have on my screen in the Chrome window at localhost:3000 is a sensor window with a wavy brown/red pattern.
Starting to feel left behind here.
I am looking for an Nvidia resource to assist here.
Thanks for your patience.
The “stereo_dummy” sample has two codelets, as shown in stereo_dummy.app.json: CameraGenerator and DepthCameraViewer.
CameraGenerator publishes made-up depth sensor data, while DepthCameraViewer visualizes that data in Sight (the brown/red pattern you saw).
So yes, it seems to be working as expected on your setup. You might want to try simulation or a real sensor next.
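For orientation, the wiring in stereo_dummy.app.json follows the usual Isaac app-graph layout: nodes holding components (the codelets), plus edges connecting publisher channels to receiver channels. A minimal sketch is below — the module, type, and channel names here are illustrative assumptions, so check the shipped stereo_dummy.app.json for the exact values:

```json
{
  "name": "stereo_dummy",
  "modules": ["message_generators", "viewers"],
  "graph": {
    "nodes": [
      {
        "name": "camera",
        "components": [
          { "name": "CameraGenerator", "type": "isaac::message_generators::CameraGenerator" }
        ]
      },
      {
        "name": "viewer",
        "components": [
          { "name": "DepthCameraViewer", "type": "isaac::viewers::DepthCameraViewer" }
        ]
      }
    ],
    "edges": [
      { "source": "camera/CameraGenerator/depth", "target": "viewer/DepthCameraViewer/depth_listener" }
    ]
  }
}
```

The edge is what routes the generator’s synthetic depth messages into the viewer, which then streams the rendering to Sight at localhost:3000.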