We are developing an HTML-based app in which we want to “monitor” the status of inference apps running in the background (the inference apps are used as sensors to count widgets of a certain type). Everything runs locally on the Nano - both the web server and the client (a Chromium browser in kiosk mode). We are trying to determine the best way to integrate the inference display output with the web app, and I’m looking for any guidance from this forum, given the high level of experience and knowledge here.
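For the status-monitoring side, one piece is fairly independent of the display question: the kiosk page can poll a small local HTTP endpoint for the widget counts. A minimal sketch, assuming the inference process updates a shared counter (the port, the `/status` path, and the `WIDGET_COUNT` dict are all assumptions for illustration, not part of our actual code):

```python
# Hypothetical local status endpoint the kiosk page could poll with fetch().
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

WIDGET_COUNT = {"count": 0}  # stand-in for the real inference tally


class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps(WIDGET_COUNT).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the kiosk console quiet
        pass


def serve(port=8090):
    """Start the status server on localhost in a background thread."""
    server = HTTPServer(("127.0.0.1", port), StatusHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

This only covers numeric status, of course - the harder question below is how to get the rendered video into the page.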
Our inference apps are based on the jetson-inference / Hello AI World demos, and we have no problem running them as standalone apps on the display. To integrate them into a web interface, however, we have come up with a few different approaches:
1. Use Xpra / VirtualGL to set up a second virtual display, and show its content in the browser (e.g. via Xpra’s HTML5 client). This appears to be a very heavyweight solution, as it requires a second X server, OpenGL emulation, and a whole host of supporting apps and modules; it also appears to lack EGL support.
2. Develop a custom interface using Electron (or similar) with web-rendering capability, and integrate the inference apps into that application. The potential challenge here is that we want the inference apps running constantly (regardless of monitoring status), so we need a way to simply switch the display output on or off.
3. Use an HDMI dummy plug to drive a second, GPU-rendered display, and use a VNC-to-HTML gateway such as noVNC to show that dummy display’s output in the browser.
Our initial assessment is that the first option doesn’t work, the second requires a significant custom coding effort, and the third would be the simplest but is somewhat of a hack. I’m curious whether there are any other thoughts or perspectives, or whether anyone has done something similar in the past.