I am experimenting with making devices that need to plug in, boot up on their own, and launch an app/state reliably and quickly. Take, for example, something like a smart TV or a musical instrument built on a Nano: you'd need to be able to plug it in and have it up and running an application in somewhat less than ~10 seconds.
I’m finding a few individual roadblocks to this, and I wonder if there is an example of this working well that I should start from?
Booting that quickly is rather unlikely (this is a full computer, not an embedded appliance; even the early boot stages, which exist because there is no BIOS, will consume a large chunk of that time before the Linux kernel even starts to load). You can improve boot time through a number of measures, e.g., disabling services you don’t need, setting the network to a static IP so it doesn’t wait for DHCP, and so on. If you can live with a significantly longer boot time, then the rest is possible (though it might get complicated at times).
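As a rough sketch of where to start trimming, measure first with systemd-analyze, then disable what you don’t need (the service names below are only examples — check what’s actually enabled and safe to drop on your particular image before disabling anything):

```shell
# See where boot time is going (run on the Nano itself)
systemd-analyze                  # total time, split into kernel/userspace
systemd-analyze blame            # per-service startup cost, worst first
systemd-analyze critical-chain   # the dependency chain gating boot

# Disable services you don't need (EXAMPLES ONLY -- verify each one first)
sudo systemctl disable snapd.service
sudo systemctl disable ModemManager.service

# Keep boot from stalling while the network waits on DHCP
sudo systemctl disable NetworkManager-wait-online.service
```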
The gist of the whole thing is that people often mistake Xorg for a graphical desktop, and it is not that. Xorg is an X11 server, and that server is an interface to the video framebuffer and GPU. X itself only runs one program, and in almost every case that program is either the login manager (for authenticating) or the desktop window manager (which launches all kinds of graphical programs and produces a desktop). X can just as well be told to run a single full-screen application. Study startx.
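A minimal sketch of the "X runs exactly one program" idea — the app path here is a placeholder for your own binary:

```shell
# ~/.xinitrc -- when startx brings up the X server it runs this script;
# when the script's last client exits, the whole X session ends with it.
exec /usr/local/bin/myapp --fullscreen   # placeholder for your application

# Then, from a console (e.g. an autologin getty):
#   startx -- -nocursor
```

The `-nocursor` server flag hides the mouse pointer, which is usually what you want on a kiosk-style device.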
Here is a spectrum of use cases I have in mind. I’m not necessarily looking for an in-depth analysis of these, but rather a sense of what kinds of things seem like a good use of the platform and which sound like pushing a silly rock uphill.
“App” is the modified NVIDIA sample code picking faces out of a camera feed, making a REST call to a biometrics server, and then twiddling some GPIO to actuate an external access control device.
“App” is a small Qt (that’s what the cool kids use on Linux, right?) UI that presents some buttons and invokes ttymidi to send/relay/mutate MIDI commands over USB.
“App” is a JACK Audio Connection Kit configuration that functions as a MIDI hub and VST (instrument and effects plugin) host.
“App” is Ardour. Basically I’m wondering about making devices like: https://1010music.com/product/blackbox . In the musical device industry it’s mainly embedded processors and FPGAs that are used for this kind of thing. It’s certainly a stretch to expect a whole PC to be as fast and stable, but you could also offer a LOT more on a device if it could run general-purpose software instead of being programmed to the hardware. On the other hand, the Jetson brings a scale of cycles and memory to the table that most musical devices don’t touch right now.
“App” is skinned Kodi and some ML body pose code. Say for instance you wanna make a media player that you can wave your arms at.
If you write it yourself, this shouldn’t require an X session. You can run headless and run your app as a daemon. All you will need to do is Google a bit about systemd.
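A minimal sketch of such a systemd unit — the unit name, binary path, and user are placeholders for your own app:

```ini
# /etc/systemd/system/myapp.service  ("myapp" is a placeholder)
[Unit]
Description=Headless appliance application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
User=appuser

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now myapp.service` and it will come up on every boot without any login or desktop session.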
If it’s written in Qt, it will probably require X, and therefore a slower boot; however, I know little about Qt. I use GTK when I do a GUI, but that requires X as well. If the app you want really does only support X (say, a web browser), then linuxdev’s solution of launching it while bypassing all the desktop cruft is an excellent one.
I have no experience writing a GUI from scratch or using any EGL-based frameworks, but they do exist. The linked framework apparently can even run an X session in a window, which is mostly kind of pointless but also kind of cool.
The main issue is that a Jetson Nano’s CPU isn’t particularly powerful, and if you’re going to do those specific things, they could be done just as easily on a board with a less expensive GPU (unless some of those things have CUDA support).
Edit: if you want the GPU to process audio, you could do that, I suppose, but you would likely have to write a lot of it yourself, and it’ll be very low-level code. I did find some examples by googling, however. You’ll also have to rebuild the kernel to enable optimizations for this sort of realtime application. Please see more here. And here.
Difficulty Level = Very Hard
Also: “There is a bit of overhead because it copies the stream to video ram first, then processes the audio and copies it back to main ram, but the PCI-e bus is pretty fast so it’s still overall pretty fast” so it may not run as fast as something that runs on the CPU and fits in cache.
Edit 2: BUT: if you have an algorithm that parallelizes very well, it may be possible to do it in CUDA, and in that case I suppose it might make a lot of sense to pay the tax of sending the data to the GPU. There would still be delay, though, and you’re going to have to write the lowest of low-level code to get it working without latency issues.
Booting straight into Kodi is not a problem. Kodi supports EGL if you build it with that enabled. I am not familiar with Kodi’s plugin architecture, but provided you can figure out the pose part, controlling Kodi with it shouldn’t be a problem. I would wager people have already done it, so adapting it to the Nano should be easier.
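For reference, the usual boot-straight-to-Kodi pattern is a unit that runs the `kodi-standalone` wrapper Kodi ships for running without a desktop. A sketch, assuming Kodi is installed in the normal location and a dedicated `kodi` user exists (both are assumptions to adapt to your setup):

```ini
# /etc/systemd/system/kodi.service
[Unit]
Description=Kodi standalone (no desktop session)
After=network.target sound.target

[Service]
User=kodi
ExecStart=/usr/bin/kodi-standalone
Restart=on-abort

[Install]
WantedBy=multi-user.target
```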