/dev/* resource documentation

Hi,

I’m new to the Jetson Nano & Greengrass V1.

I’m trying to attach my local resources by following this tutorial. The list below summarizes the different resources I need to attach to Greengrass (a quick existence check is sketched right after the list).

  1. /dev/nvhost-prof-gpu resource type “Device”
  2. /dev/shm resource type “Volume”
  3. /dev/tmp resource type “Volume”
  4. /dev/nvhost-vi resource type “Device”
  5. /dev/nvhost-vic resource type “Device”
  6. /dev/nvhost-nvmap resource type “Device”
  7. /dev/nvhost-dbg-gpu resource type “Device”
  8. /dev/nvhost-gpu resource type “Device”
  9. /dev/nvhost-ctrl-gpu resource type “Device”
  10. /dev/tegra-dc-ctrl resource type “Device”
  11. /dev/nvmap resource type “Device”
  12. /dev/video0 or /dev/video1 if you use a USB camera

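Before wiring these up in the console, I run a quick check on the board to see which of the nodes actually exist. This is only a minimal Python sketch; the paths simply mirror the list above and may need adjusting for your setup.

```python
#!/usr/bin/env python3
import os
import stat

# Device nodes from the list above; adjust the camera node to your setup.
DEVICE_NODES = [
    "/dev/nvhost-prof-gpu",
    "/dev/nvhost-vi",
    "/dev/nvhost-vic",
    "/dev/nvhost-nvmap",   # may be absent; on my board only /dev/nvmap shows up
    "/dev/nvhost-dbg-gpu",
    "/dev/nvhost-gpu",
    "/dev/nvhost-ctrl-gpu",
    "/dev/tegra-dc-ctrl",
    "/dev/nvmap",
    "/dev/video0",
]
VOLUMES = ["/dev/shm", "/dev/tmp"]

for path in DEVICE_NODES:
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        print(f"MISSING   {path}")
        continue
    kind = "character device" if stat.S_ISCHR(mode) else "not a character device"
    print(f"OK        {path} ({kind})")

for path in VOLUMES:
    status = "OK" if os.path.isdir(path) else "MISSING"
    print(f"{status:9} {path} (volume)")
```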
According to the Greengrass developer guide, additional file access permissions for the Lambda function are supposed to be granted automatically, and each resource should be attached with read and write access.
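For reference, this is roughly how those resources map onto the API if the group is set up with boto3 instead of the console. This is a sketch only, under the assumption that scripting is acceptable: the IDs and names are placeholders and only two of the resources are shown.

```python
import boto3

gg = boto3.client("greengrass")  # Greengrass V1 API

# Local resources: device nodes vs. volumes. AutoAddGroupOwner tells
# Greengrass to add the Linux group that owns the source path to the
# Lambda process, which is the "automatic" file access mentioned above.
resource_def = gg.create_resource_definition(
    Name="JetsonLocalResources",              # placeholder name
    InitialVersion={
        "Resources": [
            {
                "Id": "nvhost-gpu",            # placeholder ID
                "Name": "nvhost-gpu",
                "ResourceDataContainer": {
                    "LocalDeviceResourceData": {
                        "SourcePath": "/dev/nvhost-gpu",
                        "GroupOwnerSetting": {"AutoAddGroupOwner": True},
                    }
                },
            },
            {
                "Id": "shm",                   # placeholder ID
                "Name": "shm",
                "ResourceDataContainer": {
                    "LocalVolumeResourceData": {
                        "SourcePath": "/dev/shm",
                        "DestinationPath": "/dev/shm",
                        "GroupOwnerSetting": {"AutoAddGroupOwner": True},
                    }
                },
            },
        ]
    },
)

# Read/write access is then granted per Lambda through its
# ResourceAccessPolicies (Permission "rw") in the function definition.
print(resource_def["LatestVersionArn"])
```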

Issue: When I go back to my terminal and run ls in /dev, I cannot find an nvhost-nvmap device node, but there is one called nvmap.

So I was wondering if there is any documentation I can read regarding the different resources under /dev, so that I can understand what the nvhost-nvmap resource is supposed to be?

I do not have the knowledge to tell you about the specific files in your list, but each file you find in “/dev” is a “pseudo” file, not a real file on the filesystem. These nodes exist in RAM and are an interface to a driver. If the kernel driver is not there, then the file won’t exist.
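You can see this for yourself from the file type and the major/minor device numbers, e.g. with a short Python check (assuming Python 3 is on the board; the two paths are just the ones from your question):

```python
import os
import stat

# /dev entries are device nodes, not regular files: stat() reports a
# character-device type plus the major/minor numbers that identify the
# driver behind the node. If the driver isn't loaded, stat() simply fails.
for path in ("/dev/nvmap", "/dev/nvhost-nvmap"):
    try:
        st = os.stat(path)
    except FileNotFoundError:
        print(f"{path}: no such node (driver not present)")
        continue
    if stat.S_ISCHR(st.st_mode):
        major, minor = os.major(st.st_rdev), os.minor(st.st_rdev)
        print(f"{path}: character device, major={major} minor={minor}")
    else:
        print(f"{path}: exists but is not a character device")
```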

As an example, when USB detects a generic camera device and the device broadcasts what it is, the driver for that type of device recognizes it and takes ownership of the camera. The “udev” system numbers the nodes in the order the devices are detected, so the first camera detected produces “/dev/video0”, and unloading the driver deletes that file until the driver is loaded again.
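For the camera case, you can ask sysfs which driver currently owns each video node (a sketch; it assumes the standard video4linux sysfs layout):

```python
import glob
import os

# Each /dev/videoN node created by udev has a matching sysfs entry;
# its "name" file reports the device/driver that owns the node.
for dev in sorted(glob.glob("/dev/video*")):
    node = os.path.basename(dev)              # e.g. "video0"
    name_file = f"/sys/class/video4linux/{node}/name"
    try:
        with open(name_file) as f:
            owner = f.read().strip()
    except OSError:
        owner = "unknown (no sysfs entry)"
    print(f"{dev}: {owner}")
```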

Obviously the “/dev/nvhost-*” files belong to NVIDIA drivers, but I have no idea which driver provides which file. Someone else may be able to say what driver has to be installed for “nvhost-nvmap” to exist.

In some cases you’ll find that documentation from third parties assumes a desktop PC architecture, and not all device access functions will apply on a Jetson. The most common example is that a desktop PC uses PCI to talk to the video card, and NVIDIA has a program, “nvidia-smi”, which finds and manages GPU devices through the PCI bus (discrete GPUs, or dGPUs). A Jetson’s GPU is instead integrated directly with the memory controller (an iGPU), and thus there is no possibility of finding “nvidia-smi” on a Jetson. I mention this because I have no idea what the requirements are for “nvhost-nvmap” to exist; it is possible that such a device works on a Jetson, but it might also be undefined. Don’t know.
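If it helps, here is a rough heuristic for telling the two cases apart. The paths are assumptions based on typical L4T images, not something I can guarantee for your release:

```python
import os
import shutil

# Rough platform check: a desktop dGPU setup normally ships nvidia-smi,
# while a Jetson (iGPU) carries an L4T release file and a Tegra device
# tree model instead.
if shutil.which("nvidia-smi"):
    print("nvidia-smi found: likely a discrete (PCI) GPU system")
elif os.path.exists("/etc/nv_tegra_release"):
    print("L4T release file found: Jetson-style integrated GPU")
else:
    try:
        with open("/proc/device-tree/model") as f:
            print("Device tree model:", f.read().rstrip("\x00"))
    except OSError:
        print("Could not determine GPU type from these heuristics")
```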

Hi,
These are nodes for accessing hardware blocks. We don’t have a document that describes the nodes in detail. For checking if there are missing nodes, please refer to

/etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv

The file lists the nodes required for launching a Docker container. You may refer to it.
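A quick way to cross-check that file against /dev on your board (a sketch; it assumes the usual “type, path” layout of the nvidia-container-runtime CSV files):

```python
import csv
import os

CSV_PATH = "/etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv"

# Each row is "type, path" (dev, lib, sym, dir, ...). Pull out the device
# entries and report any that are missing from the running system.
missing = []
with open(CSV_PATH, newline="") as f:
    for row in csv.reader(f):
        if len(row) < 2:
            continue
        kind, path = row[0].strip(), row[1].strip()
        if kind == "dev" and not os.path.exists(path):
            missing.append(path)

if missing:
    print("Missing device nodes listed in l4t.csv:")
    for path in missing:
        print("  ", path)
else:
    print("All device nodes listed in l4t.csv are present.")
```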

The README on GitHub mentions an SD card image but does not specify which version is used. You may want to check with the author about this first.