Jetson Nano as LinuxCNC driver

Hi,

This module looks like a good candidate for LinuxCNC - well maybe. Is there guidance available on whether the Nano module’s off-the-shelf GPU driver is compatible with the preempt_rt kernel patch?

Separately, is there a C-code example available for DMA access to the GPIO registers?

I have received my developer kit from Sparkfun already, but I’m waiting for a large USB3.0 SSD.

Hi tim.strommen,

Regarding the preempt_rt patch, we haven’t tested it on Nano yet; we’re investigating whether we can support it.

Thanks for the reply kayccc, I am looking forward to the answer to that question.

Given that the Jetson Nano is advertised as a robotics development platform, it would be good to see some “real-time” support which is required for good motion control safety. Supporting the preempt_rt patch also has the side effect of making one’s code base more stable and predictable (this is the effect it had on the Linux core kernel), so it’s a good candidate to add checks for it to any driver development efforts.

Lingering unaddressed is the other question about GPIO DMA - I have not seen anything in the code examples for handling GPIO control with DMA. Practically, this is just as important for the purposes of using GPIO for motion control in the >10s of kHz range.

I am cross-linking the LinuxCNC thread that was started to track Jetson Nano effort with LinuxCNC (Link Here).

Again, thanks to you and Nvidia for the continued attention to these questions.

hello tim.strommen,

regarding sample code for accessing the GPIO,
you might also check Topic 1029697 and Topic 1030443 for reference.
thanks

Thank you for that useful reply JerryChang,

One thing I want to point out is that the example referenced above only operates in user space (application level). This is not DMA access to the GPIO registers in kernel space (custom driver level).

As I have specified, the target for this request is CNC machine control. As such, a very short path from physical IO state to code is REQUIRED, and it must be allowed to operate at a higher system interrupt level than a user application - the user applications will talk to the driver I will have to write.

As an example of what specifically I’m requesting: when hardware is set up after boot, the function of each GPIO pin is selected somewhere in a function-block mux (SPI, I2C, PWM, etc…). When a pin is set as a GPIO rather than a special function, there will be registers that define whether pull-ups and pull-downs are used, whether edge detection is used (if the hardware supports it), whether the pin is configured as a driven output, what state the output should be set to (high/low), and then a register to read the measured level of the pin (when the pin is not actively driven by the Tegra chip, it is effectively an input and the level reflects what external circuitry is doing to the pin).
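For illustration only, a hypothetical per-bank register layout of the kind I am describing (the names, widths, and ordering below are invented, not the actual Tegra register map) might look like this in C:

/* Hypothetical GPIO bank register layout - illustrative only,
 * NOT the real Tegra register map; names and ordering are invented. */
#include <stdint.h>

struct gpio_bank_regs {
        volatile uint32_t mux;   /* function select: GPIO vs SPI/I2C/PWM/... */
        volatile uint32_t pull;  /* pull-up / pull-down enable bits */
        volatile uint32_t edge;  /* edge-detect configuration */
        volatile uint32_t dir;   /* 1 = driven output, 0 = input */
        volatile uint32_t out;   /* output level when the pin is driven */
        volatile uint32_t in;    /* measured pin level (read-only) */
};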

Normally in hardware this will be a collection of memory-mapped registers. For example, with the RaspberryPi a C library is publicly available that can access these registers directly (here is a link to that author’s page discussing it). This C file can be included in other user-space applications for improved speed (>20 MHz GPIO toggle rates at driver level) and lower latency (<1 µs delay from command to execution/response in meat-space).
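As a sketch of that RaspberryPi-style approach (the base address and register offset below are placeholders, not real Tegra values, and error handling is minimal), the idea is to mmap() /dev/mem and read/write the registers directly from user space:

/* Sketch of direct register access from user space via /dev/mem.
 * GPIO_PHYS_BASE and OUT_REG_OFFSET are placeholders, not Tegra values. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO_PHYS_BASE  0x60000000u   /* placeholder physical base address */
#define OUT_REG_OFFSET  0x20u         /* placeholder output-register offset */

int main(void)
{
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        volatile uint32_t *gpio = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, GPIO_PHYS_BASE);
        if (gpio == MAP_FAILED) { perror("mmap"); return 1; }

        /* Toggle an entire (hypothetical) 32-bit output register in one store. */
        gpio[OUT_REG_OFFSET / 4] ^= 0xFFFFFFFFu;

        munmap((void *)gpio, 4096);
        close(fd);
        return 0;
}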

This low level is the same space we need to be able to access in order to suitably use the GPIO for high-speed IO required in CNC and Robotic Motion Control. Effectively we need to be able to write a driver that operates in kernel space directly on the IO (GPIO Direct Memory Access [DMA]), without passing through a sysfs abstraction that is provided in the Jetson examples.
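To be explicit about what “kernel space directly on the IO” would look like, here is a rough sketch of a kernel module that ioremap()s the controller and writes a register word directly (again, the physical base and offset are placeholders, not the real Tegra addresses):

/* Minimal kernel-module sketch of direct GPIO register access via ioremap().
 * GPIO_PHYS_BASE and OUT_REG_OFFSET are placeholders, not real Tegra values. */
#include <linux/module.h>
#include <linux/io.h>

#define GPIO_PHYS_BASE  0x60000000UL  /* placeholder */
#define OUT_REG_OFFSET  0x20          /* placeholder */

static void __iomem *gpio_base;

static int __init gpio_direct_demo_init(void)
{
        gpio_base = ioremap(GPIO_PHYS_BASE, 0x1000);
        if (!gpio_base)
                return -ENOMEM;

        /* One 32-bit store updates a whole bank's worth of output bits. */
        writel(0x000000FF, gpio_base + OUT_REG_OFFSET);
        return 0;
}

static void __exit gpio_direct_demo_exit(void)
{
        iounmap(gpio_base);
}

module_init(gpio_direct_demo_init);
module_exit(gpio_direct_demo_exit);
MODULE_LICENSE("GPL");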

Imagine, if you will, how long you want the delay to be from when you push an E-STOP button to when software is aware of it, or if a 2,000 kg gantry is moving to the end of its travel and activates a limit switch - we don’t want it to crash, as it could break things or hurt/kill people.

Is there a register programming manual available to developers for the Jetson DevKit? Usually this will have a base address register (BAR) function provided by the kernel pointing to where the memory for GPIO has been allocated/randomized (if the hardware does that), and then a register offset for each of the GPIO function registers (mux, special mode, pull-ups, pull-downs, output active, output state, measured value, etc…).

Thank you and Nvidia for your continued insight and replies.

hello tim.strommen,

FYI,
there are kernel functions that support to control GPIOs.

$TOP/kernel_src/kernel/kernel-4.4/arch/sh/include/asm/gpio.h

static inline int gpio_get_value(unsigned gpio)
{
        return __gpio_get_value(gpio);
}

static inline void gpio_set_value(unsigned gpio, int value)
{
        __gpio_set_value(gpio, value);
}
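A minimal kernel-space use of those helpers, assuming the legacy integer GPIO API and a placeholder GPIO number, would look roughly like this:

/* Minimal use of the legacy integer GPIO API from kernel code.
 * MY_GPIO is a placeholder; the real number comes from the Tegra pinmux. */
#include <linux/gpio.h>
#include <linux/printk.h>

#define MY_GPIO 216  /* placeholder GPIO number */

static int demo_gpio_setup(void)
{
        int ret = gpio_request(MY_GPIO, "demo-pin");
        if (ret)
                return ret;

        gpio_direction_output(MY_GPIO, 0);   /* configure as output, drive low */
        gpio_set_value(MY_GPIO, 1);          /* drive high */
        pr_info("pin reads back %d\n", gpio_get_value(MY_GPIO));

        gpio_free(MY_GPIO);
        return 0;
}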

please also refer to the TX2 Configuring Pinmux GPIO and PAD chapter for board customization.
thanks

Thanks for the continued replies JerryChang,

It looks like we are dancing around the answer here a bit - perhaps I have enough to move along with the GPIO issue. The function you referenced above is a kernel-space abstraction for use in user space; the function __gpio_set_value() exists where we want to have IO access, but it will only set one GPIO pin at a time. If we want to set up 8 bits of output, 3 bits of address, a strobe, and read 8 bits of input, this would require 21 separate calls to this function. With DMA, this can be done in about 4 accesses (a 5x speedup), as we are writing/reading entire DWORDs rather than masking a single bit after a read and writing back to the IO register. As a general industry best practice for driver-like code, we want to occupy the CPU for as short an interval as possible, allowing other code to run on a CPU core for as many more cycles as a programmer can get.
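To make the comparison concrete, here is a sketch of the two approaches (the register layout is hypothetical and assumes the 8 data pins occupy bits 0..7 of a single bank’s output register; the pin numbers are placeholders):

#include <linux/gpio.h>
#include <linux/io.h>
#include <linux/types.h>

/* Pin-by-pin: one kernel call per bit of an 8-bit output value. */
static void write_byte_bitwise(const unsigned gpio_pins[8], u8 value)
{
        int i;

        for (i = 0; i < 8; i++)
                gpio_set_value(gpio_pins[i], (value >> i) & 1);
}

/* Word-wide: a single read-modify-write of a (hypothetical) 32-bit
 * output register, assuming the data pins sit in bits 0..7. */
static void write_byte_wordwide(void __iomem *out_reg, u8 value)
{
        u32 word = readl(out_reg);

        word = (word & ~0xFFu) | value;
        writel(word, out_reg);
}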

Again, that may be enough for me to follow down the rabbit hole to see what source is actually being used on which registers. Tentatively that will pause my query regarding the GPIO, but I am still interested in the preempt_rt kernel patch support previously discussed in this thread.

Thanks to Nvidia and you for the continued responses.


Bumping this, as the preempt_rt question is still pending resolution.

Preempt_rt patch thread (here)

Nvidia seems to be a bit weak at announcing when they resolve customer blockers, but in the end it’s working its way through the grapevine. It seems that PREEMPT_RT is supported in the latest DevKit. See the posts starting with this one…