auvidea J120 Support?

I just got my J120 carrier board for the TX1 from auvidea:

I’m loving it so far, except that the fan is always on at full speed. I also have questions about drivers for the SSD slot, and camera compatibility…

Jurgen Stelbrink, or some other employee from auvidea – are you lurking here? It doesn’t seem that auvidea has its own support forum. There is a thread on DIY Drones about the J120, but that doesn’t seem appropriate for general support. Could we set up a subtopic here on the Nvidia TX1 support site?



  1. The fan always seems to run at full speed, and not as needed (which was rarely on the DevKit motherboard). Is there a way to make the fan operate only as required on the J120?

  2. I bought a SAMSUNG 950 PRO M.2 256GB PCI-Express 3.0 x4 Internal Solid State Drive (SSD), and hope to get it working on the J120. It doesn’t seem to work with the L4T R23.2 Release. Is there a patch, a kernel image, or any documentation on getting NVMe SSDs working on the J120?

  3. The J120 Technical Reference Manual, rev 1.3, mentions that the CSI-EF (J14) connector “has the same pinout as the CSI-2 connector on the Raspberry Pi compute module carrier board”. Does this mean that we can use the Raspberry Pi cameras with the J120 (assuming appropriate drivers)?


It would be really great if you sold a little adapter board and cable that let us re-use the camera from the TX1 dev kit.


Send him a direct email; Jurgen is very quick to reply.

Hi tonyvr,

If you like, please post your questions here; I guess they could be of general interest.

Fan: yes, on the J120 we control the fan slightly differently than on the dev kit, which makes the fan run at full speed on the J120. The dev kit has an I2C port expander (U29 - P04) which controls the fan disable signal, PS_VDD_FAN_DISABLE. When high, the fan is disabled; when low, it is enabled. On the J120 there is no I2C port expander, so this signal (FAN DISABLE) is connected to GPIO19 (pin F2 of the TX1). We need to check how the software could be reconfigured.
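In the meantime the line could be poked from user space through the sysfs GPIO interface. A sketch only: the number 19 below is the TX1 signal name, NOT necessarily the Linux sysfs GPIO number, which has to be looked up for your kernel; the script falls back to printing the intended commands when the pin is not available.

```shell
#!/bin/sh
# Sketch: control the FAN DISABLE signal (high = fan off) from user space.
# NOTE: 19 is the TX1 signal name (GPIO19, ball F2), NOT necessarily the
# Linux sysfs GPIO number -- look up the real number for your kernel first.
GPIO=19
SYSFS=/sys/class/gpio

# Try to export the pin (harmless if already exported or unavailable).
if [ -w "$SYSFS/export" ]; then
    echo "$GPIO" > "$SYSFS/export" 2>/dev/null || true
fi

if [ -d "$SYSFS/gpio$GPIO" ]; then
    echo out > "$SYSFS/gpio$GPIO/direction"
    echo 1 > "$SYSFS/gpio$GPIO/value"    # high = fan disabled
else
    # Not on the target (or not root): just show the intended commands.
    echo "echo $GPIO > $SYSFS/export"
    echo "echo out > $SYSFS/gpio$GPIO/direction"
    echo "echo 1 > $SYSFS/gpio$GPIO/value"
fi
```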

Regards, Jurgen

Some info on using the M.2 type M slot on the J120:

In order to use an M.2 type M (4x PCIe) SSD on the J120 with the Jetson TX1 module, the NVMe kernel driver needs to be compiled into the kernel. This driver is integrated into most mainstream Linux distributions, but sadly not into the L4T Ubuntu image. It would be great if NVIDIA included this driver in a future release of L4T. The driver option is called CONFIG_BLK_DEV_NVME (Device Drivers > Block devices > NVM Express block devices).
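A quick way to check whether a given kernel already has the driver, assuming the kernel exposes its configuration via /proc/config.gz (which L4T does):

```shell
#!/bin/sh
# Check whether the running kernel was built with the NVMe block driver.
# Relies on /proc/config.gz (CONFIG_IKCONFIG_PROC) being available.
if zcat /proc/config.gz 2>/dev/null | grep -q '^CONFIG_BLK_DEV_NVME=y'; then
    echo "NVMe driver is built in"
elif zcat /proc/config.gz 2>/dev/null | grep -q '^CONFIG_BLK_DEV_NVME=m'; then
    echo "NVMe driver is a module (try: sudo modprobe nvme)"
else
    echo "NVMe driver not found -- the kernel needs CONFIG_BLK_DEV_NVME"
fi
```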

If you have not compiled the kernel before, keep in mind that the kernel on the Jetson is 64-bit but the user space is 32-bit, so I would propose cross-compiling the kernel on an external host.

We used a 64-bit Ubuntu 14.04 host with the Linaro binary release 5.2 toolchains, for both arm (gnueabihf) and arm64 (aarch64-linux-gnu). You can prime the kernel config from /proc/config.gz to guarantee compatibility.
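Roughly, the recipe on the host looks like this. A sketch under assumptions: the kernel source path and the copied config.gz location are placeholders for whatever your setup uses.

```shell
#!/bin/sh
# Sketch: cross-compile the TX1 kernel on a 64-bit Ubuntu host.
# KSRC is a placeholder -- adjust to your setup, and copy /proc/config.gz
# from the Jetson next to this script first.
KSRC=$HOME/tx1-kernel

if [ -d "$KSRC" ]; then
    zcat config.gz > "$KSRC/.config"          # prime from the running config
    cd "$KSRC"
    ./scripts/config --enable BLK_DEV_NVME    # switch on the NVMe driver
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- olddefconfig
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image modules
else
    echo "kernel source not found at $KSRC (this is only a sketch)"
fi
```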

We plan to post such a kernel shortly.

Regards, Jurgen

Thanks very much for the information!

I added the questions from my direct email to the original post above.

The J120 board is a work of art! I think I’m going to have a lot of fun with mine.

Regards, tonyvr

Hi Jurgen,

I seem to have successfully rebuilt and flashed a new kernel with the NVMe driver enabled, but I still can’t see an M.2 type M (4x PCIe) SSD on the J120 (at least I see no new device using gparted).

Is there anything else I need to do to see the new SSD?


Never mind – I found it as /dev/nvme0n1, and was able to create a partition on it using fdisk:

sudo fdisk /dev/nvme0n1

and format the partition using mkfs:

sudo mkfs.ext4 /dev/nvme0n1p1

It then shows up and seems to be usable (by root). I should be able to add it to the fstab with enough flailing…
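For the fstab part, a single line should do; a sketch, assuming the same device node and an example mount point:

```
# /etc/fstab -- mount the NVMe partition at boot (mount point is an example)
/dev/nvme0n1p1  /media/ubuntu/SSD  ext4  defaults  0  2
```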

It still doesn’t show up using gparted. Why? (I’m a linux noob, obviously).


Just guessing, but a name like nvme0n1p1 is probably not picked up by general block device queries (that style of solid-state device naming is relatively new). I’m curious whether you see anything specifically naming nvme0n1p1:

lsblk /dev/nvme*
fdisk -l /dev/nvme*

This is what I get…

ubuntu@tegra-ubuntu:~$ lsblk /dev/nvme*
lsblk: /dev/nvme0: not a block device
nvme0n1     253:0    0 238.5G  0 disk 
└─nvme0n1p1 253:1    0 238.5G  0 part 
nvme0n1p1   253:1    0 238.5G  0 part


ubuntu@tegra-ubuntu:~$ sudo fdisk -l /dev/nvme*
last_lba(): I don't know how to handle files with mode 20600

Disk /dev/nvme0n1: 256.1 GB, 256060514304 bytes
234 heads, 63 sectors/track, 33924 cylinders, total 500118192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000f1415

        Device Boot      Start         End      Blocks   Id  System
/dev/nvme0n1p1            2048   500118191   250058072   83  Linux

Disk /dev/nvme0n1p1: 256.1 GB, 256059465728 bytes
255 heads, 63 sectors/track, 31130 cylinders, total 500116144 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/nvme0n1p1 doesn't contain a valid partition table


This shows the disk is being seen by Linux and has some partition structure. fdisk was not intended to work with GPT partition tables (but gdisk does), only the older BIOS/MBR type, so the disk was likely partitioned with GPT. I suppose low-level commands to interact with an NVMe disk differ from the usual SATA or SCSI interface, although I’ve never had such an SSD to test with. So does gparted show the disk if you start it by naming the disk explicitly?

sudo gparted /dev/nvme0

Hi linuxdev,

Thanks for telling me about gdisk.
Also, invoking gparted from the command line using ‘sudo gparted /dev/nvme0n1’ did indeed work!

I repartitioned the NVMe drive (adding a linux swap partition):


ubuntu@tegra-ubuntu:~$ sudo gdisk -l /dev/nvme0n1
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/nvme0n1: 500118192 sectors, 238.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4D00563F-BFAC-4838-969D-5572C14E4651
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)  End (sector)  Size       Code  Name
     1            2048     471861247  225.0 GiB  8300  Linux filesystem
     2       471861248     500118158   13.5 GiB  8200  Linux swap


Some benchmarks comparing built-in flash to NVMe disk:

ubuntu@tegra-ubuntu:/media/ubuntu/SSD$ sudo hdparm -Tt /dev/mmcblk0p1
Timing cached reads: 3638 MB in 2.00 seconds = 1820.61 MB/sec
Timing buffered disk reads: 656 MB in 3.01 seconds = 218.18 MB/sec

ubuntu@tegra-ubuntu:/media/ubuntu/SSD$ sudo hdparm -Tt /dev/nvme0n1p1
Timing cached reads: 3816 MB in 2.00 seconds = 1909.40 MB/sec
Timing buffered disk reads: 1816 MB in 3.00 seconds = 604.94 MB/sec

ubuntu@tegra-ubuntu:/media/ubuntu/SSD$ sudo dd if=/dev/zero of=~/output conv=fdatasync bs=384k count=1k; sudo rm -f ~/output
1024+0 records in
1024+0 records out
402653184 bytes (403 MB) copied, 13.94 s, 28.9 MB/s

ubuntu@tegra-ubuntu:/media/ubuntu/SSD$ sudo dd if=/dev/zero of=/media/ubuntu/SSD/output conv=fdatasync bs=384k count=1k; sudo rm -f /media/ubuntu/SSD/output
1024+0 records in
1024+0 records out
402653184 bytes (403 MB) copied, 1.40341 s, 287 MB/s


I will next try to move the filesystem to the new drive, since it seems to be much faster and roomier.

Don’t forget to leave “/boot” on eMMC…it’ll make life simpler, especially if u-boot does not understand the NVMe disk interface.
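On the u-boot based releases the rootfs device is picked in /boot/extlinux/extlinux.conf, so /boot (kernel, DTB, the config itself) can stay on eMMC while only root= points at the SSD. A sketch; the exact APPEND arguments depend on your L4T release, so change only the root= part of the line your release shipped with:

```
# /boot/extlinux/extlinux.conf (sketch -- keep everything else as shipped)
LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      APPEND fbcon=map:0 console=tty0 root=/dev/nvme0n1p1 rw rootwait
```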

Long Time Lurker.

I ordered a J100 module as well as the J120; I’ve been impressed with them so far. I’m excited to integrate them into my projects.


Do the Pwr Rst Rec buttons work on your board? I’m afraid mine are not working…

Yes, mvancleave, the buttons on both my J120s seem to work OK. I was able to use them to reflash the module. I do have to remove power to the board to start it back up after a shutdown, but I think that is by design: it auto-starts upon power-up, behaving like an embedded board rather than a PC.

Hi Jurgen,

The new L4T 24.2 for Jetson TX1 in JetPack 2.3 seems to work very well (24.1 had a bad memory leak). Are you planning to release a kernel for it that supports the J120? Better yet, the source for the kernel mods required? The only thing you have posted is a binary kernel for L4T 23.2.

Also, any progress on getting the fan to work properly with the J120? It is always on and drives me crazy.

Thanks for making the best carrier boards for the TX1!


Dear All,

A short update from my side:

I’ve got an Intel 600p 128GB SSD up and running on the J120 board. I had to build a custom kernel with CONFIG_BLK_DEV_NVME=y and CONFIG_PCIEASPM=n.
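For reference, that corresponds to these lines in the generated .config (a disabled bool shows up as an “is not set” comment rather than “=n”):

```
CONFIG_BLK_DEV_NVME=y
# CONFIG_PCIEASPM is not set
```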

Best Regards,

Another update with similar results to Michael:

Hardware: J120 + Samsung 960 EVO 1TB M.2 NVMe
Software: JetPack 2.3.1

Install the buildJetsonTX1Kernel scripts (from
Use the GUI to enable NVMe support (under Block devices) [CONFIG_BLK_DEV_NVME=y]. I did not have to change CONFIG_PCIEASPM with my configuration.

Rebuild and copy the kernel.

Works great.


Hi Jurgen,
I agree with tonyvr, please publish sources if possible or provide timely binaries for latest Jetpack releases.


Hi guys, may I know which version of JetPack you are using? I am using JetPack 3.0 and the fan doesn’t seem to work: it is not turning when I boot up. Isn’t the fan on by default? I am using the Auvidea J120 carrier board as well.

Thank you.

To get the J120 working properly you will need the firmware for it. Unfortunately Auvidea only has firmware for a very old JetPack release. I tried to get some support from them, but no luck so far.
For JetPack 3 you will need to recompile the kernel. I am working on the same thing. I got the IMU working; NVMe support for the M.2 SSD is still flaky, and I am getting I/O buffer errors.
As for the fan, it looks like it is basically constantly “on” with the J120 (see earlier posts from Auvidea). I need to figure out how to configure the GPIO as an output defaulting to low in the device tree.
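For the device-tree route, the usual mechanism on kernels that support it is a GPIO hog (older 3.10-era L4T kernels may lack it, in which case an init script writing the sysfs GPIO is the fallback). A sketch only; the controller label and the gpios cell are placeholders that must be matched against the real pinmux entry for GPIO19 / ball F2:

```
/* Sketch: hog the FAN DISABLE line low at boot (low = fan enabled).
 * TEGRA_GPIO(E, 4) is a placeholder -- look up the real bank/offset
 * for TX1 ball F2 before using this. */
&gpio {
        fan_disable_hog: fan-disable-hog {
                gpio-hog;
                gpios = <TEGRA_GPIO(E, 4) GPIO_ACTIVE_HIGH>; /* placeholder */
                output-low;
        };
};
```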

More details here: