Upgrade JP3.1 to 3.2

Hi all,

I have several systems deployed at clients.
We are thinking of upgrading their solution from JetPack 3.1 to JetPack 3.2 in order to have TensorRT 3 and CUDA 9.
As I understand from another post, we can't just drop in TensorRT 3 and CUDA 9 on their own, because CUDA 9 requires a newer GPU driver from you; that's why it won't work on JP3.1.

Is there a way for us to upgrade the GPU driver, or even the whole software image, so that these systems become JP3.2 compliant?

Thank you,
Regards,
François

I don’t believe you can just directly copy something over. What I would suggest is you get a 3.1 version working the way you like, and clone it as a reference copy (the mmcblk0p1 rootfs “APP” partition). Then create another Jetson with 3.2dp on it (one with packages updated and “sha1sum -c /etc/nv_tegra_release” passing). Clone this. Loopback mount both read-only.
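
Roughly, the cloning step looks like this (I’m assuming a TX2 and the R28.x flash.sh on the host PC…adjust the board name for your hardware, and the image names are just placeholders):

# Jetson in recovery mode, run from the Linux_for_Tegra directory on the host:
sudo ./flash.sh -r -k APP -G ref_jp31.img jetson-tx2 mmcblk0p1
# ...and the same again for the 3.2dp unit, e.g.:
sudo ./flash.sh -r -k APP -G ref_jp32.img jetson-tx2 mmcblk0p1
# Each clone produces a sparse "ref_*.img" plus a full-size "ref_*.img.raw";
# the ".raw" file is the one you can loopback mount and examine.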

Don’t modify your reference clones. Take a copy of the 3.2dp clone and loopback mount it read-write (this already takes up about 90GB of space for the three images). Experiment by looking at the diff between the two read-only copies, and migrate one item at a time into the read-write clone. When you have what you think is valid, re-use it as the system.img (in other words, flash using the read-write clone).
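
As an illustration of the mount and diff step (directory and file names are only examples; the “.raw” images come from the clones above):

sudo mkdir -p /mnt/jp31_ro /mnt/jp32_ro /mnt/jp32_rw
# The two reference clones stay read-only:
sudo mount -o loop,ro ref_jp31.img.raw /mnt/jp31_ro
sudo mount -o loop,ro ref_jp32.img.raw /mnt/jp32_ro
# Work on a separate copy of the 3.2dp clone, mounted read-write:
cp ref_jp32.img.raw work_jp32.img.raw
sudo mount -o loop,rw work_jp32.img.raw /mnt/jp32_rw
# Example comparison, here the NVIDIA driver directory:
diff -rq /mnt/jp31_ro/usr/lib/aarch64-linux-gnu/tegra /mnt/jp32_ro/usr/lib/aarch64-linux-gnu/tegra
# ...then migrate your own files into /mnt/jp32_rw one item at a time.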

In terms of what to diff and how…you can run “ldd” on any binary application on a running 3.1 system to see what it links against. If a linked library is in the list from “/etc/nv_tegra_release”, then you know the application depends upon NVIDIA-specific drivers. If you compiled the application, then you will need to recompile it against the 3.2dp release (anything you compiled against something in 3.1 will need to be recompiled…it’s the ones which link against NVIDIA-specific files which will need help migrating).
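
A small helper along these lines can flag which of your binaries touch the NVIDIA-specific files (the script itself is just an illustration, and the exact line format of “/etc/nv_tegra_release” may vary between releases):

#!/bin/bash
# Usage: ./check_nv_links.sh /path/to/your/application
app="$1"
# File names listed in /etc/nv_tegra_release (skip the "# R28 ..." header line):
nv_files=$(grep -v '^#' /etc/nv_tegra_release | awk '{print $NF}' | sed 's/^\*//' | xargs -n1 basename | sort -u)
# Libraries the application actually links against:
linked=$(ldd "$app" | awk '{print $1}' | xargs -n1 basename | sort -u)
# Anything appearing in both lists depends on the NVIDIA-specific drivers:
comm -12 <(echo "$nv_files") <(echo "$linked")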

Note that a loopback mounted clone can work as a sysroot when cross-compiling from a PC. In many cases you can just natively compile on the Jetson itself. The trick is to have a way to methodically go through the programs needing migration and rebuild them in a known order. I don’t know what programs you are going to run into so I have no way to predict anything other than the files related to “/etc/nv_tegra_release”.
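
If you do cross-compile, pointing the toolchain at the loopback mounted clone looks roughly like this (the toolchain prefix, mount point, and example.c are just assumptions for the sketch):

# On the host PC, using an aarch64 cross toolchain such as the Linaro one:
aarch64-linux-gnu-gcc --sysroot=/mnt/jp32_rw example.c -o example
# If the program links against NVIDIA-specific libraries you will likely also
# need to point the linker at the tegra directory inside the clone, e.g.:
#   -L/mnt/jp32_rw/usr/lib/aarch64-linux-gnu/tegra \
#   -Wl,-rpath-link,/mnt/jp32_rw/usr/lib/aarch64-linux-gnu/tegra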

If you do decide to do this, you will probably want to fix a bug in the flash.sh script. Within this script, look for the function “build_fsimg”. Near line 466 you’ll want to edit it like this so it properly handles loop devices when more than one loop device is open (original lines are commented out):

463 build_fsimg ()
464 {
465         echo "Making $1... ";
466 #       local loop_dev="${LOOPDEV:-/dev/loop0}";
467         local loop_dev="$(losetup --find)";
468         if [ ! -b "${loop_dev}" ]; then
469                 echo "${loop_dev} is not block device. Terminating..";
470                 exit 1;
471         fi;
472 #       loop_dev=`losetup --find`;
473         if [ $? -ne 0 ]; then

With the above edit you can clone/restore/flash, and it will work correctly even when loop devices have been opening and closing.
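
For the final flash, the usual pattern is to put the edited clone where flash.sh expects system.img and pass “-r” so it is reused instead of regenerated (board and file names again assumed):

sudo umount /mnt/jp32_rw
# The raw clone goes in place of the generated system.img:
sudo cp work_jp32.img.raw bootloader/system.img
# "-r" tells flash.sh to reuse the existing system.img rather than build a new one:
sudo ./flash.sh -r jetson-tx2 mmcblk0p1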

Actually, I'm not sure I need to do all that.
The problem is just the GPU driver, as I understood it.
So if NVIDIA has it, maybe we could just patch 3.1 to get a 3.2-compliant system.

That may be possible. The GPU driver is tied to the Xorg Video ABI, and it looks to be the same in both releases. You can verify:

cat /var/log/Xorg.0.log | gawk '/Module ABI/,/Video/'

Since the two share the same ABI you are probably in luck. I don’t know what other files are linked in though…remember that sometimes a library is linked to another library…there may be a chain of dependencies before the kernel system calls are ever reached. I’d suggest trying it (just not on a production system).
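
If you want to experiment with that, one way (a sketch only…do it on a copy of the 3.1 clone and a spare unit, never production) is to overlay the R28.2 NVIDIA binaries onto the 3.1 rootfs image with the driver package’s apply_binaries.sh and test-flash the result (I’m assuming here that your copy of apply_binaries.sh accepts a rootfs path via “-r”, so verify that first):

# From the R28.2 (JetPack 3.2dp) driver package directory on the host:
cp ref_jp31.img.raw work_jp31.img.raw
sudo mkdir -p /mnt/jp31_rw
sudo mount -o loop,rw work_jp31.img.raw /mnt/jp31_rw
# apply_binaries.sh installs the files listed in /etc/nv_tegra_release
# (GPU driver included) into the given rootfs:
sudo ./apply_binaries.sh -r /mnt/jp31_rw
sudo umount /mnt/jp31_rw
# Then flash work_jp31.img.raw to a spare unit via the system.img/-r method
# shown earlier and check whether CUDA 9 and TensorRT 3 behave.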