That location tends to come from a non-Ubuntu desktop install, i.e., a desktop CUDA install done via a “.run” script rather than a package manager. As an example, I use Fedora, where CUDA lags behind the current release…CUDA is packaged directly only up to Fedora 21, while Fedora 23 has been out for a while. The run script installs CUDA 7.0 to “/usr/local/cuda-7.0” and CUDA 7.5 to “/usr/local/cuda-7.5”; whichever version is in use, the script then adds a symbolic link “/usr/local/cuda” pointing at either the 7.0 or the 7.5 directory.
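As a sketch of that .run-script layout, the following recreates it in a scratch directory (paths under a temp root rather than the real “/usr/local”, so it is safe to run anywhere):

```shell
# Recreate the desktop layout: versioned install dirs plus an
# unversioned "cuda" symlink selecting the active version.
root=$(mktemp -d)
mkdir -p "$root/usr/local/cuda-7.0" "$root/usr/local/cuda-7.5"
# The .run script points the unversioned path at the installed version:
ln -sfn cuda-7.5 "$root/usr/local/cuda"
ls -l "$root/usr/local"
```

Software built against “/usr/local/cuda” then follows the symlink to whichever versioned directory is active.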
If your third-party software expects that directory, I suspect it was designed for desktop x86_64 and not ARM. That doesn’t guarantee you couldn’t get it working, but it does mean installation and compile won’t be as simple as they should be (and failure is still possible for other reasons). One important complication that could stop the build: although CUDA on the JTX1 is currently 64-bit, the user space is 32-bit (this will be 64-bit in the next L4T release). If the software you are working with is ok with 64-bit libs in a 32-bit link environment, you can proceed.
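One way to check which side of the 32/64-bit split a given library or binary is on is to inspect its ELF header (byte 5, EI_CLASS, is 01 for 32-bit and 02 for 64-bit). A small sketch, shown here against the system shell since the CUDA library path on your board is an assumption (something like “/usr/local/cuda/lib64/libcudart.so” on desktop installs):

```shell
# Print the ELF class (32- vs 64-bit) of a file by reading EI_CLASS,
# byte 5 of the ELF header: 01 = 32-bit, 02 = 64-bit.
elf_class() {
    case "$(od -An -tx1 -j4 -N1 "$1" | tr -d ' ')" in
        01) echo "32-bit" ;;
        02) echo "64-bit" ;;
        *)  echo "not an ELF file" ;;
    esac
}

# Example: check the system shell; point this at a CUDA lib on the JTX1.
elf_class "$(command -v sh)"
```

Comparing the class of the CUDA libraries against the class of your own objects tells you whether the 32-bit link environment issue applies.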
The proper way to fix the compile (assuming 32-bit linking won’t hurt) would likely be to first check whether the software has a configure-type script that can name an alternate location for the CUDA install. Barring this, perhaps a symbolic link “cuda” could be added in “/usr/local” pointing at “/usr/lib/arm-linux-gnueabihf”, but the odds of that working are low.
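A sketch of that symlink workaround, recreated under a scratch root so it can be run without touching the real system (on the actual board the target would be “/usr/lib/arm-linux-gnueabihf”, the link would go in “/usr/local”, and you would need sudo):

```shell
# Hypothetical workaround (low odds of success, as noted): make
# /usr/local/cuda resolve to the directory where L4T keeps its libs.
root=$(mktemp -d)
mkdir -p "$root/usr/lib/arm-linux-gnueabihf" "$root/usr/local"
# Relative link: from /usr/local, ../lib/arm-linux-gnueabihf
# resolves to /usr/lib/arm-linux-gnueabihf.
ln -s ../lib/arm-linux-gnueabihf "$root/usr/local/cuda"
readlink -f "$root/usr/local/cuda"
```

The reason the odds are low is that software expecting a desktop CUDA tree usually also expects its internal layout (lib64/, include/, bin/), which this flat library directory does not provide.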
If you can provide more information about the software you are compiling, a more specific answer might be possible.