NeRF

Will NeRF run on AGX Orin?
https://developer.nvidia.com/blog/googles-new-ai-model-improves-3d-image-synthesis-of-outdoor-scenes/

I don’t see a plan for it yet.

@kayccc, on JetPack 5.0 DP:

 python3 run_nerf.py --config config_fern.txt
2022-05-18 11:20:25.682825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Loaded image data (378, 504, 3, 20) [378.         504.         407.56579161]
Loaded ./data/nerf_llff_data/fern 16.985296178676084 80.00209740336334
recentered (3, 5)
[[ 1.0000000e+00  1.8605668e-10  7.4923900e-10  0.0000000e+00]
 [-1.8605668e-10  1.0000000e+00  7.4923900e-10 -4.4703485e-09]
 [-7.4923900e-10 -7.4923900e-10  1.0000000e+00 -1.8626452e-10]]
Data:
(20, 3, 5) (20, 378, 504, 3) (20, 2)
HOLDOUT view is 12
Loaded llff (20, 378, 504, 3) (120, 3, 5) [378.     504.     407.5658] ./data/nerf_llff_data/fern
Auto LLFF holdout, 8
DEFINING BOUNDS
NEAR FAR 0.0 1.0
2022-05-18 11:20:29.468525: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2022-05-18 11:20:29.478626: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2022-05-18 11:20:29.478845: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.377
pciBusID: 0000:00:00.0
2022-05-18 11:20:29.478925: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-05-18 11:20:29.487971: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-05-18 11:20:29.614174: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-05-18 11:20:29.615073: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-05-18 11:20:29.616200: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2022-05-18 11:20:29.617907: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-05-18 11:20:29.618517: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-05-18 11:20:29.618745: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2022-05-18 11:20:29.619006: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2022-05-18 11:20:29.619099: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0
2022-05-18 11:20:29.663611: I tensorflow/core/platform/profile_utils/cpu_utils.cc:109] CPU Frequency: 31250000 Hz
2022-05-18 11:20:29.664523: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x194eb70 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-05-18 11:20:29.664573: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2022-05-18 11:20:29.747584: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2022-05-18 11:20:29.747996: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3213fd0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2022-05-18 11:20:29.748035: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
2022-05-18 11:20:29.748370: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2022-05-18 11:20:29.748487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.377
pciBusID: 0000:00:00.0
2022-05-18 11:20:29.748552: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-05-18 11:20:29.748611: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-05-18 11:20:29.748655: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-05-18 11:20:29.748697: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-05-18 11:20:29.748738: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2022-05-18 11:20:29.748802: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-05-18 11:20:29.748848: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-05-18 11:20:29.748983: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2022-05-18 11:20:29.749165: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2022-05-18 11:20:29.749250: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0
2022-05-18 11:20:29.749374: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-05-18 11:20:30.855562: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-05-18 11:20:30.855661: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212]      0 
2022-05-18 11:20:30.855696: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0:   N 
2022-05-18 11:20:30.856097: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2022-05-18 11:20:30.856824: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2022-05-18 11:20:30.857274: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2022-05-18 11:20:30.857611: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8624 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
MODEL 63 27 <class 'int'> <class 'int'> True
(?, 90) (?, 63) (?, 27)
MODEL 63 27 <class 'int'> <class 'int'> True
(?, 90) (?, 63) (?, 27)
Found ckpts []
get rays
done, concats
shuffle rays
done
Begin
TRAIN views are [ 1  2  3  4  5  6  7  9 10 11 12 13 14 15 17 18 19]
TEST views are [ 0  8 16]
VAL views are [ 0  8 16]
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

2022-05-18 11:20:39.680420: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
WARNING:tensorflow:From /home/nvidia/nerf/run_nerf_helpers.py:14: The name tf.log is deprecated. Please use tf.math.log instead.

/home/nvidia/.local/lib/python3.8/site-packages/numpy/lib/npyio.py:518: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  arr = np.asanyarray(arr)
saved weights at ./logs/fern_test/model_000000.npy
saved weights at ./logs/fern_test/model_fine_000000.npy
saved weights at ./logs/fern_test/optimizer_000000.npy
fern_test 0 10.223166 0.18372275 0
iter time 3.05752
fern_test 1 11.544757 0.14077903 1
iter time 0.85160
fern_test 2 11.822284 0.12979306 2
iter time 0.84847
fern_test 3 11.948731 0.12532637 3
iter time 0.84748
fern_test 4 11.892645 0.12746276 4
iter time 0.84787
fern_test 5 12.786126 0.10450999 5
iter time 0.84408
fern_test 6 12.833903 0.10381667 6
iter time 0.84370
fern_test 7 13.050461 0.09953837 7
iter time 0.84286
fern_test 8 13.325475 0.09237677 8
iter time 0.84402
fern_test 9 12.932078 0.09938743 9
iter time 0.84515
fern_test 100 16.233994 0.046855204 100
iter time 0.83321
fern_test 200 17.684744 0.034361884 200
iter time 0.83071
fern_test 300 18.10268 0.030678026 300
iter time 0.82718
fern_test 400 18.578419 0.027066227 400
iter time 0.82932
fern_test 500 19.14442 0.023837132 500
iter time 0.82705

For this issue, you may need to open a report at Issues · bmild/nerf · GitHub

@kayccc
There seem to be no issues; the process appears to be running, but it will take long hours to finish, maybe ~20 hrs, until 20,000 iterations.
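
As a rough sanity check on that estimate, here is a minimal arithmetic sketch in Python. The per-iteration time is the steady-state ~0.85 s from the log above; the iteration count is an assumption to adjust to your config, and periodic test renders, video exports, and checkpoint saves are not included, so wall-clock time runs longer:

# Back-of-the-envelope training ETA from the logged "iter time" values.
sec_per_iter = 0.85    # steady-state iteration time from the log above
n_iters = 20_000       # assumed target iteration count; adjust to your config
print(f"pure training time: ~{sec_per_iter * n_iters / 3600:.1f} h")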

Hello, I would like to know whether I can use NeRF to 3D-print a bust, and also which graphics card is best for working with NeRF.
Thank you

It took a while on the AGX to get the output shown above.

On a desktop PC with a discrete GPU it would very likely be faster.

However, Google’s NeRF at some point shifted to JAX [making use of Google Tensor Processing Units (TPUs)], where it reportedly performs up to 20x faster than on an NVIDIA GPU: https://github.com/google-research/google-research/tree/master/jaxnerf

Moreover, NVIDIA rolled out its own NeRF.

@kayccc, will NVIDIA’s NeRF run on Jetson the way Google’s NeRF does?
No luck here: the NVIDIA NeRF build fails at 37% (“Built target glfw_objects”).

cmake --build build --config RelWithDebInfo -j8
[  1%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/common.cu.o
[  3%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/context.c.o
[  5%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/common_device.cu.o
[  7%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/monitor.c.o
[  9%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/input.c.o
[ 11%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_mlp.cu.o
[ 12%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cpp_api.cu.o
[ 14%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/init.c.o
gcc: error: unrecognized command line option ‘-mf16c’
gcc: error: unrecognized command line option ‘-mf16c’
gcc: error: unrecognized command line option ‘-mf16c’
gcc: error: unrecognized command line option ‘-mf16c’
make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:90: dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/common_device.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:76: dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/common.cu.o] Error 1
make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:104: dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cpp_api.cu.o] Error 1
make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:118: dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_mlp.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:206: dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 18%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/vulkan.c.o
[ 18%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/window.c.o
[ 20%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/x11_init.c.o
[ 22%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/x11_monitor.c.o
[ 24%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/x11_window.c.o
[ 25%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/xkb_unicode.c.o
[ 27%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/posix_time.c.o
[ 29%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/posix_thread.c.o
[ 31%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/glx_context.c.o
[ 33%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/egl_context.c.o
[ 35%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/osmesa_context.c.o
[ 37%] Building C object dependencies/glfw/src/CMakeFiles/glfw_objects.dir/linux_joystick.c.o
[ 37%] Built target glfw_objects
make: *** [Makefile:91: all] Error 2
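
For context: -mf16c enables the x86 F16C half-precision instructions, a flag the aarch64 gcc on Jetson does not recognize, which is why the tiny-cuda-nn objects fail to compile while the plain glfw ones build fine. A possible workaround, sketched below with an illustrative variable name rather than the exact upstream code, is to guard that flag in tiny-cuda-nn’s CMakeLists.txt so it is only passed on x86 hosts:

# Hypothetical guard in dependencies/tiny-cuda-nn/CMakeLists.txt:
# only forward the x86-only F16C flag to the host compiler on x86 builds.
if (CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|AMD64|i.86")
    list(APPEND CUDA_NVCC_FLAGS "-Xcompiler=-mf16c")
endif()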

It seems users of NVIDIA’s own software are out of luck again, while the Google solution just works.

As I said previously, I didn’t see a plan for this in Jetson development; you will need to open a question at Issues · bmild/nerf · GitHub

@kayccc
Hi, how are you?
As I pointed out, the Google TensorFlow NeRF [bmild] does run on Jetson without issues.
However, the NeRF that doesn’t run on Jetson [nor does it run on a server with an A100, for that matter] is the NVlabs NeRF, which has a different GitHub repository URL than the one you shared in the post above.
So opening an issue at the Google NeRF [bmild] GitHub repository about running their implementation on Jetson doesn’t make sense, as it already works.
It would make more sense to open an issue at the NVlabs NeRF [instant NeRF] repository, GitHub - NVlabs/instant-ngp: Instant neural graphics primitives: lightning fast NeRF and more, as it won’t run on Jetson.
But folks have already opened a few issues about this at the instant-ngp repo, and none of them has received a response from the maintainers.

You’re right, I mixed up the two GitHub repositories.
The issue should be posted at GitHub - NVlabs/instant-ngp: Instant neural graphics primitives: lightning fast NeRF and more

From the Jetson side, I didn’t see a plan.
