Getting nvc++ on the Jetson

Just wondering what the best way is to get nvc++ on the Jetson. I know JetPack already has nvcc, but I don’t see nvc++.

Should I install the HPC SDK on the Jetson, or will that break things?
https://docs.nvidia.com/hpc-sdk//index.html

Regards,
Paul

Hi,

nvc++ is part of the HPC SDK.
The HPC SDK doesn’t support Jetson devices.

Thanks.

Hello AastaLLL,

Thanks for the response.

I have already started down this road because I took the NVIDIA course “Scaling GPU-Accelerated Application with the C++ standard library” and would like to do the same on the Jetson. The course compiles with the nvc++ compiler using the -stdpar=gpu option, which doesn’t seem to be available with nvcc.
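For reference, the course code is just ISO C++ parallel algorithms. A minimal sketch of the kind of program I’d like to build on the Jetson (my own toy example, not from the course; the file name saxpy.cpp is only for illustration):

// saxpy.cpp
// GPU offload build:    nvc++ -stdpar=gpu -O3 -o saxpy saxpy.cpp
// Multicore CPU build:  nvc++ -stdpar=multicore -O3 -o saxpy saxpy.cpp
#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 3.0f;

    // With -stdpar=gpu, nvc++ maps this parallel algorithm onto the GPU.
    std::transform(std::execution::par_unseq, x.begin(), x.end(), y.begin(),
                   y.begin(), [a](float xi, float yi) { return a * xi + yi; });

    std::cout << "y[0] = " << y[0] << std::endl; // expect 5
    return 0;
}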

I had a look at the HPC SDK and noticed that the Jetson meets the minimum requirements (see below), so I proceeded to pull the Docker image and started setting up. So far it looks good, but I’m not that far into it.
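For reference, this is roughly how I’m starting the container (the image is the HPC SDK container on NGC; the tag below is only an example of the multiarch tags listed there, and on Jetson I’m assuming the NVIDIA container runtime is used instead of the --gpus option):

docker run --rm -it --runtime nvidia nvcr.io/nvidia/nvhpc:23.5-devel-cuda_multi-ubuntu20.04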

Now you are letting me know that this approach will not work?

Thanks,
Paul

"Before running the NVIDIA HPC SDK NGC container, please ensure that your system meets the following requirements.

  • Pascal (sm60), Volta (sm70), Turing (sm75), Ampere (sm80), or Hopper (sm90) NVIDIA GPU(s)
  • CUDA driver version >= 440.33 for CUDA 10.2
  • Docker 19.03 or later which includes support for the --gpus option, or Singularity version 3.4.1 or later
  • For older Docker versions, use nvidia-docker >= 2.0.3
    .
    .
    .
    Multiarch containers for Arm (aarch64) and x86_64 are available for select tags starting with version 21.7."

Some more developments:

The Docker container mounts, but when I run the example, MPI complains about problems binding memory. I looked at hwloc and found membind:set_proc_membind = 0.

When I exit Docker and run hwloc on the host, I also get membind:set_proc_membind = 0.

Is there any way to alleviate this and allow for binding?

Regards,
Paul

mpirun -np 2 --allow-run-as-root --mca orte_base_help_aggregate 0 ./select 100 100 10

WARNING: Open MPI tried to bind a process but failed. This is a
warning only; your job will continue, though performance may
be degraded.

Local host: node0
Application name: ./select
Error message: failed to bind memory
Location: …/…/…/…/…/orte/mca/rtc/hwloc/rtc_hwloc.c:447



WARNING: Open MPI tried to bind a process but failed. This is a
warning only; your job will continue, though performance may
be degraded.

Local host: node0
Application name: ./select
Error message: failed to bind memory
Location: …/…/…/…/…/orte/mca/rtc/hwloc/rtc_hwloc.c:447


E(t=0) = 0.00196
Rank 0: local domain 100x100 (0.00016 GB): 0.0241555 GB/s
All ranks: global domain 200x100 (0.00032 GB): 0.048311 GB/s

I had a look at the Docker container’s hwloc support.

root@node0:/home/mpiuser/cloud# hwloc-info --support
No protocol specified
discovery:pu = 1
discovery:disallowed_pu = 1
discovery:numa = 0
discovery:numa_memory = 0
discovery:disallowed_numa = 0
discovery:cpukind_efficiency = 1
cpubind:set_thisproc_cpubind = 1
cpubind:get_thisproc_cpubind = 1
cpubind:set_proc_cpubind = 1
cpubind:get_proc_cpubind = 1
cpubind:set_thisthread_cpubind = 1
cpubind:get_thisthread_cpubind = 1
cpubind:set_thread_cpubind = 1
cpubind:get_thread_cpubind = 1
cpubind:get_thisproc_last_cpu_location = 1
cpubind:get_proc_last_cpu_location = 1
cpubind:get_thisthread_last_cpu_location = 1
membind:set_thisproc_membind = 0
membind:get_thisproc_membind = 0
membind:set_proc_membind = 0
membind:get_proc_membind = 0
membind:set_thisthread_membind = 1
membind:get_thisthread_membind = 1
membind:set_area_membind = 1
membind:get_area_membind = 1
membind:alloc_membind = 1
membind:firsttouch_membind = 1
membind:bind_membind = 1
membind:interleave_membind = 1
membind:nexttouch_membind = 0
membind:migrate_membind = 1
membind:get_area_memlocation = 1
misc:imported_support = 0
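Given that set_proc_membind = 0 both inside and outside the container, one workaround I may try (my own assumption, using standard Open MPI options; it only skips the bind attempt rather than restoring real memory binding) is to tell Open MPI not to bind at all:

mpirun -np 2 --allow-run-as-root --bind-to none --mca orte_base_help_aggregate 0 ./select 100 100 10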

Hi,

You can find some discussion below.
We don’t support the HPC SDK on Jetson, so the issues and limitations are not tracked.

Thanks.

Hi AastaLLL,

Thank you for the response. It looks like this may not be the way to go. Ultimately, I would like to use standard C++20 with a focus on parallelism on both CPUs and GPUs, as in the course. g++ works for CPUs only, and I found that nvc++ works with both CPUs and GPUs.

When do you think the Jetson family will be able to support ISO C++20 and beyond?

Regards,
Paul

Hi,

You can give it a try.

It looks like most of the C++20 features are supported in g++ 10.
On Jetson, it’s possible to upgrade the g++ version via apt:

https://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html#status.iso.2020
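For example, something like the following should work (assuming g++-10 is available from your apt repositories; older JetPack releases based on Ubuntu 18.04 may need the ubuntu-toolchain-r PPA, and the CPU parallel algorithms in libstdc++ also need TBB):

sudo apt update
sudo apt install g++-10 libtbb-dev
g++-10 -std=c++20 -O3 -o saxpy saxpy.cpp -ltbb    # CPU-only build of a parallel-algorithms example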

Thanks.
