Add nvcc flags in Omniverse C++ extension (add_cuda_build_support)

Hello everyone!

I am following this guide to enable CUDA through the add_cuda_build_support function in the premake5.lua file.

How can I specify flags for the nvcc compiler?

I am not sure about this. What exactly are you trying to do? I mean overall. Some kind of CUDA Kit app?

I am developing a C++ OmniGraph node to simulate deformations based on a model we are working on. The node would take mesh information (vertices, triangles, etc.) from the “Read Prim” node, along with the transforms. Below is a picture of the node…


I also uploaded the .ogn file in case it is any help:
OgnBeamElementsNodeOgn.zip (1.2 KB)

The model works by monitoring collisions between the deformable mesh and a specified obstacle.

I am using CUDA to evaluate the contact on the meshes' intersecting faces, but I do not know how to set nvcc flags in premake5.lua. I tried to set them through buildoptions{} with no effect (I made intentional typos in the flags to see if the compiler would produce errors, but they were ignored). Below is the premake script file I am using.

-- Setup the basic extension information.
local ext = get_current_extension_info()
project_ext(ext)


-- --------------------------------------------------------------------------------------------------------------
-- Helper variable containing standard configuration information for projects containing OGN files.
local ogn = get_ogn_project_information(ext, "linear_deformables")

-- --------------------------------------------------------------------------------------------------------------
-- Link folders that should be packaged with the extension.
repo_build.prebuild_link {
    { "data", ext.target_dir.."/data" },
    { "docs", ext.target_dir.."/docs" },
}
-- --------------------------------------------------------------------------------------------------------------
-- Copy the __init__.py to allow building of a non-linked ogn/ import directory.
-- In a mixed extension this would be part of a separate Python-based project but since here it is just the one
-- file it can be copied directly with no build dependencies.
repo_build.prebuild_copy {
    { "linear_deformables/__init__.py", ogn.python_target_path }
}


-- --------------------------------------------------------------------------------------------------------------
-- Breaking this out as a separate project ensures the .ogn files are processed before their results are needed.
project_ext_ogn( ext, ogn )
-- --------------------------------------------------------------------------------------------------------------
-- Build the C++ plugin that will be loaded by the extension.
project_ext_plugin(ext, ogn.plugin_project)
    
    filter {"files:**.cu"}
        buildoptions { "--use_fat_math", "--maxregcount=32" }
    filter {}  -- reset filter
    add_cuda_build_support()
  
    add_files("source", "cpp_files/extension_management")
    add_files("nodes", "cpp_files/omnigraph_nodes")
    add_files("solver", "cpp_files/solver")

    add_ogn_dependencies(ogn)
    
    links {"yaml-cpp", "mpfr", "gmp"}
    
    cppdialect "C++17"

It sounds pretty advanced, but I am curious why you are not using our built-in advanced physics and our new WARP to do this simulation work.

Also, are you trying to do all this simulation directly in Kit, or are you importing simulation data from other sources?

Hello Richard
Thanks for your reply. We started working with the built-in deformable model almost 2 years ago…
The object is a plastic beam, around 15 cm in length and around 6 mm in diameter… which was supposed to be manipulated by a robot arm (e.g. a UR5)…

We didn’t get the rigidity we wanted from the built-in model and also consulted on the forum here. I remember there was a problem of convergence for high-stiffness objects… so we developed a beam-elements-based model and imported it into Omniverse as a Kit extension… we got some results, but it is still under development.

I am not aware of the WARP tool… do you think it would be helpful for this kind of task? I will check it out.

Thanks

Ok, thanks. And have you tried using dedicated mechanical simulation software to do this kind of advanced simulation, and just importing the data afterwards? Ansys comes to mind. There must be off-the-shelf applications that do advanced stress and failure analysis.

Here is the download for WARP. It’s our most advanced physical simulation software, available both for standalone CUDA and as an extension for Kit.

In the meantime, let me see if I can get you some help with enabling CUDA through Lua.

Try using:

-- Helper function to implement a build step that preprocesses .cu files (CUDA code) for compilation on Linux
-- @nvcc_host_compiler_flags: Flags to pass on to the host compiler
-- @nvcc_flags: Flags to pass on to the CUDA compiler
function cuda._make_nvcc_command_linux(nvcc_host_compiler_flags, nvcc_flags)
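
For background: buildoptions{} only applies to files compiled by the host toolchain. The .cu files are handled by a per-file custom build step, so nvcc flags have to be part of that command string. Here is a rough sketch of the mechanism (an illustration of premake's custom build rules, not the actual Kit implementation):

-- Sketch only: files matched by a buildcommands rule are compiled by that
-- command instead of by the host compiler, which is why buildoptions{} (and
-- any typos in it) never reach nvcc.
filter "files:**.cu"
    buildcommands {
        'nvcc -Xcompiler "-fPIC" --use_fast_math -c "%{file.relpath}" -o "%{cfg.objdir}/%{file.basename}.o"'
    }
    buildoutputs { "%{cfg.objdir}/%{file.basename}.o" }
filter {}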

But this would be offline simulation, wouldn’t it? We want to enable robotic manipulation of semi-rigid objects through ROS2 in real time, so we solve for the deformation every time step, taking collisions into account, and feed back the developed forces, which can be used to enable various kinds of force-control tasks in simulation.

Thank you very much, Richard!

Thank you, martinj, for sharing the Lua snippet! I will use it and report back the results.

It is working perfectly now, but I had to use the generated implementation below, as calling the function directly resulted in an error about attempting to index a nil value (global ‘cuda’).

-- Function implementation
cuda = cuda or {}
function cuda._make_nvcc_command_linux(nvcc_host_compiler_flags, nvcc_flags)
    -- pick up CC or default to gcc
    local hostcc = os.getenv("CC") or "gcc"

    -- join an array of flags into one string
    local function join(tbl)
        return table.concat(tbl or {}, " ")
    end

    -- assemble the nvcc command
    return string.format(
        'nvcc -ccbin %s -Xcompiler "%s" %s -c %%{file.relpath} -o %%{cfg.objdir}/%%{file.basename}.o',
        hostcc,
        join(nvcc_host_compiler_flags),
        join(nvcc_flags)
    )
end
-- ------------------------------------------------------
-- Usage
add_cuda_build_support()
cuda.host_flags = { "-fPIC", "-std=c++17" }
cuda.nvcc_flags = { "--resource-usage" }
filter "files:**.cu"
    buildcommands {
        cuda._make_nvcc_command_linux(cuda.host_flags, cuda.nvcc_flags)
    }
    -- buildoutputs tells premake what the custom step produces, so the
    -- object file is picked up by the link step and incremental builds
    buildoutputs { "%{cfg.objdir}/%{file.basename}.o" }
filter {}  -- clear the filter
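
For reference, with the flags above (and CC unset, so the host compiler defaults to gcc) the helper expands to the following build command, where premake substitutes the %{...} tokens per file:

nvcc -ccbin gcc -Xcompiler "-fPIC -std=c++17" --resource-usage -c %{file.relpath} -o %{cfg.objdir}/%{file.basename}.o

Any other nvcc options would go into cuda.nvcc_flags the same way (for example an architecture flag such as -arch=sm_86, depending on the target GPU).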

Thank you very much
