nppiMalloc_8u_C3 returns NULL under Docker

Hi all,

I use the Jetson Xavier hardware platform.
The OS is JetPack R32.4.3.

I downloaded a Docker image from GitHub - dusty-nv/jetson-containers: Machine Learning Containers for NVIDIA Jetson and JetPack-L4T .
I use the dustynv/ros:galactic-ros-base-l4t-r32.4.4 Docker image.
The nppiMalloc_8u_C3() function returns NULL when run inside this Docker image, so I never get a valid device address.
Why does nppiMalloc_8u_C3() return NULL?
Could you give me a suggestion?

My test code and commands follow.
<<src/main.cpp>>

#include <stdlib.h>
#include <stdio.h>

#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include "npp.h"
 
int main()
{
    int nLineStep_npp = 0;
    Npp8u *pu8BGR_dev = nppiMalloc_8u_C3(800, 600, &nLineStep_npp);
    if (pu8BGR_dev == NULL)
        printf("pu8BGR_dev is NULL\n");
    else
        nppiFree(pu8BGR_dev);  /* release the device buffer on success */

    return 0;
}

<<package.xml>>

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>demo</name>
  <version>0.0.0</version>
  <description>TODO: Package description</description>
  <maintainer email="jason@todo.todo">jason</maintainer>
  <license>TODO: License declaration</license>

  <buildtool_depend>ament_cmake</buildtool_depend>

  <depend>rclcpp</depend>

  <test_depend>ament_lint_auto</test_depend>
  <test_depend>ament_lint_common</test_depend>

  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>

<<CMakeLists.txt>>

cmake_minimum_required(VERSION 3.8)
project(demo)

# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
  set(CMAKE_CXX_STANDARD 14)
endif()

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wall -Wextra -Wpedantic)
endif()

set(CMAKE_BUILD_TYPE Debug)
find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)


include_directories( /usr/local/cuda-10.2/targets/aarch64-linux/include/)

find_package(CUDA REQUIRED)
add_executable(demo src/main.cpp)
target_link_libraries(demo ${CUDA_LIBRARIES} ${CUDA_TOOLKIT_ROOT_DIR}/lib64/libnppig.so ${CUDA_TOOLKIT_ROOT_DIR}/lib64/libnppisu.so ${CUDA_TOOLKIT_ROOT_DIR}/lib64/libnppidei.so)

ament_target_dependencies(demo rclcpp )

target_include_directories(demo PUBLIC
  $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
  $<INSTALL_INTERFACE:include>)

install(TARGETS 
  demo
  DESTINATION lib/${PROJECT_NAME})

if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  ament_lint_auto_find_test_dependencies()
endif()

ament_package()
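(As an aside, the hard-coded /usr/local/cuda-10.2 paths above could be replaced by the variables that find_package(CUDA) already provides. A sketch, assuming CMake's FindCUDA module with its split per-library NPP variables, available for CUDA >= 9:)

```cmake
# Sketch: let FindCUDA locate the headers and NPP libraries instead of
# hard-coding toolkit paths, so the CUDA version is not baked in.
find_package(CUDA REQUIRED)
include_directories(${CUDA_INCLUDE_DIRS})
target_link_libraries(demo
  ${CUDA_LIBRARIES}
  ${CUDA_nppig_LIBRARY}
  ${CUDA_nppisu_LIBRARY}
  ${CUDA_nppidei_LIBRARY})
```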

The following command starts the Docker container:

nvidia@xavier:~/$ docker run --user $(id -u):$(id -g) --runtime nvidia -it --privileged --gpus all --rm --network host -e DISPLAY=$DISPLAY  -v /tmp/.X11-unix/:/tmp/.X11-unix -v $XAUTH:$XAUTH -e XAUTHORITY=$XAUTH  dustynv/ros:galactic-ros-base-l4t-r32.4.4
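Note that --user $(id -u):$(id -g) makes everything in the container run as a non-root user. A quick sanity check from inside the container (the device-node paths are an assumption based on typical L4T r32.x systems) is:

```shell
# Inside the container: confirm the effective user, then check whether the
# Tegra GPU device nodes are visible and readable by that user.
id
ls -l /dev/nvhost-ctrl-gpu /dev/nvmap 2>/dev/null \
  || echo "GPU device nodes not visible to this user"
```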

Build command:

colcon build --packages-select demo

Run result:

nvidia@xavier:~/workspace/$ ros2 run demo demo
pu8BGR_dev is NULL

Best regards
-Jason

Hi,

Could you upgrade your device to r32.4.4 to see if the same error occurs?
We can run it correctly on an r32.7.1 host with the l4t-base:r32.4.3, l4t-base:r32.4.4, and l4t-base:r32.7.1 containers.

Thanks.

Hi AastaLLL,

Thank you for your reply.
Because the camera driver is based on R32.4.3, I can't update to R32.4.4 right now.

I think that if the host and container use the same version, it may fix my problem.

I tried to pull l4t-base:r32.4.3, but that image doesn't contain ROS2.
Your last reply said you tried l4t-base:r32.4.3.
Is there an l4t-base:r32.4.3 image that contains ROS2 (Galactic)?
Could you point me to such an image?

Best regards

-Jason

Hi @jason.tseng914, none of the l4t-base images contain ROS2 - my ROS2 containers are built on top of l4t-base. However, I only started building these ROS2 containers for L4T R32.4.4, and I no longer have systems with L4T R32.4.3, so if you need them for R32.4.3 you will need to build them yourself. You can do that by following the repo at https://github.com/dusty-nv/jetson-containers

Hi dusty_nv,

I found the solution.
I tested on the R32.4.3 and R32.4.4 containers.
If I use the nvidia account, it fails; it only works with the root (superuser) account.
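For what it's worth, that symptom is consistent with the --user $(id -u):$(id -g) flag in my docker run command: the non-root container user may simply lack permission on the GPU device nodes. A possible non-root workaround (an untested sketch; "video" as the device group is an assumption about typical L4T setups):

```shell
# Find the group that owns the Tegra GPU control node on the host
# (falling back to "video", the usual owner on L4T), then reuse it
# with docker's --group-add so a non-root user can open the device.
GPU_GROUP=$(stat -c '%G' /dev/nvhost-ctrl-gpu 2>/dev/null || echo video)
echo "add to docker run:  --group-add ${GPU_GROUP}"
```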

Thank you very much.

-Jason