Deepstream-test1 Not Working on deepstream:6.4-samples-multiarch

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Orin Nano Developer Kit
• DeepStream Version
6.4
• JetPack Version (valid for Jetson only)
6.0 DP
• TensorRT Version
8.6
• Issue Type (questions, new requirements, bugs)
I am running the deepstream:6.4-samples-multiarch container and trying to compile the deepstream_test1 app.

docker run -it --net=host --runtime nvidia --privileged --device /dev/video -p 10101:10101 -w /opt/nvidia/deepstream/deepstream-6.4 nvcr.io/nvidia/deepstream:6.4-samples-multiarch

Install the packages mentioned in deepstream-6.4/README

apt update
apt-get install \
    libssl3 \
    libssl-dev \
    libgstreamer1.0-0 \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav \
    gstreamer1.0-alsa \
    libgstrtspserver-1.0-0 \
    libjansson4 \
    libyaml-cpp-dev

Run the user additional install script

./user_additional_install.sh

Go to the deepstream-test1 directory

cd /opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-test1

Install the items listed in the app's README

apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
   libgstrtspserver-1.0-dev libx11-dev
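
A quick sanity check that the GStreamer development headers are now discoverable (plain pkg-config, not part of the README steps):

# Should print include flags such as -I/usr/include/gstreamer-1.0 once the dev packages are installed
pkg-config --cflags gstreamer-1.0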

Set CUDA_VER in the Makefile to 12.2

apt-get install vim
vim Makefile
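
If you would rather not edit the file interactively, a one-liner like this should also work, assuming the Makefile declares the variable as CUDA_VER?= (the usual layout in the DeepStream sample Makefiles):

# Set CUDA_VER non-interactively; adjust the pattern if your Makefile differs
sed -i 's/^CUDA_VER?=.*/CUDA_VER?=12.2/' Makefile
grep CUDA_VER Makefile   # confirm the edit took effect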

Run make

And then I get these errors

root@ubuntu:/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-test1# make
cc -c -o deepstream_test1_app.o -DPLATFORM_TEGRA -I../../../includes -I /usr/local/cuda-12.2/include -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/aarch64-linux-gnu -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include deepstream_test1_app.c
deepstream_test1_app.c:27:10: fatal error: cuda_runtime_api.h: No such file or directory
   27 | #include <cuda_runtime_api.h>
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:64: deepstream_test1_app.o] Error 1

That makes sense, because my CUDA include directory does not have that file:

root@ubuntu:/usr/local/cuda-12.2/include# ls
nvToolsExt.h  nvToolsExtCuda.h  nvToolsExtCudaRt.h  nvToolsExtOpenCL.h  nvToolsExtSync.h  nvtx3
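
For reference, a plain find (nothing DeepStream-specific) can confirm whether the header exists anywhere else in the container:

find / -name cuda_runtime_api.h 2>/dev/null   # search the whole filesystem for the missing header
ls -d /usr/local/cuda*                        # list whatever CUDA installs are present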

Am I supposed to be able to compile the apps myself? How do I get all the CUDA include files?

When I look on my Orin host, I see that I have all the files:

beauceron@ubuntu:/usr/local/cuda-12.2/include$ ls
builtin_types.h              cudaVDPAU.h                cusolverSp_LOWLEVEL_PREVIEW.h             npps_support_functions.h
channel_descriptor.h         cuda_vdpau_interop.h       cusparse.h                                nv
common_functions.h           cudaVDPAUTypedefs.h        cusparse_v2.h                             nvblas.h
cooperative_groups           cudlaExternalEtbl.hpp      device_atomic_functions.h                 nv_decode.h
cooperative_groups.h         cudla.h                    device_atomic_functions.hpp               nvfunctional
crt                          cufft.h                    device_double_functions.h                 nvJitLink.h
cub                          cufftw.h                   device_functions.h                        nvml.h
cublas_api.h                 cufftXt.h                  device_launch_parameters.h                nvperf_common.h
cublas.h                     cufile.h                   device_types.h                            nvperf_cuda_host.h
cublasLt.h                   cupti_activity.h           driver_functions.h                        nvperf_host.h
cublas_v2.h                  cupti_callbacks.h          driver_types.h                            nvperf_target.h
cublasXt.h                   cupti_checkpoint.h         fatbinary_section.h                       nvPTXCompiler.h
cuComplex.h                  cupti_driver_cbid.h        generated_cuda_gl_interop_meta.h          nvrtc.h
cuda                         cupti_events.h             generated_cudaGL_meta.h                   nvToolsExtCuda.h
cuda_awbarrier.h             cupti.h                    generated_cuda_meta.h                     nvToolsExtCudaRt.h
cuda_awbarrier_helpers.h     cupti_metrics.h            generated_cudart_removed_meta.h           nvToolsExt.h
cuda_awbarrier_primitives.h  cupti_nvtx_cbid.h          generated_cuda_runtime_api_meta.h         nvToolsExtOpenCL.h
cuda_bf16.h                  cupti_pcsampling.h         generated_cuda_vdpau_interop_meta.h       nvToolsExtSync.h
cuda_bf16.hpp                cupti_pcsampling_util.h    generated_cudaVDPAU_meta.h                nvtx3
cuda_device_runtime_api.h    cupti_profiler_target.h    generated_nvtx_meta.h                     sm_20_atomic_functions.h
cudaEGL.h                    cupti_result.h             host_config.h                             sm_20_atomic_functions.hpp
cuda_egl_interop.h           cupti_runtime_cbid.h       host_defines.h                            sm_20_intrinsics.h
cudaEGLTypedefs.h            cupti_sass_metrics.h       library_types.h                           sm_20_intrinsics.hpp
cuda_fp16.h                  cupti_target.h             math_constants.h                          sm_30_intrinsics.h
cuda_fp16.hpp                cupti_version.h            math_functions.h                          sm_30_intrinsics.hpp
cuda_fp8.h                   curand_discrete2.h         mma.h                                     sm_32_atomic_functions.h
cuda_fp8.hpp                 curand_discrete.h          nppcore.h                                 sm_32_atomic_functions.hpp
cudaGL.h                     curand_globals.h           nppdefs.h                                 sm_32_intrinsics.h
cuda_gl_interop.h            curand.h                   npp.h                                     sm_32_intrinsics.hpp
cudaGLTypedefs.h             curand_kernel.h            nppi_arithmetic_and_logical_operations.h  sm_35_atomic_functions.h
cuda.h                       curand_lognormal.h         nppi_color_conversion.h                   sm_35_intrinsics.h
cudalibxt.h                  curand_mrg32k3a.h          nppi_data_exchange_and_initialization.h   sm_60_atomic_functions.h
cuda_occupancy.h             curand_mtgp32dc_p_11213.h  nppi_filtering_functions.h                sm_60_atomic_functions.hpp
cuda_pipeline.h              curand_mtgp32.h            nppi_geometry_transforms.h                sm_61_intrinsics.h
cuda_pipeline_helpers.h      curand_mtgp32_host.h       nppi.h                                    sm_61_intrinsics.hpp
cuda_pipeline_primitives.h   curand_mtgp32_kernel.h     nppi_linear_transforms.h                  surface_functions.h
cuda_profiler_api.h          curand_normal.h            nppi_morphological_operations.h           surface_indirect_functions.h
cudaProfiler.h               curand_normal_static.h     nppi_statistics_functions.h               surface_types.h
cudaProfilerTypedefs.h       curand_philox4x32_x.h      nppi_support_functions.h                  texture_fetch_functions.h
cudart_platform.h            curand_poisson.h           nppi_threshold_and_compare_operations.h   texture_indirect_functions.h
cuda_runtime_api.h           curand_precalc.h           npps_arithmetic_and_logical_operations.h  texture_types.h
cuda_runtime.h               curand_uniform.h           npps_conversion_functions.h               thrust
cuda_stdint.h                cusolver_common.h          npps_filtering_functions.h                vector_functions.h
cuda_surface_types.h         cusolverDn.h               npps.h                                    vector_functions.hpp
cuda_texture_types.h         cusolverRf.h               npps_initialization.h                     vector_types.h
cudaTypedefs.h               cusolverSp.h               npps_statistics_functions.h

Hi

Does this work?

cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1 # or check with /opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-test1
# CUDA_VER must match the existing directory in /usr/local.
export CUDA_VER="12.2"
export LD_LIBRARY_PATH=/usr/local/cuda/include:$LD_LIBRARY_PATH
make

No, same issue.

root@ubuntu:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1# make
cc -c -o deepstream_test1_app.o -DPLATFORM_TEGRA -I../../../includes -I /usr/local/cuda-12.2/include -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/aarch64-linux-gnu -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include deepstream_test1_app.c
deepstream_test1_app.c:27:10: fatal error: cuda_runtime_api.h: No such file or directory
   27 | #include <cuda_runtime_api.h>
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:64: deepstream_test1_app.o] Error 1
root@ubuntu:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1# echo $CUDA_VER
12.2
root@ubuntu:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1# echo $LD_LIBRARY_PATH
/usr/local/cuda/include:/usr/local/cuda-12.2/lib64
root@ubuntu:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1#

I guess I can copy all of the .h files in from my host, but ideally I would be able to build the sample app without needing to get stuff from the host.
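
As a stop-gap, bind-mounting the host's CUDA include directory read-only into the container would avoid copying files by hand. This is only an untested sketch of that workaround; it reuses my original run flags and assumes the host headers under /usr/local/cuda-12.2/include are compatible with the container's CUDA runtime:

docker run -it --net=host --runtime nvidia --privileged --device /dev/video -p 10101:10101 \
    -v /usr/local/cuda-12.2/include:/usr/local/cuda-12.2/include:ro \
    -w /opt/nvidia/deepstream/deepstream-6.4 nvcr.io/nvidia/deepstream:6.4-samples-multiarch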

Out of curiosity, why are you suggesting using

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1

instead of

/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-test1

Shouldn't I do everything in the deepstream-6.4 directory?

The important part is to include /usr/local/cuda/include in the path variable, and to start make in the app directory you want to compile, e.g. /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1.

Inside the deepstream-6.4 Docker container, the directory /opt/nvidia/deepstream/deepstream is just a symbolic link.
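You can confirm this from inside the container:

ls -ld /opt/nvidia/deepstream/deepstream
readlink -f /opt/nvidia/deepstream/deepstream   # should resolve to the versioned deepstream-6.4 directory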

Compiling deepstream-test1 seems to work flawlessly. Please give it a try.

Thanks for explaining the link; that makes sense.

That is strange, because as you can see in the previous logs, my path variable was set to /usr/local/cuda/include but the make still failed. When you list all the files in the /usr/local/cuda/include directory, what do you see?

I see this

root@ubuntu:/usr/local/cuda/include# ls
nvToolsExt.h      nvToolsExtCudaRt.h  nvToolsExtSync.h
nvToolsExtCuda.h  nvToolsExtOpenCL.h  nvtx3

Also, are you using this image?

http://nvcr.io/nvidia/deepstream:6.4-samples-multiarch

I ran into this too; the samples image does not work. Try another image.

Like @tanchao7217 said, can you try it with 6.4-triton-multiarch instead?
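
For reference, that would just mean swapping the tag in your original run command, with everything else unchanged:

docker run -it --net=host --runtime nvidia --privileged --device /dev/video -p 10101:10101 \
    -w /opt/nvidia/deepstream/deepstream-6.4 nvcr.io/nvidia/deepstream:6.4-triton-multiarch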

Yes, the other image has all the expected include files. Seems like a bug in the samples image. Hopefully it can be fixed in the future.

I was able to reproduce the missing header file issue (gst.h and cuda_runtime_api.h) in 6.4-samples-multiarch. The following recipe compiles deepstream-test1 successfully.

# https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#dgpu-setup-for-ubuntu
# Tested with docker run of container nvcr.io/nvidia/deepstream:6.4-samples-multiarch

apt-get update -y

apt-get install -y \
libssl3 libssl-dev libgstreamer1.0-0 gstreamer1.0-tools \
gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly \
gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 \
libjansson4 libyaml-cpp-dev libjsoncpp-dev protobuf-compiler gcc make \
git python3

apt-get install -y libnvinfer8=8.6.1.6-1+cuda12.0 libnvinfer-plugin8=8.6.1.6-1+cuda12.0 libnvparsers8=8.6.1.6-1+cuda12.0 \
libnvonnxparsers8=8.6.1.6-1+cuda12.0 libnvinfer-bin=8.6.1.6-1+cuda12.0 libnvinfer-dev=8.6.1.6-1+cuda12.0 \
libnvinfer-plugin-dev=8.6.1.6-1+cuda12.0 libnvparsers-dev=8.6.1.6-1+cuda12.0 libnvonnxparsers-dev=8.6.1.6-1+cuda12.0 \
libnvinfer-samples=8.6.1.6-1+cuda12.0 libcudnn8=8.9.4.25-1+cuda12.2 libcudnn8-dev=8.9.4.25-1+cuda12.2

cd /opt/nvidia/deepstream/deepstream
./user_additional_install.sh

cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1
# CUDA_VER must match the existing directory in /usr/local.
export CUDA_VER="12.1"
export LD_LIBRARY_PATH=/usr/local/cuda/include:$LD_LIBRARY_PATH
make

There is a limitation: make fails after setting CUDA_VER to 12.2, but finishes successfully after setting CUDA_VER to 12.1 or 12.3.
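
If you want to see exactly which paths a given CUDA_VER resolves to before building, a dry run prints the compiler command the Makefile would issue (standard make -n, assuming the Makefile's CUDA_VER?= picks the value up from the environment):

# Print the cc command without running it, so the -I /usr/local/cuda-<ver>/include path is visible
CUDA_VER=12.1 make -n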
