How to deploy the development environment in the deepstream-l4t container

After deploying the l4t container, I found that NVIDIA's header files and library files are missing when developing in it. How should I install these libraries?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description)
• The pipeline being used

I don't know which headers and libraries you need, nor which versions you use.

Please describe your problem in detail

When I compiled my project, I found that files such as NvInfer.h and cuda_runtime_api.h could not be found.

This is my host environment. I need to build my DeepStream application in the DeepStream-l4t container:

Distribution: Ubuntu 20.04 (Focal)
Python: 3.8.10
cuDNN:
TensorRT:
Model: NVIDIA Orin NX Developer Kit
Module: NVIDIA Jetson Orin NX (16 GB RAM)
DeepStream: 6.2
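A quick way to confirm what is actually missing is to check the usual JetPack header locations inside the container. The paths below are typical for CUDA and TensorRT on Jetson, but they are assumptions; adjust them to your own setup:

```shell
# Typical JetPack include paths on Jetson (assumed; adjust to your setup)
CUDA_INC=/usr/local/cuda/include            # cuda_runtime_api.h
TRT_INC=/usr/include/aarch64-linux-gnu      # NvInfer.h on Jetson
DS_INC=/opt/nvidia/deepstream/deepstream-6.2/sources/includes

# Report whether the headers the compiler complained about are present
for f in "$CUDA_INC/cuda_runtime_api.h" "$TRT_INC/NvInfer.h"; do
    if [ -f "$f" ]; then
        echo "found:   $f"
    else
        echo "missing: $f"
    fi
done
```

If the headers are present but the build still fails, pass the directories to the compiler explicitly, e.g. `g++ -I"$CUDA_INC" -I"$TRT_INC" -I"$DS_INC" ...`.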

Which image do you use?

If you want to build your DeepStream application, it is recommended to use the development image. Here are their differences: the iot image is just for deployment.
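For reference, pulling and entering a development-capable DeepStream 6.2 l4t image might look like the sketch below. The `6.2-triton` tag is an assumption on my part; check the tags listed on NGC for your DeepStream version before pulling:

```shell
# Assumed tag for a build-capable DeepStream 6.2 image; verify on NGC first
IMG=nvcr.io/nvidia/deepstream-l4t:6.2-triton

docker pull "$IMG"

# --runtime nvidia exposes the Jetson GPU inside the container
docker run -it --rm --runtime nvidia --network host "$IMG" /bin/bash
```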

You can also copy the header files from the host into the Docker container.
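Copying from the host can be done either by bind-mounting the host include directories when starting the container, or with `docker cp` into an already running container. The directory paths and the container name `ds` below are illustrative assumptions, not fixed values:

```shell
# Option A: bind-mount host header directories read-only at container start
# (paths are typical JetPack locations; adjust to your host)
docker run -it --rm --runtime nvidia \
  -v /usr/include/aarch64-linux-gnu:/usr/include/aarch64-linux-gnu:ro \
  -v /usr/local/cuda:/usr/local/cuda:ro \
  nvcr.io/nvidia/deepstream-l4t:6.2-samples /bin/bash

# Option B: copy a single header into a running container named "ds"
# ("ds" is a hypothetical container name)
docker cp /usr/include/aarch64-linux-gnu/NvInfer.h \
  ds:/usr/include/aarch64-linux-gnu/
```

Bind-mounting (Option A) is usually preferable for development, since the container always sees the host's current headers and nothing is duplicated inside the image.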

I am using the deepstream-l4t samples image. I also tried the iot image, but there are no relevant header files in it.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.