Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU):** Jetson AGX Orin 64GB
**• DeepStream Version:** 7.1
**• JetPack Version (valid for Jetson only):** 6.1
**• TensorRT Version:** 10.3
**• NVIDIA GPU Driver Version (valid for GPU only):** 540.4.0
**• Issue Type (questions, new requirements, bugs):** questions
I have two questions about using the YOLOv8s model with the JPS (Jetson Platform Services) Docker containers:
1. The size of yolov8s_DAT_noqdq.onnx differs from that of the normal .pt model I downloaded from the official website. What accounts for the difference?
2. The program cannot use the engine under the name yolov8s_DAT_noqdq_DLA.engine; the engine it generates itself usually carries a suffix like _b8_dla0_int8.engine.
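For reference, here is a minimal sketch of how I understand the relevant PGIE settings; the file name and paths are placeholders for my setup, but the keys are the standard gst-nvinfer properties:

```
# Hypothetical excerpt of my PGIE config (file name and paths are placeholders).
[property]
onnx-file=yolov8s_DAT_noqdq.onnx
model-engine-file=yolov8s_DAT_noqdq_DLA.engine
batch-size=8        # requested batch size; the prebuilt engine reports maxBatchSize 1
network-mode=1      # 0=FP32, 1=INT8, 2=FP16
enable-dla=1        # run the PGIE on the DLA
use-dla-core=0
```

My understanding is that when the engine named in model-engine-file does not match these parameters, nvinfer rebuilds it and saves it under a generated name such as model_b8_dla0_int8.engine, which seems to match the log below.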
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.739920824    19 0xaaab12107d90 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/configs/…/models/yolov8s_Detector/yolov8s_DAT_noqdq_DLA.engine
0:00:00.739983959    19 0xaaab12107d90 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2026> [UID = 1]: Backend has maxBatchSize 1 whereas 8 has been requested
0:00:00.739997047    19 0xaaab12107d90 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2201> [UID = 1]: deserialized backend context :/configs/…/models/yolov8s_Detector/yolov8s_DAT_noqdq_DLA.engine failed to match config params, trying rebuild
0:00:00.749479482    19 0xaaab12107d90 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
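Since the rebuild is triggered by the batch-size mismatch (maxBatchSize 1 vs. the requested 8), would something like the following trtexec invocation be the right way to pre-build a matching DLA engine? This is only a sketch under assumptions: I am assuming the input tensor is named `images` with shape 3x640x640 (typical for YOLOv8 exports, but worth verifying against the ONNX, e.g. in Netron).

```
# Sketch only: pre-build a batch-8 DLA engine so it matches the nvinfer config.
# Assumptions: input tensor "images" of shape 3x640x640, DLA core 0. DLA needs
# static shapes, so min/opt/max are all set to the same batch-8 shape. For INT8
# on a no-Q/DQ model a calibration cache (--calib) is needed; otherwise --fp16
# is the safer choice.
/usr/src/tensorrt/bin/trtexec \
  --onnx=yolov8s_DAT_noqdq.onnx \
  --useDLACore=0 \
  --int8 --allowGPUFallback \
  --minShapes=images:8x3x640x640 \
  --optShapes=images:8x3x640x640 \
  --maxShapes=images:8x3x640x640 \
  --saveEngine=yolov8s_DAT_noqdq_DLA.engine
```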
I was trying to configure ai_nvr with DeepStream 7.1 and YOLOv8 across multiple streams. I followed the guide you suggested, but I wasn't able to complete it because the deepstream_test5 build failed due to missing dependencies. I tried fixing it by installing the dependencies, but I couldn't find a solution for the cuda_runtime_api.h error.
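A sketch of the build step as I understand it, assuming the stock DeepStream 7.1 source path and the CUDA 12.6 that JetPack 6.1 ships (please correct me if the value should differ):

```
# The sample-app Makefiles use CUDA_VER to locate the CUDA headers
# (cuda_runtime_api.h lives under /usr/local/cuda-<ver>/include).
cd /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test5
export CUDA_VER=12.6   # CUDA version shipped with JetPack 6.1; verify with `nvcc --version`
make
```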
Additionally, I noticed that l4t-repo.nvidia.com:80 fails every time I try to install something with apt.
I then repeated the process using the “deepstream:7.1-triton-multiarch” image, and it built successfully.
So, my question is: Do we need to copy the yolov8s-files folder into the Triton container from the public container, or is there a better way? Maybe I’m missing something.
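If copying is indeed the way, this is a sketch of what I have in mind; the container names (ds_public, ds_triton) and paths are placeholders for my setup:

```
# Sketch: copy the model folder from the public container into the Triton one.
# "ds_public" / "ds_triton" and the /opt paths are placeholders; check `docker ps`
# for the real container names.
docker cp ds_public:/opt/yolov8s-files ./yolov8s-files
docker cp ./yolov8s-files ds_triton:/opt/yolov8s-files
```

Or would bind-mounting the folder with `-v` when starting the Triton container be the preferred approach?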