DeepStream 6.3: Resolution not supported on this GPU, Error Code: 801

• Hardware Platform (Jetson / GPU): NVIDIA L4
• DeepStream Version: 6.3
• TensorRT Version: 8.5.3.1
• NVIDIA GPU Driver Version: 12.0
I got GPU error code 801 when playing a video with resolution 2560x1920. The error details are as follows:

Error String :
Resolution : 2560x1920
Max Supported (wxh) : 2032x2032
Resolution not supported on this GPU
Error Code : 801
[NvMultiObjectTracker] De-initialized

Warning: gst-library-error-quark: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized. (5): gstnvinferlpr.cpp(893): gst_nvinferlpr_start (): /GstPipeline:pipeline0/GstNvInferLpr:primary-inference
(The same warning is printed for each secondary GstNvInferLpr element in the pipeline: secondary2 through secondary6, secondary8, secondary9, and secondary19 through secondary21.)
[ERROR] gst-resource-error-quark: Failed to process frame. (1): gstv4l2videodec.c(2273): gst_v4l2_video_dec_handle_frame (): /GstPipeline:pipeline0/GstURIDecodeBin:source-bin-00/GstDecodeBin:decodebin0/nvv4l2decoder:nvv4l2decoder0:
Maybe be due to not enough memory or failing driver

I would like to determine the maximum decode resolution my graphics card (GPU) supports on Linux. What specific commands can I use to extract this information?

There are no such commands. This log is reported by the Video Codec SDK.

You can refer to this code snippet.

#include <algorithm>
#include <cstring>
#include <cuda.h>
#include <iostream>
#include <nvcuvid.h>
#include <string>
#include <vector>

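// Queries NVDEC decode capabilities per codec via the Video Codec SDK
// (nvcuvid) driver API and reports the maximum supported resolution.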
class NvidiaDecoderCapsQuery {
private:
  CUcontext cudaContext;
  bool initialized;

  struct CodecInfo {
    cudaVideoCodec codec;
    std::string name;
  };

  std::vector<CodecInfo> supportedCodecs = {
      {cudaVideoCodec_MPEG1, "MPEG1"},
      {cudaVideoCodec_MPEG2, "MPEG2"},
      {cudaVideoCodec_MPEG4, "MPEG4"},
      {cudaVideoCodec_VC1, "VC1"},
      {cudaVideoCodec_H264, "H264"},
      {cudaVideoCodec_JPEG, "JPEG"},
      {cudaVideoCodec_H264_SVC, "H264_SVC"},
      {cudaVideoCodec_H264_MVC, "H264_MVC"},
      {cudaVideoCodec_HEVC, "HEVC"},
      {cudaVideoCodec_VP8, "VP8"},
      {cudaVideoCodec_VP9, "VP9"},
      {cudaVideoCodec_AV1, "AV1"}};

public:
  NvidiaDecoderCapsQuery() : cudaContext(nullptr), initialized(false) {}

  ~NvidiaDecoderCapsQuery() { cleanup(); }

  bool initialize() {
    CUresult result;

    result = cuInit(0);
    if (result != CUDA_SUCCESS) {
      std::cerr << "Failed to initialize CUDA driver API: " << result
                << std::endl;
      return false;
    }

    int deviceCount;
    result = cuDeviceGetCount(&deviceCount);
    if (result != CUDA_SUCCESS || deviceCount == 0) {
      std::cerr << "No CUDA devices found or failed to get device count: "
                << result << std::endl;
      return false;
    }

    CUdevice device;
    result = cuDeviceGet(&device, 0);
    if (result != CUDA_SUCCESS) {
      std::cerr << "Failed to get CUDA device: " << result << std::endl;
      return false;
    }

    result = cuCtxCreate(&cudaContext, 0, device);
    if (result != CUDA_SUCCESS) {
      std::cerr << "Failed to create CUDA context: " << result << std::endl;
      return false;
    }

    initialized = true;
    return true;
  }

  void cleanup() {
    if (initialized && cudaContext) {
      cuCtxDestroy(cudaContext);
      cudaContext = nullptr;
      initialized = false;
    }
  }

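  // Queries the decode capability for one codec at 4:2:0 chroma, 8-bit depth,
  // and returns its maximum supported width and height.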
  bool getDecoderCaps(cudaVideoCodec codec, const std::string &codecName,
                      unsigned int &maxWidth, unsigned int &maxHeight) {
    if (!initialized) {
      std::cerr << "CUDA context not initialized" << std::endl;
      return false;
    }

    CUVIDDECODECAPS decodeCaps;
    memset(&decodeCaps, 0, sizeof(decodeCaps));
    decodeCaps.eCodecType = codec;
    decodeCaps.eChromaFormat = cudaVideoChromaFormat_420; // most common chroma format
    decodeCaps.nBitDepthMinus8 = 0;                       // 8-bit

    CUresult result = cuvidGetDecoderCaps(&decodeCaps);
    if (result != CUDA_SUCCESS) {
      std::cerr << "Failed to get decoder caps for " << codecName << ": "
                << result << std::endl;
      return false;
    }

    if (!decodeCaps.bIsSupported) {
      std::cout << codecName << ": Not supported" << std::endl;
      maxWidth = 0;
      maxHeight = 0;
      return false;
    }

    maxWidth = decodeCaps.nMaxWidth;
    maxHeight = decodeCaps.nMaxHeight;

    return true;
  }

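  // Iterates over all codecs, prints each supported codec's maximum
  // resolution, and tracks the overall maximum by pixel count.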
  void queryAllCodecsMaxResolution() {
    if (!initialized && !initialize()) { // reuse the context if already created
      std::cerr << "Failed to initialize CUDA context" << std::endl;
      return;
    }

    std::cout << "=== NVIDIA GPU Decoder Maximum Resolution Query ==="
              << std::endl;
    std::cout
        << "Querying maximum decode resolution for all supported codecs..."
        << std::endl;
    std::cout << std::endl;

    unsigned int globalMaxWidth = 0;
    unsigned int globalMaxHeight = 0;
    std::string bestCodec;

    for (const auto &codecInfo : supportedCodecs) {
      unsigned int maxWidth, maxHeight;
      if (getDecoderCaps(codecInfo.codec, codecInfo.name, maxWidth,
                         maxHeight)) {
        std::cout << "Codec: " << codecInfo.name
                  << " - Max Resolution: " << maxWidth << "x" << maxHeight;

        unsigned long long totalPixels =
            (unsigned long long)maxWidth * maxHeight;
        unsigned long long globalMaxPixels =
            (unsigned long long)globalMaxWidth * globalMaxHeight;

        if (totalPixels > globalMaxPixels) {
          globalMaxWidth = maxWidth;
          globalMaxHeight = maxHeight;
          bestCodec = codecInfo.name;
        }

        if (maxWidth >= 7680 && maxHeight >= 4320) {
          std::cout << " (8K UHD supported)";
        } else if (maxWidth >= 3840 && maxHeight >= 2160) {
          std::cout << " (4K UHD supported)";
        } else if (maxWidth >= 1920 && maxHeight >= 1080) {
          std::cout << " (Full HD supported)";
        }

        std::cout << std::endl;
      }
    }

    std::cout << std::endl;
    std::cout << "=== Summary ===" << std::endl;
    if (globalMaxWidth > 0 && globalMaxHeight > 0) {
      std::cout << "Maximum decode resolution across all codecs: "
                << globalMaxWidth << "x" << globalMaxHeight
                << " (Best codec: " << bestCodec << ")" << std::endl;

      unsigned long long totalPixels =
          (unsigned long long)globalMaxWidth * globalMaxHeight;
      std::cout << "Total pixels: " << totalPixels << " ("
                << (totalPixels / 1000000.0) << " Megapixels)" << std::endl;
    } else {
      std::cout << "No supported codecs found or failed to query." << std::endl;
    }
  }

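  // Prints the full capability record (min/max resolution, macroblock
  // limit) for a single codec.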
  void querySpecificCodec(cudaVideoCodec codec, const std::string &codecName) {
    if (!initialized && !initialize()) { // reuse the context if already created
      std::cerr << "Failed to initialize CUDA context" << std::endl;
      return;
    }

    std::cout << "=== Detailed Query for " << codecName << " ===" << std::endl;

    CUVIDDECODECAPS decodeCaps;
    memset(&decodeCaps, 0, sizeof(decodeCaps));
    decodeCaps.eCodecType = codec;
    decodeCaps.eChromaFormat = cudaVideoChromaFormat_420;
    decodeCaps.nBitDepthMinus8 = 0;

    CUresult result = cuvidGetDecoderCaps(&decodeCaps);
    if (result != CUDA_SUCCESS) {
      std::cerr << "Failed to get decoder caps: " << result << std::endl;
      return;
    }

    if (!decodeCaps.bIsSupported) {
      std::cout << "Codec " << codecName << " is not supported on this GPU."
                << std::endl;
      return;
    }

    std::cout << "Codec: " << codecName << std::endl;
    std::cout << "Supported: " << (decodeCaps.bIsSupported ? "Yes" : "No")
              << std::endl;
    std::cout << "Maximum Width: " << decodeCaps.nMaxWidth << std::endl;
    std::cout << "Maximum Height: " << decodeCaps.nMaxHeight << std::endl;
    std::cout << "Maximum Macroblocks: " << decodeCaps.nMaxMBCount << std::endl;
    std::cout << "Minimum Width: " << decodeCaps.nMinWidth << std::endl;
    std::cout << "Minimum Height: " << decodeCaps.nMinHeight << std::endl;
  }
};

void printUsage(const char *programName) {
  std::cout << "Usage: " << programName << " [options]" << std::endl;
  std::cout << "Options:" << std::endl;
  std::cout << "  -a, --all        Query all supported codecs (default)"
            << std::endl;
  std::cout << "  -c, --codec <codec>  Query specific codec (h264, hevc, vp9, "
               "av1, etc.)"
            << std::endl;
  std::cout << "  -h, --help       Show this help message" << std::endl;
  std::cout << std::endl;
  std::cout << "Examples:" << std::endl;
  std::cout << "  " << programName << "                 # Query all codecs"
            << std::endl;
  std::cout << "  " << programName << " -c h264         # Query H.264 only"
            << std::endl;
  std::cout << "  " << programName << " --codec hevc    # Query HEVC only"
            << std::endl;
}

cudaVideoCodec getCodecFromString(const std::string &codecStr) {
  std::string lower = codecStr;
  std::transform(lower.begin(), lower.end(), lower.begin(), ::tolower);

  if (lower == "h264")
    return cudaVideoCodec_H264;
  if (lower == "hevc" || lower == "h265")
    return cudaVideoCodec_HEVC;
  if (lower == "vp8")
    return cudaVideoCodec_VP8;
  if (lower == "vp9")
    return cudaVideoCodec_VP9;
  if (lower == "av1")
    return cudaVideoCodec_AV1;
  if (lower == "mpeg2")
    return cudaVideoCodec_MPEG2;
  if (lower == "mpeg4")
    return cudaVideoCodec_MPEG4;
  if (lower == "vc1")
    return cudaVideoCodec_VC1;
  if (lower == "jpeg")
    return cudaVideoCodec_JPEG;

  return cudaVideoCodec_NumCodecs; // Invalid codec
}

int main(int argc, char *argv[]) {
  NvidiaDecoderCapsQuery query;

  if (argc == 1) {
    query.queryAllCodecsMaxResolution();
    return 0;
  }

  for (int i = 1; i < argc; i++) {
    std::string arg = argv[i];

    if (arg == "-h" || arg == "--help") {
      printUsage(argv[0]);
      return 0;
    } else if (arg == "-a" || arg == "--all") {
      query.queryAllCodecsMaxResolution();
      return 0;
    } else if (arg == "-c" || arg == "--codec") {
      if (i + 1 >= argc) {
        std::cerr << "Error: --codec requires a codec name" << std::endl;
        printUsage(argv[0]);
        return 1;
      }

      std::string codecStr = argv[++i];
      cudaVideoCodec codec = getCodecFromString(codecStr);

      if (codec == cudaVideoCodec_NumCodecs) {
        std::cerr << "Error: Unknown codec '" << codecStr << "'" << std::endl;
        return 1;
      }

      query.querySpecificCodec(codec, codecStr);
      return 0;
    } else {
      std::cerr << "Error: Unknown option '" << arg << "'" << std::endl;
      printUsage(argv[0]);
      return 1;
    }
  }

  return 0;
}

Save this code snippet as nvidia_decoder_caps.cpp and build it inside a DeepStream 8.0 container, assuming you have downloaded Video_Codec_Interface_13.0.19.

gcc -o nvidia_decoder_caps nvidia_decoder_caps.cpp  -I/root/videocaps/Video_Codec_Interface_13.0.19/Interface -I/usr/local/cuda/include/ -L/usr/lib/x86_64-linux-gnu/  -lnvcuvid -lstdc++ -lcuda
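After building, you can run the tool with no arguments to query all codecs, or pass a specific codec (the options come from the tool's own usage text), for example:

./nvidia_decoder_caps --codec hevc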

I am using an A40 GPU, and I got this result.

=== NVIDIA GPU Decoder Maximum Resolution Query ===
Querying maximum decode resolution for all supported codecs...

Codec: MPEG1 - Max Resolution: 4080x4080 (4K UHD supported)
Codec: MPEG2 - Max Resolution: 4080x4080 (4K UHD supported)
Codec: MPEG4 - Max Resolution: 2048x2048 (Full HD supported)
Codec: VC1 - Max Resolution: 2048x2048 (Full HD supported)
Codec: H264 - Max Resolution: 4096x4096 (4K UHD supported)
Codec: JPEG - Max Resolution: 32768x16384 (8K UHD supported)
H264_SVC: Not supported
H264_MVC: Not supported
Codec: HEVC - Max Resolution: 8192x8192 (8K UHD supported)
Codec: VP8 - Max Resolution: 4096x4096 (4K UHD supported)
Codec: VP9 - Max Resolution: 8192x8192 (8K UHD supported)
Codec: AV1 - Max Resolution: 8192x8192 (8K UHD supported)

=== Summary ===
Maximum decode resolution across all codecs: 32768x16384 (Best codec: JPEG)
Total pixels: 536870912 (536.871 Megapixels)

Is this available for use with DeepStream 6.3? I am currently deploying the application on DeepStream 6.3; is there any way to know where the code is and how to get it?

The code above can be compiled directly and has nothing to do with DeepStream; it only uses the APIs provided by the user-level driver of the Video Codec SDK.
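
For reference, the essential call sequence is small. Below is a minimal sketch distilled from the listing above (same headers and link flags; it assumes device 0 and checks H.264 only):

#include <cstdio>
#include <cstring>
#include <cuda.h>
#include <nvcuvid.h>

int main() {
  // Initialize the CUDA driver API and create a context on device 0.
  if (cuInit(0) != CUDA_SUCCESS) return 1;
  CUdevice dev;
  if (cuDeviceGet(&dev, 0) != CUDA_SUCCESS) return 1;
  CUcontext ctx;
  if (cuCtxCreate(&ctx, 0, dev) != CUDA_SUCCESS) return 1;

  // Ask NVDEC for its H.264 limits at 4:2:0 chroma, 8-bit depth.
  CUVIDDECODECAPS caps;
  memset(&caps, 0, sizeof(caps));
  caps.eCodecType = cudaVideoCodec_H264;
  caps.eChromaFormat = cudaVideoChromaFormat_420;
  caps.nBitDepthMinus8 = 0;
  if (cuvidGetDecoderCaps(&caps) == CUDA_SUCCESS && caps.bIsSupported)
    printf("H264 max: %ux%u\n", caps.nMaxWidth, caps.nMaxHeight);

  cuCtxDestroy(ctx);
  return 0;
}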

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
