How to obtain the buffer of a frame of type kGpuCuda in the custom processing of the nvinferserver component

I want to check whether the preprocessed data meets expectations, so I want to obtain the image output in the extraInputProcess of IInferCustomProcessor. How can I get the frame’s image data there?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used

Which sample do you use?

You can refer to the following code

/opt/nvidia/deepstream/deepstream/sources/libs/ds3d/datafilter/lidar_preprocess/lidar_preprocess_filter_impl.cpp

In fact, this depends on the configuration file you use. Here is the key code:

FrameGuard lidarFrame;
code = dataMap.getGuardData(key, lidarFrame);
if (!isGood(code)) {
    LOG_ERROR("dataMap getGuardData %s kLidarFrame failed\n", key.c_str());
    return code;
}
radars.emplace_back((float*)lidarFrame->base());
shape = lidarFrame->shape();

The above code gets the buffer pointer and the shape of the tensor.
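If you just want to sanity-check those values on the host, here is a minimal sketch, assuming the frame holds float data in CUDA device memory (dumpFirstValues is an illustrative helper, not a DeepStream API):

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Copy the first `count` floats of a device buffer to host and print them.
static void dumpFirstValues(const float* devPtr, size_t count)
{
    std::vector<float> host(count);
    cudaError_t err = cudaMemcpy(host.data(), devPtr, count * sizeof(float),
                                 cudaMemcpyDeviceToHost);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed: %s\n", cudaGetErrorString(err));
        return;
    }
    for (size_t i = 0; i < count; ++i)
        printf("val[%zu] = %f\n", i, host[i]);
}

// e.g. right after the snippet above:
// dumpFirstValues((float*)lidarFrame->base(), 16);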

I apologize for not being clear. I am using the nvinferserver element, and I want to obtain the image data during extraInputProcess. Below is the official demo:

#include <inttypes.h>
#include <unistd.h>
#include <cassert>
#include <unordered_map>
#include "infer_custom_process.h"
#include "nvbufsurface.h"
#include "nvdsmeta.h"

typedef struct _GstBuffer GstBuffer;
using namespace nvdsinferserver;

#define INFER_ASSERT assert

class NvInferServerCustomProcess : public IInferCustomProcessor {
     // memtype for ``extraInputs``, set ``kGpuCuda`` for performance
     void supportInputMemType(InferMemType& type) override { type = InferMemType::kGpuCuda; }
     // for LSTM loop. return false if not required.
     bool requireInferLoop() const override { return false; }
     // skip extraInputProcess if there is no extra input tensors
     NvDsInferStatus extraInputProcess(const std::vector<IBatchBuffer*>& primaryInputs, std::vector<IBatchBuffer*>& extraInputs, const IOptions* options) override {
          return NVDSINFER_SUCCESS;
     }
     // output tensor postprocessing function.
     NvDsInferStatus inferenceDone(const IBatchArray* outputs, const IOptions* inOptions) override
     {
          GstBuffer* gstBuf = nullptr;
          std::vector<uint64_t> streamIds;
          NvDsBatchMeta* batchMeta = nullptr;
          std::vector<NvDsFrameMeta*> frameMetaList;
          NvBufSurface* bufSurf = nullptr;
          std::vector<NvBufSurfaceParams*> surfParamsList;
          int64_t unique_id = 0;

          INFER_ASSERT (inOptions->getValueArray(OPTION_NVDS_SREAM_IDS, streamIds) == NVDSINFER_SUCCESS);
          INFER_ASSERT(inOptions->getObj(OPTION_NVDS_BUF_SURFACE, bufSurf) == NVDSINFER_SUCCESS);
          INFER_ASSERT(inOptions->getObj(OPTION_NVDS_BATCH_META, batchMeta) == NVDSINFER_SUCCESS);
          INFER_ASSERT(inOptions->getInt(OPTION_NVDS_UNIQUE_ID, unique_id) == NVDSINFER_SUCCESS);
          INFER_ASSERT(inOptions->getValueArray(OPTION_NVDS_BUF_SURFACE_PARAMS_LIST, surfParamsList) == NVDSINFER_SUCCESS);
          INFER_ASSERT(inOptions->getValueArray(OPTION_NVDS_FRAME_META_LIST, frameMetaList) == NVDSINFER_SUCCESS);

          uint64_t nsTimestamp = UINT64_MAX; // nanoseconds
          if (inOptions->hasValue(OPTION_TIMESTAMP)) {
               INFER_ASSERT(inOptions->getUInt(OPTION_TIMESTAMP, nsTimestamp) == NVDSINFER_SUCCESS);
          }

          std::unordered_map<std::string, SharedIBatchBuffer> tensors;
          for (uint32_t i = 0; i < outputs->getSize(); ++i) {
               SharedIBatchBuffer outTensor = outputs->getSafeBuf(i);
               INFER_ASSERT(outTensor);
               auto desc = outTensor->getBufDesc();
               tensors.emplace(desc.name, outTensor);
          }

          // parsing output tensors
          float* boxesPtr = (float*)tensors["output_bbox"]->getBufPtr(0);
          auto& bboxDesc = tensors["output_bbox"]->getBufDesc();
          float* scoresPtr = (float*)tensors["output_score"]->getBufPtr(0);
          float* numPtr = (float*)tensors["output_bbox_num"]->getBufPtr(0);
          int32_t batchSize = bboxDesc.dims.d[0]; // e.g. tensor shape [Batch, num, 4]

          std::vector<std::vector<NvDsInferObjectDetectionInfo>> batchedObjs(batchSize);
          // parsing data into batchedObjs
          ...
          // attach to NvDsBatchMeta
          for (int iB = 0; iB < batchSize; ++iB) {
               const auto& objs = batchedObjs[iB];
               for (const auto& obj : objs) {
                    NvDsObjectMeta* objMeta = nvds_acquire_obj_meta_from_pool(batchMeta);
                    objMeta->unique_component_id = unique_id;
                    objMeta->confidence = obj.detectionConfidence;
                    objMeta->class_id = obj.classId;
                    objMeta->rect_params.left = obj.left;
                    objMeta->rect_params.top = obj.top;
                    objMeta->rect_params.width = obj.width;
                    objMeta->rect_params.height = obj.height;
                    // other settings
                    ...
                    // add NvDsObjectMeta obj into NvDsFrameMeta frame.
                    nvds_add_obj_meta_to_frame(frameMetaList[iB], objMeta, NULL);
               }
          }
          return NVDSINFER_SUCCESS;
     }
};

extern "C" {
IInferCustomProcessor* CreateInferServerCustomProcess(const char* config, uint32_t configLen)
{
     return new NvInferServerCustomProcess();
}
}

I want to obtain image data in GPU CUDA memory type here.

NvDsInferStatus extraInputProcess(const std::vector<IBatchBuffer*>& primaryInputs, std::vector<IBatchBuffer*>& extraInputs, const IOptions* options) override {
    return NVDSINFER_SUCCESS;
}
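For completeness, here is a minimal sketch of how that stub can at least inspect the buffers it receives, assuming float tensors in CUDA device memory (kGpuCuda) and the demo's includes plus <cuda_runtime.h>, <algorithm> and <vector>. As noted in the reply below, the default normalized tensor is not exposed here, so treat this only as a mechanical example of reading an IBatchBuffer:

NvDsInferStatus extraInputProcess(
    const std::vector<IBatchBuffer*>& primaryInputs,
    std::vector<IBatchBuffer*>& extraInputs, const IOptions* options) override
{
    for (IBatchBuffer* buf : primaryInputs) {
        const auto& desc = buf->getBufDesc();
        // element count of one batch item, computed from the tensor dims
        size_t numElems = 1;
        for (uint32_t i = 0; i < desc.dims.numDims; ++i)
            numElems *= desc.dims.d[i];
        // copy a small prefix of batch item 0 to host for a sanity check
        size_t n = std::min<size_t>(numElems, 16);
        std::vector<float> host(n);
        if (cudaMemcpy(host.data(), buf->getBufPtr(0), n * sizeof(float),
                       cudaMemcpyDeviceToHost) != cudaSuccess)
            continue;
        printf("input %s: first value %f\n", desc.name.c_str(), host[0]);
    }
    return NVDSINFER_SUCCESS;
}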

Due to differences between different devices and versions, please provide complete information as applicable to your setup.

I don’t understand your purpose.

Do you want the original image or the normalized image?

You can get the original image from the NvBufSurface.
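If it is the original image you want, here is a minimal sketch inside extraInputProcess, assuming the IOptions keys used by the demo's inferenceDone above are also populated here, and that the surface lives in CUDA device memory (dGPU); on Jetson you may need NvBufSurfaceMap/NvBufSurfaceSyncForCpu instead of a plain cudaMemcpy:

// inside extraInputProcess(..., const IOptions* options)
NvBufSurface* bufSurf = nullptr;
if (options && options->hasValue(OPTION_NVDS_BUF_SURFACE)) {
    INFER_ASSERT(options->getObj(OPTION_NVDS_BUF_SURFACE, bufSurf) == NVDSINFER_SUCCESS);
    // raw (un-normalized) pixels of the first frame in the batch
    const NvBufSurfaceParams& frame0 = bufSurf->surfaceList[0];
    std::vector<uint8_t> host(frame0.dataSize);
    cudaMemcpy(host.data(), frame0.dataPtr, frame0.dataSize,
               cudaMemcpyDeviceToHost);
    printf("frame0: %ux%u colorFormat=%d firstByte=%u\n",
           frame0.width, frame0.height, (int)frame0.colorFormat,
           (unsigned)host[0]);
}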

extraInputProcess is intended for user-defined normalization of the extra input tensors. It cannot be used to dump the default normalized tensor.
You can refer to NvDsInferStatus NetworkPreprocessor::transformImpl in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinferserver/infer_preprocess.cpp

This function can get the normalized tensor.
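If you go that route, here is a hypothetical debug helper you could call on the destination buffer inside transformImpl, assuming the normalized tensor is float data in CUDA device memory (the helper name and dump path are illustrative, not part of the DeepStream API):

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Dump a float GPU buffer to a raw binary file for offline inspection.
static void dumpNormalizedTensor(const void* devPtr, size_t numFloats,
                                 const char* path = "/tmp/normalized_tensor.bin")
{
    std::vector<float> host(numFloats);
    if (cudaMemcpy(host.data(), devPtr, numFloats * sizeof(float),
                   cudaMemcpyDeviceToHost) != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy of normalized tensor failed\n");
        return;
    }
    if (FILE* f = fopen(path, "wb")) {
        fwrite(host.data(), sizeof(float), numFloats, f);
        fclose(f);
    }
}

The dumped file can then be loaded offline (e.g. with numpy.fromfile) and compared against your expected preprocessing output.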

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
