How to initialize the input layer of a custom model for nvinfer?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - Dev container on WSL with GPU A3000
• DeepStream Version - 7.0 (Container)
• JetPack Version (valid for Jetson only)
• TensorRT Version - 8.6.1
• NVIDIA GPU Driver Version (valid for GPU only) - 551.86
• Issue Type( questions, new requirements, bugs) - question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I am trying to set up a simple DeepStream pipeline that creates a stereo disparity map from two inputs using the ESS DNN model. Before starting with DeepStream I verified the model on its own, and it works well with TensorRT. I attached a probe function to customize the input, hoping this would act as the input initialization, but I get the error “Failed to initialize non-image input layers”. I need some direction on how to customize nvinfer or the input initialization in Python.

This is the deserialized engine info showing the input and output dimensions, followed by the error:

INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input_left      3x576x960       
1   INPUT  kFLOAT input_right     3x576x960       
2   OUTPUT kFLOAT output_conf     576x960         
3   OUTPUT kFLOAT output_left     576x960         

0:00:06.319003623 26086 0x56400d53df00 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/models/dnn_stereo_disparity_v4.0.0/ess.engine
0:00:06.343570956 26086 0x56400d53df00 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initNonImageInputLayers() <nvdsinfer_context_impl.cpp:1622> [UID = 1]: More than one input layers but custom initialization function not implemented
0:00:06.343609185 26086 0x56400d53df00 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1386> [UID = 1]: Failed to initialize non-image input layers
0:00:06.357136893 26086 0x56400d53df00 WARN                 nvinfer gstnvinfer.cpp:912:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:06.357977710 26086 0x56400d53df00 WARN                 nvinfer gstnvinfer.cpp:912:gst_nvinfer_start:<primary-inference> error: Config file path: disparity_pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(912): gst_nvinfer_start (): /GstPipeline:deepstream-pipeline/GstNvInfer:primary-inference:
Config file path: disparity_pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Exiting pipeline
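
For reference, the binding layout shown above can be double-checked directly with the TensorRT Python API, independently of DeepStream. This is only a minimal sketch; the engine path is taken from the log above.

import tensorrt as trt

ENGINE_PATH = "/opt/nvidia/deepstream/deepstream-7.0/samples/models/dnn_stereo_disparity_v4.0.0/ess.engine"

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open(ENGINE_PATH, "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# Print every I/O tensor with its direction, shape and dtype
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    mode = engine.get_tensor_mode(name)   # TensorIOMode.INPUT or OUTPUT
    shape = engine.get_tensor_shape(name)
    dtype = engine.get_tensor_dtype(name)
    print(f"{name:15s} {mode!s:25s} {shape} {dtype}")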

Here is the config file:
disparity_pgie_config.txt (2.8 KB)

This is my probe function, attached to the pgie sink pad:


# Imports assumed at the top of the script
import numpy as np
import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Probe function to extract the left/right frames and copy them back into the batch surfaces
def pgie_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    
    left_image = None
    right_image = None

    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        # Get the frame's image data
        frame_number = frame_meta.frame_num
        surface = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

        if frame_number == 0:
            left_image = np.array(surface, copy=True, order='C', dtype=np.uint8)
        elif frame_number == 1:
            right_image = np.array(surface, copy=True, order='C', dtype=np.uint8)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    if left_image is not None and right_image is not None:
        combined_images = np.array([left_image, right_image])

        # Replace the input image data with the array of arrays
        for i, img in enumerate(combined_images):
            surface = pyds.get_nvds_buf_surface(hash(gst_buffer), i)
            np.copyto(surface, img)

    return Gst.PadProbeReturn.OK
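
The probe is attached roughly like the sketch below (assuming the nvinfer element is named "primary-inference", as in the log above):

# Attach the probe to the nvinfer sink pad so it runs on every buffer
pgie = pipeline.get_by_name("primary-inference")
sink_pad = pgie.get_static_pad("sink")
if sink_pad:
    sink_pad.add_probe(Gst.PadProbeType.BUFFER, pgie_sink_pad_buffer_probe, 0)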

nvinferserver is recommended for the two-image tensor input case, since nvinfer only supports fixed-parameter input for non-image input layers.

The IInferCustomProcessor interface can be overridden to implement your own preprocessing and postprocessing. There is a sample in /opt/nvidia/deepstream/deepstream/sources/TritonOnnxYolo/nvdsinferserver_custom_impl_yolo/
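
On the Python side, the change amounts to creating a gst-nvinferserver element instead of nvinfer and pointing it at the new configuration, roughly as sketched below (the config filename is only a placeholder; the custom processor library itself is referenced from that config, not from Python):

# nvinferserver replaces nvinfer in the pipeline
pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
if not pgie:
    raise RuntimeError("Unable to create nvinferserver element")
pgie.set_property("config-file-path", "disparity_pgie_nvinferserver_config.txt")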

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
