Cannot find the objectDetector_FasterRCNN example

I have DeepStream SDK 7.0 installed on my Jetson AGX Orin, and as per the documentation (Using a Custom Model with DeepStream — DeepStream documentation 6.4 documentation) I was trying to find the FasterRCNN example, which should be at the following path:
/opt/nvidia/deepstream/deepstream/sources/objectDetector_FasterRCNN
but I am not able to find it.

From that example, I want to understand how to use a custom model with two input layers, i.e., one image input layer and one non-image input layer.

We no longer support FasterRCNN in SDK 7.0. Can you describe the input layers of your model in detail? We'll discuss how to deploy that with DeepStream.

Okay, so I am actually trying to build a pipeline like the following:
face detection → face recognition → face swap

My face swap model has two input layers, as you can see in the following error:

The first layer is an image input layer named "target", and the second is a 1x512-dimensional layer named "source" that takes embeddings from the face recognition model as input.

I was using the "operate-on-gie-id" flag in the config file to pass the embeddings, but got the above error.
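For reference, this is roughly how that flag is normally used in a deepstream-app config (section and file names here are illustrative): operate-on-gie-id only selects which upstream GIE's detected objects this model runs on; it does not route tensor data into a second input layer.

[secondary-gie0]
enable=1
gie-unique-id=3
# run this model on the objects detected by the GIE with unique id 1
operate-on-gie-id=1
config-file=face_swap_sgie_config.txt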

So could you please point me to any other reference example that has a non-image input layer, or provide complete guidance on how to do this using deepstream-app?

You can consider using our nvdspreprocess plugin to customize your own tensor data. We will consider adding a similar demo in the future; please keep an eye on subsequent releases.

Actually, I used the NvDsInferInitializeInputLayers function to pass face embeddings from a file to the non-image input layer, and I have set my face swap model to operate on the output faces from the face detection model using the "operate-on-gie-id=1" flag.
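For reference, a minimal sketch of that approach, assuming the embedding file ("embeddings.bin" is a placeholder name) holds 512 raw floats for the "source" layer described above:

#include <stdio.h>
#include <string.h>
#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Sketch: preload the non-image "source" layer with an embedding read
 * from a raw float file before inference starts. */
extern "C" bool
NvDsInferInitializeInputLayers (std::vector<NvDsInferLayerInfo> const &inputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo, unsigned int maxBatchSize)
{
  for (auto const &layer : inputLayersInfo) {
    if (strcmp (layer.layerName, "source") != 0)
      continue;
    size_t n = layer.inferDims.numElements;   /* 512 for a 1x512 layer */
    std::vector<float> emb (n);
    FILE *fp = fopen ("embeddings.bin", "rb");
    if (!fp || fread (emb.data (), sizeof (float), n, fp) != n) {
      if (fp) fclose (fp);
      return false;
    }
    fclose (fp);
    /* layer.buffer holds maxBatchSize * n floats; replicate the embedding
     * across the batch */
    for (unsigned int b = 0; b < maxBatchSize; b++)
      memcpy ((float *) layer.buffer + b * n, emb.data (), n * sizeof (float));
  }
  return true;
}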

But now, can you tell me: as per the following pipeline
face detection → face recognition → face swap

how can I feed one of the embeddings generated by the recognition model directly into the non-image input layer of the face swap model?

One layer of my face swap model takes faces as image input, obtained from the face detection model, and the second layer takes a single face embedding generated by the face recognition model.

You can try the pipeline below to implement what you need.

face detection → face recognition → preprocess → face swap

You can refer to our deepstream-pose-classification sample to learn how to customize the tensor data yourself.
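At the element level, that suggestion maps to something like the sketch below. The input-tensor-meta property on gst-nvinfer tells the face swap GIE to consume the tensors that nvdspreprocess attached as metadata instead of doing its own preprocessing (element names are real; the annotations are assumptions about this particular pipeline):

nvstreammux
  -> nvinfer        (PGIE,  face detection)
  -> nvinfer        (SGIE1, face recognition, operate-on-gie-id=1)
  -> nvdspreprocess (custom lib builds the "target"/"source" tensors)
  -> nvinfer        (SGIE2, face swap, input-tensor-meta=true)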

Actually, I am still not able to understand how to do this, as I want to do the following:

obtain keypoints (kps) from the face detector → use the kps to crop a 128x128 region around each face from the full frame → perform some cv2-based affine transformations → feed the image to the model.

Can you please help me with this?

We have many similar demos for this. You can refer to the link I attached before, like deepstream-gaze-app.
You need to implement the algorithm yourself in the nvdsvideotemplate plugin of those demos. You can learn how to get the kps from the demo.
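Since the crop/align step itself is plain OpenCV, here is a minimal sketch of that stage (the 5-point reference template values below are placeholders, not taken from any demo):

#include <vector>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>

/* Align one detected face to a 128x128 model input using its keypoints.
 * 'kps' are the 5 landmarks from the face detector; 'ref' is a placeholder
 * reference template in 128x128 coordinates. */
cv::Mat alignFace (const cv::Mat &frame, const std::vector<cv::Point2f> &kps)
{
  static const std::vector<cv::Point2f> ref = {
      {38.3f, 51.7f}, {89.7f, 51.5f}, {64.0f, 71.7f},
      {41.5f, 92.4f}, {86.5f, 92.2f}};
  /* similarity transform: rotation + uniform scale + translation */
  cv::Mat M = cv::estimateAffinePartial2D (kps, ref);
  cv::Mat aligned;
  cv::warpAffine (frame, aligned, M, cv::Size (128, 128));
  return aligned;
}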

I have looked at the TAO app for pre-processing. I want to modify the input frame and apply a cv2 warpAffine on the face, but I still have a few questions about it:

  1. Is implementing a custom pre-process the correct approach for this?
  2. If so, should I take a copy of the inbuf of NvDsPreProcessBatch *batch inside the CustomTensorPreparation function and then apply warpAffine?
  3. After warpAffine, do I need to fill in the NvDsPreProcessCustomBuf buf?
  4. How can I find the dimensions or size of inbuf?
  5. Using cv2 gives me the following error about CUDA support; what should I do about it:

Please refer to the demo I attached to learn how to use preprocess first.
You should set the width, height, and dims in the config file, like config_preprocess_bodypose_classification.txt. Then you can get the size and dims in your code.
You can refer to the source code to learn how to fill the buffer: nvdspreprocess_lib.cpp.
About the cv2 error, you can learn how to compile OpenCV with CUDA support by referring to the JEP.
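To make those pointers concrete: the nvdspreprocess [property] group carries the size and dims (the values below are guesses for a 128x128 RGB face input; the library name is a placeholder), and the custom library's CustomTensorPreparation fills a buffer acquired from the plugin's pool rather than a copy of inbuf. A rough sketch of both, modeled on nvdspreprocess_lib.cpp:

[property]
enable=1
target-unique-ids=3
processing-width=128
processing-height=128
network-input-shape=8;3;128;128
network-color-format=0
tensor-data-type=0
tensor-name=target
custom-lib-path=libcustom_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

extern "C" NvDsPreProcessStatus
CustomTensorPreparation (CustomCtx *ctx, NvDsPreProcessBatch *batch,
    NvDsPreProcessCustomBuf *&buf, CustomTensorParams &tensorParam,
    NvDsPreProcessAcquirer *acquirer)
{
  /* acquire a tensor buffer from the plugin's pool; this is the buffer
   * you fill, not a copy of inbuf */
  buf = acquirer->acquire ();

  /* fill buf->memory_ptr with the prepared tensor, e.g. the warped
   * 128x128 faces for every unit in batch->units */
  /* ... custom warpAffine / normalization goes here ... */

  /* report how many batch entries were actually filled */
  tensorParam.params.network_input_shape[0] = (int) batch->units.size ();
  return NVDSPREPROCESS_SUCCESS;
}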

If I want to do a warpAffine on the full frame after the PGIE has inferred and before the SGIE, do I need to use the GstBuffer inbuf or the converted_frame_ptr?

GstBuffer inbuf

I printed the height and width by accessing converted_frame_ptr, and it is 128, i.e., the input size of the SGIE. But I want the FULL FRAME for my task, so how can I get the full frame?

If you want the FULL FRAME, you should get it from the inbuf.
For how to get the image from the GstBuffer and process it with OpenCV, you can refer to the function convert_batch_and_push_to_process_thread in our source code: sources/gst-plugins/gst-dsexample/gstdsexample_optimized.cpp.
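The relevant pattern from that file, reduced to a sketch with error handling trimmed: the NvBufSurface behind inbuf carries the full decoded frames, independent of the 128x128 SGIE scaling buffer.

/* sketch: inside the plugin's buffer-handling function */
GstMapInfo in_map_info;
memset (&in_map_info, 0, sizeof (in_map_info));
if (gst_buffer_map (inbuf, &in_map_info, GST_MAP_READ)) {
  NvBufSurface *surface = (NvBufSurface *) in_map_info.data;
  /* surfaceList[batch_id] is the full decoded frame (e.g. 1920x1080),
   * not the 128x128 SGIE scaling buffer */
  guint width  = surface->surfaceList[0].width;
  guint height = surface->surfaceList[0].height;
  /* ... process the full frame here ... */
  gst_buffer_unmap (inbuf, &in_map_info);
}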

Thanks. As per your advice, I looked at the gstdsexample_optimized.cpp code, but I found that in the function convert_batch_and_push_to_process_thread the GstBuffer inbuf is not being used.

Yes. It just shows how to map the buffer out. Then you can add your own processing. Or you can also refer to 142683 to learn how to process the buffer of the NvBufSurface.
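For CPU-side (OpenCV) access to such a surface, the usual map/sync pattern looks like this sketch (plane 0 of an NV12 surface is assumed; check colorFormat on your side):

#include <opencv2/core.hpp>
#include "nvbufsurface.h"

/* Sketch: CPU-side access to frame 0 of an NvBufSurface. */
static void
process_surface_on_cpu (NvBufSurface *surface)
{
  if (NvBufSurfaceMap (surface, -1, -1, NVBUF_MAP_READ) != 0)
    return;
  NvBufSurfaceSyncForCpu (surface, -1, -1);   /* make the CPU view coherent */

  NvBufSurfaceParams *p = &surface->surfaceList[0];
  /* plane 0 is the Y plane for NV12; pitch is the padded row stride */
  unsigned char *y_plane = (unsigned char *) p->mappedAddr.addr[0];
  cv::Mat y (p->height, p->width, CV_8UC1, y_plane, p->planeParams.pitch[0]);

  /* ... run OpenCV processing on 'y' here ... */

  NvBufSurfaceUnMap (surface, -1, -1);
}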

Thanks for the help. I have implemented the code you provided above, but I am still not getting a proper full-frame image.
I am using the following code:
issue_1.txt (3.2 KB)
and the following frame is being saved, which is not correct:

(please note I am using DS 7.0)

It could be a problem with the memory layout. Could you check the num_planes of the NvBufSurface on your side? If there are multiple planes, you should get the raw data from each plane according to the color format.

I checked num_planes; there are two planes, with the following stride and offset values:

Number of Planes: 2
Plane 0: Stride = 1920, Offset = 0
Plane 1: Stride = 1920, Offset = 2228224

The offset of the second plane seems larger than expected, since I would expect
Offset for Plane 1 = Height of Plane 0 × Stride of Plane 0
= 1080 × 1920 = 2073600 < 2228224

So I guess there is some issue. Also, I am now using the following modified code:
issue_1.txt (7.2 KB)
and now I am getting the print "nvbufsurface: Wrong buffer index (0)" in my terminal output, along with other prints.

So can you please guide me on where in my code I am making a mistake?

You can refer to the code below to dump the image to a file and check it.

#include <stdio.h>
#include <string.h>
#include "nvbufsurface.h"

/* Copy srcSurf into system memory and dump its raw plane data to a file,
 * stripping any pitch padding in the process. */
static int
NvBufSurfaceWriteToFile (NvBufSurface *srcSurf, const char *fileName, const char *mode)
{
    NvBufSurfaceCreateParams params;
    NvBufSurface *opSurf = NULL;
    FILE *fp = NULL;
    int ret = -1;
    NvBufSurfaceParams *surfparams;
    unsigned char *ptr = NULL;

    /* allocate a CPU-accessible surface with the same geometry and format */
    memset (&params, 0, sizeof (NvBufSurfaceCreateParams));
    params.width  = srcSurf->surfaceList[0].width;
    params.height = srcSurf->surfaceList[0].height;
    params.gpuId  = srcSurf->gpuId;
    params.memType = NVBUF_MEM_SYSTEM;
    params.colorFormat = srcSurf->surfaceList[0].colorFormat;

    if (NvBufSurfaceCreate (&opSurf, srcSurf->batchSize, &params) < 0) {
        printf ("nvbufsurface: failed to write to file\n");
        goto exit;
    }
    /* copy from the original (possibly device) memory into system memory */
    if (NvBufSurfaceCopy (srcSurf, opSurf) < 0) {
        printf ("nvbufsurface: failed to write to file\n");
        goto exit;
    }

    fp = fopen (fileName, mode);
    if (!fp) {
        goto exit;
    }

    /* walk every frame in the batch and every plane in the frame */
    for (unsigned int k = 0; k < srcSurf->batchSize; k++) {
        surfparams = &opSurf->surfaceList[k];
        for (unsigned int i = 0; i < surfparams->planeParams.num_planes; i++) {
            ptr = (unsigned char *) surfparams->dataPtr + surfparams->planeParams.offset[i];
            /* write one row at a time so the pitch padding is dropped */
            for (unsigned int j = 0; j < surfparams->planeParams.height[i]; j++) {
                fwrite (ptr, 1, surfparams->planeParams.width[i] * surfparams->planeParams.bytesPerPix[i], fp);
                ptr += surfparams->planeParams.pitch[i];
            }
        }
    }
    fclose (fp);

    ret = 0;
exit:
    if (opSurf) {
        NvBufSurfaceDestroy (opSurf);
    }
    return ret;
}
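Since the row loop above already strips the pitch padding, the dump is tightly packed. Assuming the surface is a standard 1080p NV12 buffer, it can be sanity-checked with, for example, ffmpeg -f rawvideo -pixel_format nv12 -video_size 1920x1080 -i dump.raw frame.png (adjust the pixel format and size to match your surface).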