gst-launch-1.0 pipeline frame size shrinks

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Xavier
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5

I am new to GStreamer pipelines. I have built a customized tracker library for the nvdstracker plugin based on deepstream-app, and my video sources are all 1920x1080. When I run deepstream-app, the frame size looks normal at 1920x1080 with a 2x2 tiled display. But when I run the gst-launch-1.0 command below, the frame size shrinks to about 370x640 with the 2x2 tiled display. I found this by looking at the bounding-box diagnostics: the bottom edge of a bbox never goes beyond about 370, and the right edge never goes beyond about 640. I guess there is a scaling factor, but I don’t know how to restore it correctly. Here is my command:

gst-launch-1.0 \
  filesrc location=/home/dewei/algo_share/trafficYolo_deepstream/videos/t1.mp4 ! qtdemux ! h265parse ! nvv4l2decoder ! m.sink_0 \
  nvstreammux name=m width=1920 height=1080 batch-size=4 ! queue ! \
  nvinfer config-file-path=/home/dewei/algo_share/trafficYolo_deepstream/config_infer_primary_yoloV3.txt ! \
  nvtracker ll-lib-file=/home/dewei/algo_share/gst-nvdstracker-iti/libnvds_customtracker.so ! queue ! \
  nvvideoconvert ! nvmultistreamtiler rows=2 columns=2 width=1920 height=1080 ! queue ! \
  nvdsosd ! queue ! nvegltransform ! nveglglessink sync=0 \
  filesrc location=/home/dewei/algo_share/trafficYolo_deepstream/videos/t2.mp4 ! qtdemux ! h265parse ! nvv4l2decoder ! m.sink_1 \
  filesrc location=/home/dewei/algo_share/trafficYolo_deepstream/videos/t3.mp4 ! qtdemux ! h265parse ! nvv4l2decoder ! m.sink_2 \
  filesrc location=/home/dewei/algo_share/trafficYolo_deepstream/videos/t4.mp4 ! qtdemux ! h265parse ! nvv4l2decoder ! m.sink_3
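
For reference, a quick way to check whether nvtracker is running at a smaller internal resolution than the streams is to look at its processing-size properties with gst-inspect-1.0 (tracker-width and tracker-height are the property names exposed by the DeepStream nvtracker plugin; the defaults printed by gst-inspect can differ between versions):

gst-inspect-1.0 nvtracker | grep tracker-

If those properties are left at their defaults, the low-level tracker library is handed frames and detection boxes scaled to that size rather than to 1920x1080.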

What do you mean by this?

In our video application with surveillance cameras, we have vehicles moving downward that reach the bottom of the FOV. For each bounding box we have the coordinates xl, xr, ytop and ybot. Theoretically, the ybot of a downward-moving car can reach the frame height, i.e. ybot=1080, when the car hits the bottom of the FOV. I notice that in deepstream-app ybot can reach its maximum of 1080, but in my gst-launch pipeline ybot only reaches about 380. In other words, it seems that the gst-launch pipeline resizes the frame by some scaling factor, with the frame height going from 1080 down to roughly 380.
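
As a quick sanity check on the suspected scale factor, using only the numbers already mentioned above:

1920 / 640 = 3.0   (the cap observed on the right bbox edge)
1080 / 3.0 = 360   (close to the ~370-380 cap observed on the bottom bbox edge)

So both axes are consistent with the frame being scaled down by roughly a factor of 3 before my tracker library sees it.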

Where and how did you get your bbox?

I wrote a custom tracker library by following the link here:

NvMOTStatus NvMOT_Process(NvMOTContextHandle contextHandle,
                          NvMOTProcessParams *pParams,
                          NvMOTTrackedObjBatch *pTrackedObjectsBatch);

At the very beginning, I extracted each bbox's information from the NvMOTProcessParams with the following code:

NvMOTStatus NvMOT_Process(NvMOTContextHandle contextHandle,
                          NvMOTProcessParams *pParams,
                          NvMOTTrackedObjBatch *pTrackedObjectsBatch)
{
  //std::cout << "Entered Process Call" << std::endl;
  //NvMOTFrame* MOTFrame = pParams->frameList[0];

  int numStreamsInBatch = pParams->numFrames;
 
  for (int i = 0; i < numStreamsInBatch; i++)
  {

    NvMOTFrame *fr = &pParams->frameList[i];
    //NvBufSurfaceParams* frameBuffer = fr->bufferList[i];
    NvMOTObjToTrackList *detectedObjectsList = &fr->objectsIn;
    NvMOTObjToTrack *detectedObjects = detectedObjectsList->list;
    NvMOTTrackedObjList *trackedObjectsList = &pTrackedObjectsBatch->list[i];
    int numDetectedObjects = detectedObjectsList->numFilled;
    int numTrackedObjects = trackedObjectsList->numFilled;
    int trackNum = 0;
    int matches, x, y;

    //1. Extract blob data from NvMOTObjToTrack
    for (size_t j = 0; j < numDetectedObjects; j++)
    {
      NvMOTObjToTrack *detected_object = &detectedObjects[j];
      int x = detected_object->bbox.x;
      int y = detected_object->bbox.y;
      int wd = detected_object->bbox.width;
      int ht = detected_object->bbox.height;
      blobsArray[j].x_left = x;
      blobsArray[j].y_top = y;
      blobsArray[j].width = wd;
      blobsArray[j].height = ht;
      blobsArray[j].area = wd * ht;
      blobsArray[j].x_centroid = x + wd / 2;  // bbox center x = left + width / 2
      blobsArray[j].y_centroid = y + ht / 2;  // bbox center y = top + height / 2
      blobsArray[j].id = j;
    }

    //2. Extract track data from NvMOTTrackedObj
    for (size_t j = 0; j < numTrackedObjects; j++)
    {
      NvMOTTrackedObj *tracked_object = &trackedObjectsList->list[j];
      Track *pTrack = &tracksArray[j];
      pTrack->id = tracked_object->trackingId % 1000;
      pTrack->is_confirmed = tracked_object->reserved[TRACK_CONFIRMATION_INDEX];
      pTrack->hits = tracked_object->reserved[TRACK_HITS_INDEX];
      pTrack->misses = tracked_object->reserved[TRACK_MISS_INDEX];
      pTrack->age = tracked_object->age;
      pTrack->match_score = 0.0;
      pTrack->classId = tracked_object->classId;
      pTrack->blob_data.x_left = tracked_object->bbox.x;
      pTrack->blob_data.y_top = tracked_object->bbox.y;
      pTrack->blob_data.width = tracked_object->bbox.width;
      pTrack->blob_data.height = tracked_object->bbox.height;
      usedTrackIDs[pTrack->id] = 1;
      tracked_object->trackingId = 0xFFFFFFFFFFFFFFFF;
    }

    //3. Perform IOU tracking
    numTrackedObjects = TrackObjects(tracksArray, blobsArray, numDetectedObjects, usedTrackIDs, blobTrackIdMap);
    trackNum = 0;

    //4. Store track data in NvMOTTrackedObj
    initCarInfo();

    for (int trackIdx = 0; trackIdx < MAX_TRACKED_OBJECTS; trackIdx++)
    {
      Track *pTrack = &tracksArray[trackIdx];
      carInfo *pmc = &car[i][trackIdx];

      if (pTrack->id > -1)
      {
        NvMOTTrackedObj *tracked_object = &trackedObjectsList->list[trackNum];
        tracked_object->trackingId = ((i + 1) * 1000) + pTrack->id;
        tracked_object->reserved[TRACK_CONFIRMATION_INDEX] = pTrack->is_confirmed;
        tracked_object->reserved[TRACK_HITS_INDEX] = pTrack->hits;
        tracked_object->reserved[TRACK_MISS_INDEX] = pTrack->misses;
        tracked_object->age = pTrack->age;
        if (pTrack->misses == 0)
        {
          size_t objToTrackIdx = pTrack->blob_data.id;
          NvMOTObjToTrack *detected_object = &detectedObjects[objToTrackIdx];
          tracked_object->classId = detected_object->classId;
          tracked_object->bbox = {detected_object->bbox.x,
                                  detected_object->bbox.y,
                                  detected_object->bbox.width,
                                  detected_object->bbox.height};
        }
        else
        {
          tracked_object->classId = pTrack->classId;
          tracked_object->bbox = {pTrack->blob_data.x_left,
                                  pTrack->blob_data.y_top,
                                  pTrack->blob_data.width,
                                  pTrack->blob_data.height};
        }

The detected_object struct stores all the bbox info for each object, and I pass it into my pTrack struct for further processing.
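
For diagnosing the resolution question from inside the library, here is a minimal sketch that prints the frame size nvtracker actually hands to NvMOT_Process (it reuses the commented-out bufferList line from the code above; field names as in the low-level tracker API header, and it assumes <iostream> is included):

    // Inside the per-stream loop of NvMOT_Process(), next to the existing fr pointer:
    if (fr->numBuffers > 0 && fr->bufferList != nullptr)
    {
      // NvBufSurfaceParams describes the frame exactly as the low-level tracker sees it.
      // If this prints e.g. 640x368 instead of 1920x1080, then the bbox coordinates
      // received in objectsIn are in that smaller processing resolution.
      NvBufSurfaceParams *frameBuffer = fr->bufferList[0];
      std::cout << "stream " << fr->streamID << " tracker frame size: "
                << frameBuffer->width << "x" << frameBuffer->height << std::endl;
    }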

It’s solved by setting the tracker processing size on nvtracker, i.e. adding the tracker-width and tracker-height properties to the nvtracker element.
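
For anyone hitting the same issue, the change is only on the nvtracker element in the gst-launch command above. A sketch of just that part of the pipeline with the processing size set explicitly (1920x1080 here to match the sources; the right values depend on what your low-level tracker library expects):

... ! nvtracker ll-lib-file=/home/dewei/algo_share/gst-nvdstracker-iti/libnvds_customtracker.so tracker-width=1920 tracker-height=1080 ! queue ! ...

With the processing size matching the stream size, the bbox coordinates seen inside the tracker library are in full 1920x1080 coordinates and the ~370/640 caps disappear.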