Hi. I’m trying to write a simple sample application for the DCF tracker based on VPI3, and I’m running into a problem with the “vpiSubmitCropScalerBatch” function. After I submit a frame and the input data, I expect objPatches to come back with some non-zero data, but I always get zeros.
As suggested, the application is based on another sample application, the KLT tracker, which ships with VPI3.
Here is the relevant part of the code:
#define NUM_SEQUENCES 10
#define MAX_OBJECTS 10
...
VPIPayload cropPayload;
CHECK_STATUS(vpiCreateCropScaler(backend, NUM_SEQUENCES, MAX_OBJECTS, &cropPayload));
...
int patchWidth = createParams.featurePatchSize * createParams.hogCellSize; // it doesn't work with other values either
int patchHeight = patchWidth * MAX_OBJECTS;
...
// fetchFrame returns a VPI_IMAGE_FORMAT_RGB8 image
frames[0] = fetchFrame(frame);
// Debug: check the correctness of the input parameters
getNumObj(inObjects); // verify the VPI_ARRAY_TYPE_DCF_TRACKED_BOUNDING_BOX array contains objects
// Extract from the first frame the image patches of each object given its current bounding box.
CHECK_STATUS(vpiSubmitCropScalerBatch(stream, backend, cropPayload, frames, NUM_SEQUENCES,
inObjects, patchWidth, patchHeight, objPatches));
// Debug: check the output patches
extractImages(objPatches); // <- always zeros
Hi. I have found a solution for this problem: vpiStreamSync(stream) needs to be called after vpiSubmitCropScalerBatch. But a new problem has appeared. After calling vpiSubmitDCFTrackerLocalizeBatch for a new frame, nothing changes: the selected ROIs don’t move.
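For reference, a minimal sketch of the submit-then-sync ordering described above (stream, backend, cropPayload, frames, inObjects, objPatches and the other names are assumed from the earlier snippet; this is an illustration, not a complete program):

```c
// Sketch only: assumes the variables and CHECK_STATUS macro from the snippet above.
// Crop/scale the object patches out of the current frames (asynchronous)...
CHECK_STATUS(vpiSubmitCropScalerBatch(stream, backend, cropPayload, frames, NUM_SEQUENCES,
                                      inObjects, patchWidth, patchHeight, objPatches));
// ...then wait for the stream to finish before reading objPatches on the CPU.
// Without this sync, vpiImageLockData can observe buffers the stream has not written yet.
CHECK_STATUS(vpiStreamSync(stream));
extractImages(objPatches); // now contains the cropped patches
```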
What do you mean by the CUDA format? I create images with vpiImageCreate and access their data via vpiImageLockData/vpiImageUnlock.
I was able to make the tracker respond only when these VPIDCFTrackedBoundingBox fields are set: filterLR and filterChannelWeightsLR. But it doesn’t work correctly with them (the bounding boxes jitter in place or fly away), and it is unclear what they should be. I tried values from 0.0001 to 1; nothing works correctly.
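For what it’s worth, here is a hedged sketch of how one tracked bounding box might be initialized. The field names filterLR and filterChannelWeightsLR come from the post above; the other field names and the learning-rate values are assumptions, not documented defaults, so they should be checked against the VPI3 headers and the dcf_tracker sample:

```c
// Sketch: initializing one VPIDCFTrackedBoundingBox before the first CropScaler call.
// Field names besides filterLR/filterChannelWeightsLR are assumed; the learning
// rates are placeholders, not values confirmed by the VPI3 documentation.
VPIDCFTrackedBoundingBox obj = {0};
obj.bbox.left   = 100.0f;  // initial ROI in pixels of the input frame (example values)
obj.bbox.top    = 80.0f;
obj.bbox.width  = 64.0f;
obj.bbox.height = 64.0f;
obj.seqIndex    = 0;       // which input sequence this object belongs to (assumed field)
obj.state       = VPI_TRACKING_STATE_NEW; // new object: filter learned on first update (assumed field)
obj.filterLR               = 0.075f; // placeholder learning rate for the correlation filter
obj.filterChannelWeightsLR = 0.1f;   // placeholder learning rate for the channel weights
```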
What do you mean by the detected objects? I use this scheme: read frame → load it into a sequence → vpiSubmitCropScalerBatch → vpiSubmitDCFTrackerLocalizeBatch → read objects → vpiSubmitCropScalerBatch → vpiSubmitDCFTrackerUpdateBatch → swap inObjects and outObjects → repeat.
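Assuming that scheme, one iteration of the loop might look like the sketch below. Here localize() and update() are hypothetical wrappers standing in for vpiSubmitDCFTrackerLocalizeBatch and vpiSubmitDCFTrackerUpdateBatch, whose exact argument lists are in the VPI3 reference and are not reproduced here; the CropScaler call matches the earlier snippet:

```c
// Hedged sketch of one iteration of the scheme above; not a complete program.
frames[0] = fetchFrame(frame);  // read the next frame into the sequence

// 1) Crop patches at the previous object positions from the new frame.
CHECK_STATUS(vpiSubmitCropScalerBatch(stream, backend, cropPayload, frames, NUM_SEQUENCES,
                                      inObjects, patchWidth, patchHeight, objPatches));

// 2) Localize the objects in the new patches (inObjects -> outObjects).
localize(stream, dcfPayload, objPatches, inObjects, outObjects); // hypothetical wrapper
CHECK_STATUS(vpiStreamSync(stream));  // sync before reading outObjects on the CPU

// 3) Re-crop at the updated positions, then update the correlation filters.
CHECK_STATUS(vpiSubmitCropScalerBatch(stream, backend, cropPayload, frames, NUM_SEQUENCES,
                                      outObjects, patchWidth, patchHeight, objPatches));
update(stream, dcfPayload, objPatches, outObjects); // hypothetical wrapper
CHECK_STATUS(vpiStreamSync(stream));

// 4) Swap so the next frame starts from the updated objects.
VPIArray tmp = inObjects; inObjects = outObjects; outObjects = tmp;
```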
To get better tracking quality, users need to handle the keypoint detection and refinement themselves.
For our tracker sample in Deepstream, this is handled by the deep learning detector.