How to set a custom tracker in DeepStream with detection interval > 0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
NVIDIA GeForce GTX 1650

• DeepStream Version
6.2

• JetPack Version (valid for Jetson only)

• TensorRT Version
TensorRT-8.6.1.6

• NVIDIA GPU Driver Version (valid for GPU only)
Driver Version: 525.125.06

• Issue Type( questions, new requirements, bugs)
questions

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

# pipeline
filesrc->h264parse->nvv4l2decoder->nvstreammux->nvinfer(yolov5)->custom tracker->nvvideoconvert->nvdsosd->nv3dsink
# custom tracker context code
NvMOTContext::NvMOTContext(const NvMOTConfig &configIn, NvMOTConfigResponse &configResponse) {
    custom_config_file = configIn.customConfigFilePath;
    configResponse.summaryStatus = NvMOTConfigStatus_OK;
}

NvMOTStatus NvMOTContext::processFrame(const NvMOTProcessParams *params, NvMOTTrackedObjBatch *pTrackedObjectsBatch) {
    for (uint streamIdx = 0; streamIdx < pTrackedObjectsBatch->numFilled; streamIdx++){
        NvMOTTrackedObjList   *trackedObjList = &pTrackedObjectsBatch->list[streamIdx];
        NvMOTFrame            *frame          = &params->frameList[streamIdx];

        /* get tracker per stream */
        if (NanoTrackMap.find(frame->streamID) == NanoTrackMap.end()) {
            NanoTrackMap[frame->streamID] = std::make_shared<TrackerNanoTRT>(custom_config_file);
        }
        
        /* get input frame */
        NvBufSurfaceParams *surfaceParams = frame->bufferList[0];
        cv_frame = cv::Mat(surfaceParams->height, surfaceParams->width, CV_8UC4, (char*)surfaceParams->dataPtr);
        cv::cvtColor(cv_frame, cv_frame, cv::COLOR_RGBA2BGR);

        /* get input rect */
        if (frame->objectsIn.numFilled) {
            objToTrack = &frame->objectsIn.list[0];
            std::cout << ">>> [detect]  class idx: " << ((NvDsObjectMeta*)(objToTrack->pPreservedData))->class_id << std::endl;
            confidence  = objToTrack->confidence;
            class_idx   = objToTrack->classId;
            rect.x      = (int)(objToTrack->bbox.x);
            rect.y      = (int)(objToTrack->bbox.y);
            rect.width  = (int)(objToTrack->bbox.width);
            rect.height = (int)(objToTrack->bbox.height);
        }
        else {
            std::cout << ">>> [track]  class idx: " << ((NvDsObjectMeta*)(objToTrack->pPreservedData))->class_id << std::endl;
            objToTrack->confidence  = confidence;
            objToTrack->classId     = class_idx;
            objToTrack->bbox.x      = rect.x;
            objToTrack->bbox.y      = rect.y;
            objToTrack->bbox.width  = rect.width;
            objToTrack->bbox.height = rect.height;
        }

        /* tracking */
        NanoTrackMap.at(frame->streamID)->update(cv_frame, rect);
        confidence = NanoTrackMap.at(frame->streamID)->tracking_score;

        /* output */
        trackedObj->confidence                     = confidence;
        trackedObj->classId                        = (uint16_t)class_idx;
        trackedObj->trackingId                     = 0;

        trackedObj->bbox.x                         = (float)rect.x;
        trackedObj->bbox.y                         = (float)rect.y;
        trackedObj->bbox.width                     = (float)rect.width;
        trackedObj->bbox.height                    = (float)rect.height;

        // trackedObj->age                            = frame->frameNum;
        trackedObj->age = 1;
        trackedObj->associatedObjectIn             = objToTrack;
        trackedObj->associatedObjectIn->doTracking = true;

        trackedObjList->streamID     = frame->streamID;
        trackedObjList->frameNum     = frame->frameNum;
        trackedObjList->valid        = true;
        trackedObjList->list         = trackedObj;
        trackedObjList->numFilled    = 1;
        trackedObjList->numAllocated = 1;
    }

    return NvMOTStatus_OK;
}

NvMOTStatus NvMOTContext::processFramePast(const NvMOTProcessParams *params,
                                           NvDsPastFrameObjBatch *pPastFrameObjectsBatch) {
    return NvMOTStatus_OK;
}

NvMOTStatus NvMOTContext::removeStream(const NvMOTStreamId streamIdMask) {
    if (NanoTrackMap.find(streamIdMask) != NanoTrackMap.end()){
        std::cout << "Removing tracker for stream: " << streamIdMask << std::endl;
        NanoTrackMap.erase(streamIdMask);
    }
    return NvMOTStatus_OK;
}

The custom tracker is for SOT (single-object tracking), e.g. NanoTrack: a TensorRT backbone extracts features from the instance region and the template region, a TensorRT neck/head fuses the features, and the tracker finally produces a score and a rect for the next frame. Like DCF, it can therefore produce a rect without running detection.
When I set interval > 0 in the yolov5 detection config, I got this message:

>>> [detect]  class idx: 4
>>> [track]  class idx: 4
gstnvtracker: obj 0 Class mismatch! 0 -> 4
>>> [track]  class idx: 0
gstnvtracker: obj 0 Class mismatch! 0 -> 4
... (the two lines above repeat for every skipped frame until the next detection) ...
>>> [detect]  class idx: 4
>>> [track]  class idx: 4
>>> [track]  class idx: 4
>>> [track]  class idx: 4
>>> [track]  class idx: 0

I set interval=30; the class idx printed is ((NvDsObjectMeta*)(objToTrack->pPreservedData))->class_id from the context code above. Debugging in DeepStream 6.3 under deepstream/sources/gst-plugins/gst-nvtracker/, I found the function NvTrackerProc::fillMOTFrame in nvtracker_proc.cpp at line 714:

void NvTrackerProc::fillMOTFrame(SurfaceStreamId ssId,
                 const ProcParams& procParams,
                 const NvDsFrameMeta& frameMeta,
                 NvMOTFrame& motFrame,
                 NvMOTTrackedObjList& trackedObjList)
{
  uint32_t i = 0;
  NvMOTObjToTrackList *pObjList = &motFrame.objectsIn;

  NvBufSurfaceParams *pInputBuf = &procParams.input.pSurfaceBatch->surfaceList[frameMeta.batch_id];
  float scaleWidth = ((float)m_Config.trackerWidth/pInputBuf->width);
  float scaleHeight = ((float)m_Config.trackerHeight/pInputBuf->height);

  pObjList->numFilled = 0;
  NvDsObjectMetaList *l = NULL;
  NvDsObjectMeta *objectMeta = NULL;
  NvMOTObjToTrack *pObjs = pObjList->list;

  for (i = 0, l = frameMeta.obj_meta_list;
     i < pObjList->numAllocated && l != NULL;
     i++, l = l->next)
  {
    objectMeta = (NvDsObjectMeta *)(l->data);
    NvOSD_RectParams *rectParams = &objectMeta->rect_params;

    pObjs[i].bbox.x = rectParams->left * scaleWidth;
    pObjs[i].bbox.y = rectParams->top * scaleHeight;
    pObjs[i].bbox.width = rectParams->width * scaleWidth;
    pObjs[i].bbox.height = rectParams->height * scaleHeight;
    pObjs[i].classId = objectMeta->class_id;
    pObjs[i].confidence = objectMeta->confidence;
    pObjs[i].doTracking = true;
    pObjs[i].pPreservedData = objectMeta;
    pObjList->numFilled++;
 
   ... ...

In this function, the plugin copies class_id from frameMeta into motFrame, which is then passed as the params argument of NvMOTContext::processFrame. So why do I get the Class mismatch error?

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

You have customized the low-level tracker and the tracker plugin code. Can you help to check whether the customized low-level tracker returned the right class id 4?

Yes, the custom low-level tracker returns the right class id 4:

# func: NvMOTStatus NvMOTContext::processFrame
NvMOTStatus NvMOTContext::processFrame(const NvMOTProcessParams *params, NvMOTTrackedObjBatch *pTrackedObjectsBatch) {
    for (uint streamIdx = 0; streamIdx < pTrackedObjectsBatch->numFilled; streamIdx++){
        NvMOTTrackedObjList   *trackedObjList = &pTrackedObjectsBatch->list[streamIdx];
        NvMOTFrame            *frame          = &params->frameList[streamIdx];

        /* get tracker per stream */
        if (NanoTrackMap.find(frame->streamID) == NanoTrackMap.end()) {
            NanoTrackMap[frame->streamID] = std::make_shared<TrackerNanoTRT>(custom_config_file);
        }
        
        /* get input frame */
        NvBufSurfaceParams *surfaceParams = frame->bufferList[0];
        cv_frame = cv::Mat(surfaceParams->height, surfaceParams->width, CV_8UC4, (char*)surfaceParams->dataPtr);
        cv::cvtColor(cv_frame, cv_frame, cv::COLOR_RGBA2BGR);

        /* get input rect */
        if (frame->objectsIn.numFilled) {
            objToTrack = &frame->objectsIn.list[0];
            std::cout << ">>> [detect]  class idx: " << ((NvDsObjectMeta*)(objToTrack->pPreservedData))->class_id << std::endl;
            confidence  = objToTrack->confidence;
            class_idx   = objToTrack->classId;
            rect.x      = (int)(objToTrack->bbox.x);
            rect.y      = (int)(objToTrack->bbox.y);
            rect.width  = (int)(objToTrack->bbox.width);
            rect.height = (int)(objToTrack->bbox.height);
        }
        else {
            std::cout << ">>> [track]  class idx: " << ((NvDsObjectMeta*)(objToTrack->pPreservedData))->class_id << std::endl;
            objToTrack->confidence  = confidence;
            objToTrack->classId     = class_idx;
            objToTrack->bbox.x      = rect.x;
            objToTrack->bbox.y      = rect.y;
            objToTrack->bbox.width  = rect.width;
            objToTrack->bbox.height = rect.height;
        }

        /* tracking */
        NanoTrackMap.at(frame->streamID)->update(cv_frame, rect);
        confidence = NanoTrackMap.at(frame->streamID)->tracking_score;

        /* output */
        trackedObj->confidence                     = confidence;
        trackedObj->classId                        = (uint16_t)class_idx;
        trackedObj->trackingId                     = 0;

        trackedObj->bbox.x                         = (float)rect.x;
        trackedObj->bbox.y                         = (float)rect.y;
        trackedObj->bbox.width                     = (float)rect.width;
        trackedObj->bbox.height                    = (float)rect.height;

        // trackedObj->age                            = frame->frameNum;
        trackedObj->age = 1;
        trackedObj->associatedObjectIn             = objToTrack;
        trackedObj->associatedObjectIn->doTracking = true;

        trackedObjList->streamID     = frame->streamID;
        trackedObjList->frameNum     = frame->frameNum;
        trackedObjList->valid        = true;
        trackedObjList->list         = trackedObj;
        trackedObjList->numFilled    = 1;
        trackedObjList->numAllocated = 1;

        std::cout << "track cls_idx: " << trackedObj->classId << std::endl;
    }

    return NvMOTStatus_OK;
}

# message 
Using file: config/config_yolov5_nanotrack.yml
>>> init resource.
>>> create elements.
>>> set element attribute.
>>> link elements.
>>> add bus watch.
Running...

(deepstream-yolov5-dcf:99982): GStreamer-WARNING **: 16:19:17.032: (../gst/gstinfo.c:556):gst_debug_log_valist: runtime check failed: (object == NULL || G_IS_OBJECT (object))
gstnvtracker: Loading low-level lib at /home/yang/Codes/Git/nanotracktrt/build/libNanoTrack.so
[NanoTrack Initialized]
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
0:00:05.360954490 99982 0x556d3fc2dd90 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<yolov5-infer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/yang/Codes/Git/deepstream-yolov5-track/build/infra.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT boxes           25200x4         
2   OUTPUT kFLOAT scores          25200x1         
3   OUTPUT kFLOAT classes         25200x1         

0:00:05.423019752 99982 0x556d3fc2dd90 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<yolov5-infer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/yang/Codes/Git/deepstream-yolov5-track/build/infra.engine
0:00:05.425821502 99982 0x556d3fc2dd90 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<yolov5-infer> [UID 1]: Load new model:/home/yang/Codes/Git/deepstream-yolov5-track/config/yolov5.yml sucessfully
load filename from: /home/yang/Codes/Experiment/nano-track-trt/weights/nanotrack_backbone.engine
INFO: The logger passed into createInferRuntime differs from one already provided for an existing builder, runtime, or refitter. Uses of the global logger, returned by nvinfer1::getLogger(), will return the existing value.
INFO: Loaded engine size: 3 MiB
INFO: [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +1, now: CPU 0, GPU 67 (MiB)
INFO: [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +2, now: CPU 0, GPU 69 (MiB)
WARNING: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
deserialize done!
load filename from: /home/yang/Codes/Experiment/nano-track-trt/weights/nanotrack_head.engine
INFO: The logger passed into createInferRuntime differs from one already provided for an existing builder, runtime, or refitter. Uses of the global logger, returned by nvinfer1::getLogger(), will return the existing value.
INFO: Loaded engine size: 1 MiB
INFO: [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +1, now: CPU 0, GPU 70 (MiB)
INFO: [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 0, GPU 70 (MiB)
WARNING: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
deserialize done!
>>> [detect]  class idx: 4
track cls_idx: 4
>>> [track]  class idx: 4
track cls_idx: 4
gstnvtracker: obj 0 Class mismatch! 0 -> 4
>>> [track]  class idx: 0
track cls_idx: 4
gstnvtracker: obj 0 Class mismatch! 0 -> 4
... (the three lines above repeat for every skipped frame until the next detection) ...
>>> [detect]  class idx: 4
track cls_idx: 4
>>> [track]  class idx: 4
track cls_idx: 4
>>> [track]  class idx: 4
track cls_idx: 4
>>> [track]  class idx: 4
track cls_idx: 4
>>> [track]  class idx: 0
track cls_idx: 4
gstnvtracker: obj 0 Class mismatch! 0 -> 4
>>> [track]  class idx: 0
track cls_idx: 4
gstnvtracker: obj 0 Class mismatch! 0 -> 4
>>> [track]  class idx: 0
track cls_idx: 4
gstnvtracker: obj 0 Class mismatch! 0 -> 4

Does the function NvTrackerProc::fillMOTFrame in deepstream/sources/gst-plugins/gst-nvtracker/nvtracker_proc.cpp fill motFrame for the nvdsosd plugin?

I tried yolov5 + DCF: with interval=30 I get the right result, but with my custom tracker I get the Class mismatch error. The configuration file is the same except for the DCF-specific part.

It is for the low-level tracker.

Please implement the custom tracker following the guide in: Gst-nvtracker — DeepStream 6.3 Release documentation

The output object attribute data NvMOTTrackedObj contains a pointer to the detector object (provided in the input) that is associated with a tracked object, stored in associatedObjectIn. You must set this to the associated input object only for the frame where the input object is passed in. For a pipeline with PGIE interval=1, for example:

Frame 0: NvMOTObjToTrack X is passed in. The tracker assigns it ID 1, and the output object’s associatedObjectIn points to X.
Frame 1: Inference is skipped, so there is no input object from detector to be associated with. The tracker finds Object 1, and the output object’s associatedObjectIn points to NULL.
Frame 2: NvMOTObjToTrack Y is passed in. The tracker identifies it as Object 1. The output Object 1 has associatedObjectIn pointing to Y.

Like this? I keep an objToTrack struct for associatedObjectIn: when detection runs, objToTrack is updated from objectsIn in NvMOTFrame; otherwise I update its "confidence", "classId", and "bbox" based on the tracker result from the previous frame.

Please check the parameter below, and set associatedObjectIn to NULL if the inference is skipped.

pObjList->detectionDone = frameMeta.bInferDone ? true : false;

If the inference is skipped, I set associatedObjectIn to NULL and set pObjList->detectionDone to true, and it works fine. Thank you!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.