Can't get Single-View 3D Tracking (SV3DT) to work

I am having trouble getting SV3DT to work. When I add ObjectModelProjection and the corresponding camera.yml files to my tracker configuration, it no longer shows any tracking results.

The post at Having issues with Single-View 3D Tracking (SV3DT) is similar and was closed without a real resolution.

I’m trying to get this same scenario to work, but I can’t find a single configuration file on the web that references ObjectModelProjection, or any example of a complete, working tracker config file, camera.yml, and code to set up a probe and read the foot location. Is a sample or the mentioned blog post coming soon? Can you share some parts of it now?

The post above pretty much mirrors my setup, but I also added outputVisibility, outputFootLocation, and outputConvexHull to my tracker config file:

  outputVisibility: 1
  outputFootLocation: 1
  outputConvexHull: 1
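As I understand the 6.4 docs, those keys sit under the ObjectModelProjection section. For completeness, here is roughly how that section and the camera model file are laid out in my setup — the path, matrix values, and model dimensions below are placeholders, and the key names are my best reading of the documentation, not a verified working config:

```
# --- tracker config (excerpt) ---
ObjectModelProjection:
  cameraModelFilepath:
    - /path/to/camInfo-01.yml   # one camera model file per stream
  outputVisibility: 1
  outputFootLocation: 1
  outputConvexHull: 1

# --- camInfo-01.yml (placeholder values, not a real calibration) ---
projectionMatrix_3x4: [1000.0,    0.0, 0.0, 100.0,
                          0.0, 1000.0, 0.0, 200.0,
                          0.0,    0.0, 1.0,  10.0]
modelInfo:
  height: 1.7   # assumed target height in world units (placeholder)
  radius: 0.3   # assumed cylinder radius (placeholder)
```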

My pipeline still runs fine and I get frames from the probe I set up on the tracker's src pad, but there is nothing in the object meta list. I expected at least the detections (detector_bbox) to keep flowing through, but with these changes to a previously working project the object list is empty.

• Hardware Platform - Jetson Orin AGX
• DeepStream Version 6.4
• JetPack Version (valid for Jetson only) - 6.0 DP

I will check internally and report back here. Thanks.

We will release a 3D tracking sample and step-by-step guide soon. Thanks.

Do you have an ETA? This is the same response that was given to the prior post over 2 months ago. Are we talking months or days?

Any updates on this? This feature was the first item highlighted in the "what's new" for DeepStream 6.4 over 3 months ago, and I've seen no sign of any customer getting it to work. It would be better to hold a feature back until it is ready than to waste our time upgrading for features that lack the documentation, samples, and support needed to get them working.


Sorry that more detailed info was missing. We were in the middle of preparing a blog post and doc update but got delayed by other higher-priority work. They are in the pipeline and will be available soon, so please stay tuned. Below are some more details we can provide for now. To make the transition easy for users who are familiar with OpenCV, we use a similar approach, as described below:

The 3x4 camera projection matrix, often simply called the camera matrix, converts a 3D world point to a 2D point on the camera image plane based on a pinhole camera model. More detailed and general information about the camera matrix can be found in various sources on computer vision geometry and camera calibration, including OpenCV's documentation (OpenCV: Camera Calibration and 3D Reconstruction).

For projectionMatrix_3x4 in a camera model file (e.g., camInfo-01.yml), the principal point (i.e., (Cx, Cy)) in the camera matrix is assumed to be at (0, 0) in image coordinates, while the actual optical center lies at the image center (i.e., (img_width/2, img_height/2)). Thus, to move the origin to the top-left of the camera image (i.e., pixel coordinates), SV3DT internally adds (img_width/2, img_height/2) after transforming a point with the camera matrix provided in projectionMatrix_3x4.
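As a sanity check, the convention above can be reproduced in a few lines of plain Python. The matrix values here are made up for illustration, not a real calibration:

```python
# Hypothetical 3x4 camera matrix P (world -> image); a real
# projectionMatrix_3x4 would come from your camera calibration.
P = [
    [1000.0,    0.0, 0.0, 100.0],
    [   0.0, 1000.0, 0.0, 200.0],
    [   0.0,    0.0, 1.0,  10.0],
]
img_width, img_height = 1920, 1080

def project(pt3d):
    """Apply P to a homogeneous world point, divide by the last component,
    then shift the origin from the image center to the top-left corner,
    as SV3DT does internally for projectionMatrix_3x4."""
    X = pt3d + [1.0]  # homogeneous coordinates
    x, y, w = (sum(P[r][c] * X[c] for c in range(4)) for r in range(3))
    return x / w + img_width / 2, y / w + img_height / 2

u, v = project([1.0, 2.0, 5.0])
print(u, v)  # pixel coordinates with the origin at the image's top-left
```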

ETA? 7.0 seems to have been announced before you have shown working 6.4 features. Should we be skipping 6.4 and going straight to 7.0? When will 7.0 be available?

Any updates on working sample and blog post?

Yes, please check the blog here: Mitigating Occlusions in Visual Perception Using Single-View 3D Tracking in NVIDIA DeepStream | NVIDIA Technical Blog
And the sample here: deepstream_reference_apps/deepstream-tracker-3d at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.