I am working with DeepStream 6.2, using the Python bindings, on a Tesla T4.
I noticed a few issues with how the nvdsanalytics plugin computes directions. I then did some testing and would like to share my findings here.
Method:
I artificially created a video with a black background and a car moving in different directions. The directions where the car is moving can be express as different angles: from 0 degrees to 345 degrees with steps of 15 degrees: 0, 15, 30, 45, 60, …, 315, 330, 345. Have a look at the video, it’s easier to see it than to explain it.
I added three different directions to the nvdsanalytics plugin (see video). The directions are:
From left to right (0 degrees)
Diagonally, from the bottom-left corner to the top-right corner (45 degrees)
From the bottom to the top of the screen (90 degrees)
Note: the video also defines some regions of interest. They are as large as the entire frame; please disregard them and focus only on the directions.
The attachment includes both video and nvdsanalytics configuration.
The following are the issues that I found.
There is no difference between strict, balanced, and loose modes
I processed the same video, using the same settings for nvdsanalytics except for the mode, and I noticed no difference in how the directions are computed when changing the mode.
A direction is assigned even when the object is just slightly moving in that direction
You can see this in the video: even when the car is moving at a steep angle (60 degrees) to the left-to-right direction, it is still assigned the left-to-right direction.
It seems that multiple directions can be assigned to an object (looking at the video), but the Python wrapper provides only one direction
In the video, the nvdsosd plugin prints multiple directions for the same object. However, the NvDsAnalyticsObjInfo instance in Python only provides a single direction.
Can you help figure out a way to fix the issues?
Is there a way to access the direction computed by nvdsanalytics in the form of a vector or angle? Accessing the angle would provide a more flexible solution.
As the nvdsanalytics documentation says, “loose”, “balanced”, and “strict” are three direction-tolerance levels. Could you clarify this point: which clip should have produced a different result?
Because the object is moving in that direction, it is reasonable for it to be assigned that direction.
This is a known bug: when an object has two directions, the object’s display_text contains all the direction information, but NvDsAnalyticsObjInfo’s dirStatus contains only one direction. The nvdsanalytics plugin is currently not open source, but we have fixed this bug internally. Please follow our version updates.
Could you share your use scenario? Do you mean an angle compared with all the direction lines?
Thank you for your reply. To answer your questions:
If you open the zip file attached to my initial message, you will find a folder called mode. It contains 3 videos showing the output produced by the nvdsosd plugin for the 3 different modes (loose, balanced, strict), run on the same video with the same directions. You will see there is no difference between the modes.
While I agree with you, I think there should be differences between loose, balanced, and strict. Also, I don’t think many people would find it useful to assign a direction to an object whose trajectory forms an 89-degree angle with that direction; I assume that is why NVIDIA engineers came up with the 3 modes. However, as stated above, the 3 modes do not work: they all give me the same results.
I will wait for the next update.
If you assigned an angle to every object, there would be no need to define 3 different modes (strict, balanced, loose). Consider the angles in the attached image. If an object is moving left to right, it will have an angle of 0 degrees. If the object is moving left to right but also slightly towards the top, it will have an angle of about 15 degrees. As a user, I can then decide how to set my threshold and define whether, for me, moving along a 15-degree line still counts as the left-to-right direction or not.
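To illustrate the idea, here is a minimal sketch in plain Python (this is not the nvdsanalytics API; the function names and the tolerance values are my own assumptions) of how a user-side angle threshold could replace the three fixed modes:

```python
import math

def motion_angle(p0, p1):
    """Angle of the motion vector p0 -> p1 in degrees [0, 360).
    Image coordinates have y growing downward, so dy is flipped
    to get conventional math angles (counter-clockwise from +x)."""
    dx = p1[0] - p0[0]
    dy = p0[1] - p1[1]
    return math.degrees(math.atan2(dy, dx)) % 360

def matches_direction(angle, direction_angle, tolerance_deg):
    """True if angle is within +/- tolerance_deg of direction_angle,
    handling the wrap-around at 0/360 degrees."""
    diff = abs((angle - direction_angle + 180) % 360 - 180)
    return diff <= tolerance_deg

# A trajectory at ~15 degrees counts as left-to-right (0 degrees)
# under a 20-degree tolerance, but a 60-degree trajectory does not.
```

With an angle exposed this way, each user could pick their own tolerance instead of relying on the three fixed strict/balanced/loose presets.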
I would expect it at many points. For example, between 1:46 and 2:03 the car is reported as moving in direction LR (left to right, as shown by the arrow in the video). Since the car is only slightly moving left to right, I would expect to see different results between the 3 modes.
Sorry for the late reply. The three modes have different checking thresholds, with “strict” being the strictest. If a direction is reported in strict mode, the other two modes should also report that direction.
From the log between 1:46 and 2:03, did you always see a left-to-right direction? Or could you provide the input video that contains only a car? Thanks! We will have a try.
I understand that a direction detected in strict mode will also be detected in the other two modes. However, for the time between 1:46 and 2:03 I would expect the direction to be captured by the loose mode but not by the strict mode.
Attached the input video that you can use for testing. Let me know if you need other information. video_directions.h264 (478.7 KB)
Sorry for the late reply. The direction check supports only one mode; we will update the documentation. If you want to know the angle, you can compute it from the moving direction and the checking direction.
Hey @fanzh, as of now there is not really a way to know the angle, since I could easily create 5 different lines that would all be assigned to an object at the same time. Or, to be more precise, only one of them would be assigned (I am not sure which one).
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
In theory, you can take the center points of two consecutive bboxes; the two points form a line, and you can compute the angle between that line and the crossing line.
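That computation can be sketched in plain Python as follows (the bbox tuple layout (left, top, width, height) and the function names are my assumptions for illustration, not a DeepStream API):

```python
import math

def bbox_center(left, top, width, height):
    """Center point of a bounding box given as (left, top, width, height)."""
    return (left + width / 2.0, top + height / 2.0)

def motion_vs_direction_angle(prev_box, cur_box, direction):
    """Angle in degrees (0..180) between the motion vector formed by the
    centers of two consecutive bboxes and a configured direction vector.
    Both vectors are in the same (image) coordinate system, so the y-down
    convention cancels out."""
    x0, y0 = bbox_center(*prev_box)
    x1, y1 = bbox_center(*cur_box)
    mx, my = x1 - x0, y1 - y0
    dx, dy = direction
    dot = mx * dx + my * dy
    norm = math.hypot(mx, my) * math.hypot(dx, dy)
    cos_theta = max(-1.0, min(1.0, dot / norm))  # clamp rounding error
    return math.degrees(math.acos(cos_theta))
```

For example, a box that moves horizontally between two frames forms a 0-degree angle with a (1, 0) direction vector and a 90-degree angle with a vertical one, which is the angle-based check discussed earlier in the thread.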