**• Jetson Nano **
**• DeepStream 5.0 (GA) **
**• JetPack 4.4 **
**• TensorRT 7.1.3 **
**• question/bug ? **
Is it possible to find out more precisely how the Line Crossing feature works in Gst-nvdsanalytics?
I tried to deploy it under real conditions, over two days of operation (movements of about 300 people indoors), and …
Because despite stable tracking and a relatively short pgie interval (3) on the Jetson Nano with a single input stream, it very often happened that a line crossing simply was not counted, even though the OSD drew the tracking ID and the bounding box of the detected person as it crossed the line.
When I evaluated the full-day statistics, the deviation from the actual count was almost 40%, which is quite bad, especially since the detector itself behaved quite stably.
I would be interested in, for example: how the quality of the evaluation is influenced by the settings of the “direction” vector (e.g. its length, the allowed orientation deviation, the behavior of the “mode” parameter), by the FPS (in this setup it was 15/3 = 5 detections per second, so tracker drift was negligible), and whether another tracker configuration would work better (I used KLT, with satisfactory performance).
SW configs (partial)
just one RTSP stream (Full HD, 15 fps)
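For context, the line-crossing settings live in the Gst-nvdsanalytics config file. A minimal illustrative fragment might look like the following (the coordinates and the “Exit” label are hypothetical; per the DeepStream documentation, the first two points define the direction vector and the last two points define the line itself):

```ini
[line-crossing-stream-0]
enable=1
# direction point 1; direction point 2; line endpoint 1; line endpoint 2
line-crossing-Exit=789;672;1084;900;851;773;1203;732
class-id=0
# 0 = count crossings of the drawn segment only, 1 = of the extended line
extended=0
# direction check strictness: loose / balanced / strict
mode=loose
```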
What is your question?
The nvinfer interval will impact the inference result FPS, so it will impact the tracking and analytics results. Have you tried different mode values for line crossing / direction detection? There are ‘loose’, ‘strict’ and ‘balance’ to choose from.
Yes, I did try those parameters. But the problem I have is that even when the parameter is “loose” and the subject is moving roughly in the direction defined for the line crossing, there are situations (a double-digit percentage overall) where no line crossing is counted. I also tried several interval settings (as low as possible); it was better, but still not good.
So I’m looking for an explanation of how the LC functionality behaves and how it can be influenced by the direction or other parameters, because there is very little about it in the documentation.
Are you aware that the current gst-nvdsanalytics algorithm only counts a line crossing when the middle point of the bounding box’s bottom edge crosses the line? Do you count line crossings with the same criterion?
Can you tell us how you computed the statistics and the line-crossing deviation? What is your criterion for judging whether an object has crossed the line?
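To make that criterion concrete, here is a minimal sketch (my own illustration, not the actual nvdsanalytics source) of counting a crossing when the bottom-center point of a tracked bounding box changes sides of the configured line between consecutive frames:

```python
def bottom_center(bbox):
    """Bottom-edge midpoint of an (x, y, w, h) bounding box."""
    x, y, w, h = bbox
    return (x + w / 2.0, y + h)

def side_of_line(p, a, b):
    """Sign of the cross product: which side of the line a->b point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed(prev_bbox, cur_bbox, line_a, line_b):
    """True if the bottom-center point changed sides of the line between frames."""
    s0 = side_of_line(bottom_center(prev_bbox), line_a, line_b)
    s1 = side_of_line(bottom_center(cur_bbox), line_a, line_b)
    return s0 * s1 < 0
```

For example, with a vertical line from (100, 0) to (100, 200), a box whose bottom center moves from x=85 to x=115 counts as a crossing, while one that stays on the left side does not. Note that comparing positions against the same criterion (bottom-center, not box center or top) matters when validating the plugin's counts against manual ground truth.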
Imagine the situation.
The object moves horizontally from left to right. In the center of the image is a vertical line (the crossing line), and a horizontal line represents the “direction”.
Due to the object’s speed and the frame rate, the position of the bounding box jumps between frames: at one moment it is to the left of the line, and at the next it is already to the right, beyond the crossing line. We can only shorten these jumps with the interval parameter and the precision of the tracker; if the object moves fast enough, the jumps get larger.
And now the questions.
- How does the gst-nvdsanalytics algorithm deal with this difference? Can it happen that if these “jumps” are too big, the line crossing simply will not be counted? Is there a threshold parameter that affects this?
- Is this behavior related in any way to the length of the vector that defines the “direction”?
- If the object does not move horizontally (as in our example) but at some angle, what is the boundary angle up to which the crossing is still counted?
- How do the ‘loose’, ‘strict’ and ‘balance’ parameters affect this behavior? I was convinced that ‘loose’ would always count everything, but apparently not.
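One way to reason about the “jump” question: if the crossing is decided by testing whether the segment between the previous and current bottom-center points intersects the configured line, then the size of the jump does not matter, as long as the tracker keeps the same ID across the jump. A sketch of such a segment-intersection test (again my own illustration, not the plugin’s actual code):

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """True if the object-path segment p1-p2 intersects the line segment q1-q2."""
    d1 = cross(q1, q2, p1)  # side of the line for the previous position
    d2 = cross(q1, q2, p2)  # side of the line for the current position
    d3 = cross(p1, p2, q1)  # side of the path for one line endpoint
    d4 = cross(p1, p2, q2)  # side of the path for the other line endpoint
    return (d1 * d2 < 0) and (d3 * d4 < 0)

# Even a large inter-frame jump, e.g. the bottom-center moving from
# (20, 70) to (180, 70) in a single frame, still intersects a vertical
# line from (100, 0) to (100, 200).
```

Under this model, missed counts would come from ID switches or missed detections at the decisive frame rather than from the jump distance itself, which is consistent with the staff answer below about tracking-ID changes.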
• Can it happen that if these “jumps” are too big, the line crossing simply will not be counted? Is there a threshold parameter that affects this?
-> Due to jumps, the tracking ID might change; if that happens, we might miss the crossing. If possible, can you share a video of the failure? Does reducing the tracking distance improve the situation? Also, are there occlusions? We would also like to know the camera placement, as it will affect the results.
• Is this behavior related in any way to the length of the vector that defines the “direction”?
-> The direction length is only used to derive the direction vector; it shouldn’t be affected by speed.
• If the object does not move horizontally (as in our example) but at some angle, what is the boundary angle up to which the crossing is still counted?
-> For ‘loose’ it should count all crossings; for ‘strict’ the cosine between the object direction and the configured direction should be at least 0.8; for ‘balanced’ it is 0.5.
• How do the ‘loose’, ‘strict’ and ‘balance’ parameters affect this behavior? I was convinced that ‘loose’ would always count everything, but apparently not.
-> Yes, ‘loose’ should count everything. We would need additional inputs, i.e. the camera placement and a sample video. Did you try changing the tracker to NvDCF?
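The cosine thresholds described in the replies above can be sketched as follows. The 0.8 / 0.5 values are taken from the reply; reading them as minimum cosine similarities is my interpretation, not confirmed plugin behavior:

```python
import math

# Assumed minimum cosine similarity between the object's motion vector and
# the configured direction vector, per mode (values from the reply above).
MODE_THRESHOLD = {"loose": -1.0, "balanced": 0.5, "strict": 0.8}

def cosine(u, v):
    """Cosine of the angle between two 2D vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    norm = math.hypot(*u) * math.hypot(*v)
    return dot / norm

def direction_ok(motion, direction, mode):
    """Count the crossing only if the motion aligns with the configured
    direction. The direction vector's length cancels out in the
    normalization, so only its orientation matters."""
    return cosine(motion, direction) >= MODE_THRESHOLD[mode]
```

Under this reading, ‘strict’ requires the motion to stay within about 37° (arccos 0.8) of the configured direction, ‘balanced’ within 60°, and ‘loose’ accepts any angle, which matches the reply that the direction vector’s length is irrelevant.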
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
It would be best to provide your test video, model, and configuration files so that we can get more detailed information about this issue.