If I’m getting false detections of people when the camera is looking at a chair or trees, and the confidence values are around 0.935… what is the best way to handle that?
Increasing the confidence threshold is not practical as the model is already very certain that it’s a person.
May I know whether the false alarms happen frequently or only occasionally?
If they occur very often, it’s recommended to first check whether the color format is set correctly.
I have ‘model-color-format=0’ set - which is just the default provided in the deepstream sample config files. Are you suggesting that this should be changed?
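For reference, this is how `model-color-format` appears in a pgie nvinfer config file (values per the Gst-nvinfer plugin documentation; the correct value depends on the color order the model was trained with, so 0 is only right if the network expects RGB input):

```
[property]
# model-color-format: 0 = RGB, 1 = BGR, 2 = GRAY
model-color-format=0
```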
I do notice that on a bright sunny day I rarely get false detections, but when it’s a bit dull (overcast conditions, or around dawn/dusk when it’s still very much light but a little darker than the main part of the day) I get lots of false detections. It will continually see people and bikes with very high confidence in trees or on an empty table and chairs.
Once it gets dark the IR turns on on my cameras and the false detections go away. The model behaves quite well under IR: no false detections, though accuracy isn’t quite as good as in daytime. I’m surprised it works at all under IR, to be honest. ;-)
Would fine-tuning the model in slightly low-light conditions help ?
I have attached a sample image showing a detection of a bicycle, yet it’s just our backyard table.
In this image you can also see some trees toward the top-center; it often sees people in there with 97%+ confidence. As you can see, it’s definitely not dark, but not bright and sunny either.
The pic looks low-res only because I took it as a screenshot off my mobile phone. I have built a system where video clips are recorded and pushed to the cloud on detections, plus a mobile app to view them.
An update… It’s bright and sunny at the moment and I’m getting a bicycle detected in the view I attached above, just to the right of that bounding box, with 98.59% confidence, so there’s just no way to filter these out.
I find that with this model almost every detection is in the 90%+ range, and only rarely do I see lower-confidence detections.
Any ideas on the best way to avoid these high-confidence false detections? Could fine-tuning the model help here, or is it just something you have to live with for a model that’s been pruned down to a minimum for performance?
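One workaround I’ve been considering (my own sketch, not anything built into DeepStream) is to exploit the fact that these false positives tend to flicker in and out from frame to frame, while a real person or bike persists in roughly the same spot. The idea: only report a box once it has overlapped a detection in several consecutive frames. A minimal, self-contained Python sketch of the idea, which you would adapt to run over the object metadata in a pad probe:

```python
# Hypothetical post-processing filter: suppress one-off high-confidence
# detections by requiring a box to persist across min_hits consecutive
# frames. Names and thresholds here are illustrative only.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

class PersistenceFilter:
    """Confirm a detection only after it appears in `min_hits`
    consecutive frames at roughly the same location."""

    def __init__(self, min_hits=3, iou_thresh=0.3):
        self.min_hits = min_hits
        self.iou_thresh = iou_thresh
        self.tracks = []  # list of [box, consecutive_hit_count]

    def update(self, boxes):
        """Feed one frame's detections; return only confirmed boxes."""
        confirmed = []
        new_tracks = []
        for box in boxes:
            hits = 1
            for tbox, tcount in self.tracks:
                if iou(box, tbox) >= self.iou_thresh:
                    hits = tcount + 1
                    break
            new_tracks.append([box, hits])
            if hits >= self.min_hits:
                confirmed.append(box)
        self.tracks = new_tracks  # tracks with no match this frame expire
        return confirmed
```

For example, a box seen in three consecutive frames is reported on the third frame, while a detection that appears for a single frame is dropped. It won’t help with the trees, since those false positives seem to persist, but it should cut down the flickering ones.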
I will look into TLT. For reference, which model and training dataset were used for the standard DeepStream resnet10? How can I build on that one with TLT?
Maybe for better accuracy I could move to a resnet18, but it would be good to have it trained in the same fashion as the standard DeepStream model.
There is another discussion on here about the confidence values, and a fix was provided to the pgie code. Previously the library behind pgie was hard-coding the confidence to 0. With the patch it fills in the confidence, but it only works when you use the DBSCAN method for the bounding boxes.
Since I made this patch I now get confidence/threshold values.
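For anyone following along, this is how DBSCAN clustering is selected in the nvinfer config. In recent DeepStream releases it is `cluster-mode=1` under `[property]` (older releases used `enable-dbscan=1`), with the per-class clustering parameters under the `[class-attrs-...]` groups; the values below are illustrative only:

```
[property]
# cluster-mode: 0 = groupRectangles, 1 = DBSCAN, 2 = NMS,
# 3 = DBSCAN + NMS hybrid, 4 = none
cluster-mode=1

[class-attrs-all]
# Illustrative values; tune per deployment.
pre-cluster-threshold=0.4
eps=0.7
minBoxes=3
```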
So now you are saying that they are garbage values? Really? Why was that patch provided, then? It does not make sense to me.
Tuning the threshold values blind, without seeing the actual detected values, is too difficult.
We just want to be able to fine-tune the resnet10 model. If you could describe the training set, we could at least attempt to train with TLT.
Was the DeepStream resnet10 Caffe model trained on the KITTI dataset? As you would know, I can’t just retrain the resnet10 DeepStream model, as it does not work with TLT. I will have to start training from scratch on something like KITTI.