1- I want to know how the DeepStream SDK can be used efficiently for a custom application. I know we can train models with TLT on a custom dataset and then deploy them on DeepStream, and that gives me good results. But just showing the results on screen isn't enough for a business use case; for example, I may want to crop an ROI and pass it into another model. How flexible is the SDK in this respect?
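To illustrate what I mean, here is a rough sketch outside of DeepStream (plain NumPy, single frame); `second_model` is just a hypothetical placeholder for whatever secondary network would consume the crop:

```python
import numpy as np

def crop_roi(frame, left, top, width, height):
    """Crop a detected region (e.g. a bbox reported by the primary detector)."""
    return frame[top:top + height, left:left + width].copy()

def second_model(roi):
    """Hypothetical secondary model; here it just reports the crop's shape."""
    return roi.shape

# A fake 1080p RGB frame and a fake detection bbox.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
roi = crop_roi(frame, left=100, top=200, width=64, height=128)
print(second_model(roi))  # (128, 64, 3)
```

This is the kind of crop-then-reinfer flow I would want inside a DeepStream pipeline.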
2- In my opinion, DeepStream is not efficient for custom business logic. Is it possible to integrate this SDK into your own project? For example, if we want the system to raise an alarm whenever it sees an unknown object, is that possible? My impression is that the DeepStream SDK only demonstrates the capability of the device and is not extensible to a custom project. Is that right?
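As a concrete example of the alarm logic I have in mind (pure Python, nothing DeepStream-specific; the class names are made up):

```python
KNOWN_CLASSES = {"person", "car", "bicycle"}  # classes the business expects

def check_detections(labels):
    """Return the set of unknown labels; a non-empty result should trigger an alarm."""
    unknown = {label for label in labels if label not in KNOWN_CLASSES}
    if unknown:
        print(f"ALARM: unknown objects detected: {sorted(unknown)}")
    return unknown

print(check_detections(["person", "drone", "car"]))  # {'drone'}
```

The question is whether I can hook logic like this into the detector's output inside a DeepStream pipeline.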
3- Suppose I trained a face-detection model with TLT and deployed it on DeepStream. If I want the system to save a crop of each detected person somewhere (e.g. to disk) whenever it sees people, is that possible in DeepStream?
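Something along these lines is what I would like to attach to the detector's output (a standalone NumPy sketch; in a real pipeline the crops would presumably be encoded as JPEG rather than saved as raw arrays):

```python
import tempfile
from pathlib import Path

import numpy as np

def save_face_crops(frame, detections, out_dir):
    """Save one file per detection; `detections` is a list of (left, top, w, h)."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, (left, top, w, h) in enumerate(detections):
        crop = frame[top:top + h, left:left + w]
        path = out_dir / f"face_{i}.npy"
        np.save(path, crop)  # placeholder; a real app might write JPEGs instead
        paths.append(path)
    return paths

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
paths = save_face_crops(frame, [(10, 20, 50, 60)], tempfile.mkdtemp())
print([p.name for p in paths])  # ['face_0.npy']
```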
4- In the DeepStream Python apps, I only see an SSD parser as a detector example. Is that the only supported model? If I want to deploy a detectnet_v2 detector, is that possible with the Python samples? And if so, will it work with the SSD parser sample?
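My (possibly wrong) understanding is that detectnet_v2 outputs a coverage grid plus a per-cell bbox tensor, so its parsing differs from SSD. A very simplified single-class sketch of what I assume that decoding looks like (the real parser also does clustering/NMS and handles multiple classes, and I am guessing at the exact offset convention):

```python
import numpy as np

STRIDE = 16  # detectnet_v2 downsamples by 16: one grid cell per 16x16 pixels

def parse_detectnet_v2(cov, bbox, threshold=0.4):
    """Simplified parser sketch for a single class.

    cov:  (grid_h, grid_w) coverage/confidence map
    bbox: (4, grid_h, grid_w) per-cell box offsets
    Returns a list of (x1, y1, x2, y2, score) in image coordinates.
    """
    boxes = []
    ys, xs = np.where(cov > threshold)
    for gy, gx in zip(ys, xs):
        cx = gx * STRIDE + STRIDE / 2  # center of this grid cell
        cy = gy * STRIDE + STRIDE / 2
        x1 = cx - bbox[0, gy, gx]
        y1 = cy - bbox[1, gy, gx]
        x2 = cx + bbox[2, gy, gx]
        y2 = cy + bbox[3, gy, gx]
        boxes.append((x1, y1, x2, y2, float(cov[gy, gx])))
    return boxes

cov = np.zeros((4, 4))
cov[1, 2] = 0.9
bbox = np.full((4, 4, 4), 8.0)
print(len(parse_detectnet_v2(cov, bbox)))  # 1
```

If parsing like this is the only thing needed, can I just replace the parsing step of the SSD sample?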
5- Is it possible to use plugins such as the tracker, decoder, etc. in custom Python applications?
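For instance, I imagine wiring DeepStream elements into a hand-written pipeline, along these lines (element names taken from the DeepStream samples; the file names and the tracker configuration are placeholders, and I have not verified this exact pipeline):

```
filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=pgie_config.txt ! nvtracker ! \
  nvdsosd ! nveglglessink
```

Can I build the same chain of elements from a custom Python application?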
6- Some plugins use the dedicated hardware of the Jetson Nano, such as the decoder, encoder, and scaler. I want to know whether the other plugins, such as the tracker, run on the CPU or also have special hardware for that purpose.