Does DeepStream support inference for any Caffe model that TensorRT supports?
List the models that are supported for detection and classification.
How do we add static text to the display? Example: time, company name, etc.
Sample code for saving images of detected objects, e.g. only cars from vehicle detection.
Can we assign DLA 0 to the primary model and DLA 1 to the secondary model?
Hi,
1. You can find some DeepStream models in /opt/nvidia/deepstream/deepstream-4.0/samples/models/.
They are Caffe-based detection and classification models.
DeepStream uses the TensorRT library for inference, so you can run DeepStream with any model fully supported by TensorRT:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html
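As a minimal sketch, a Caffe model is wired into nvinfer through the model/prototxt/label paths in its config file (the paths below point at the sample primary detector shipped with DeepStream 4.0; replace them with your own model's files):

```ini
[property]
# Caffe weights and network definition (sample ResNet-10 detector; swap in your model)
model-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.prototxt
labelfile-path=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/labels.txt
```

On the first run nvinfer builds a serialized TensorRT engine from these files, so only TensorRT-supported layers can appear in the network.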
2. We don't have sample code for saving the ROI image, but you can find relevant logic in our dsexample plugin.
It crops the image with the bounding box detected by the primary GIE:
/* Crop and scale the object */
if (get_converted_mat (dsexample,
        surface, frame_meta->batch_id, &obj_meta->rect_params,
        scale_ratio, dsexample->video_info.width,
        dsexample->video_info.height) != GST_FLOW_OK) {
  /* Error in conversion, skip processing on object */
  continue;
}
3. Yes. For example, in config_infer_primary.txt:
[property]
...
enable-dla=1
use-dla-core=0
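To split the models across both DLAs, give each model's config a different use-dla-core value (a sketch; the secondary file name below is just an example from the sample app):

```ini
# config_infer_primary.txt -- primary detector on DLA 0
[property]
enable-dla=1
use-dla-core=0

# config_infer_secondary_carcolor.txt -- secondary classifier on DLA 1
[property]
enable-dla=1
use-dla-core=1
```

Each nvinfer instance reads its own config file, so the assignment is per model.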
Thanks.
For multiple input sources, labels are missing. Is this common?
Edit: issue resolved.
Thanks
When enabling DLA, do we need to comment out the GPU settings?