Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the contents of the configuration files, the command line used, and any other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application it targets, and a description of the function.)
The model configuration file is as follows:
```
[property]
gpu-id=0
net-scale-factor=1
offsets=124;117;104
tlt-model-key=nvidia_tlt
tlt-encoded-model=final_model.etlt
labelfile-path=labels.txt
model-engine-file=final_model.etlt_b8_gpu0_fp16.engine
infer-dims=3;80;160
uff-input-blob-name=input_1
batch-size=8
## 1=Primary 2=Secondary
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
## 0=Detection 1=Classifier 2=Segmentation
network-type=1
num-detected-classes=2
interval=0
gie-unique-id=2
output-blob-names=predictions/Softmax
classifier-threshold=0.2
```
I set `process-mode=1` and `network-type=1`. How can I extract the image-classification results in my program?
We are currently working on a traffic-accident recognition scenario and want to classify entire video frames directly, but our program cannot retrieve the labels (or any other output) of the primary classifier.
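To make the question concrete, the traversal I expected to perform in a pad probe is sketched below with plain-Python stand-ins for the metadata structs (a real probe would use `pyds` casts on a live pipeline, and whether a primary full-frame classifier actually attaches its results under the object metadata is exactly what I am unsure about — the hierarchy here is an assumption):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Plain-Python stand-ins for the DeepStream metadata hierarchy
# (NvDsFrameMeta -> NvDsObjectMeta -> NvDsClassifierMeta -> NvDsLabelInfo).
# In a real probe these come from pyds casts on gst_buffer_get_nvds_batch_meta().

@dataclass
class LabelInfo:                      # stands in for NvDsLabelInfo
    result_label: str
    result_prob: float

@dataclass
class ClassifierMeta:                 # stands in for NvDsClassifierMeta
    label_info_list: List[LabelInfo] = field(default_factory=list)

@dataclass
class ObjectMeta:                     # stands in for NvDsObjectMeta
    classifier_meta_list: List[ClassifierMeta] = field(default_factory=list)

@dataclass
class FrameMeta:                      # stands in for NvDsFrameMeta
    obj_meta_list: List[ObjectMeta] = field(default_factory=list)

def extract_labels(frame_meta: FrameMeta) -> List[Tuple[str, float]]:
    """Walk frame -> object -> classifier -> label, collecting (label, prob)."""
    labels = []
    for obj in frame_meta.obj_meta_list:
        for cls_meta in obj.classifier_meta_list:
            for info in cls_meta.label_info_list:
                labels.append((info.result_label, info.result_prob))
    return labels

# Example: one frame carrying a single classification result.
frame = FrameMeta([ObjectMeta([ClassifierMeta([LabelInfo("accident", 0.87)])])])
print(extract_labels(frame))  # -> [('accident', 0.87)]
```

With the real pipeline, `extract_labels` would be called from inside the probe on each `NvDsFrameMeta`; in my runs the equivalent lists come back empty, which is the problem described above.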