1. What does output-blob-names mean? The official documentation on output-blob-names only says one sentence, “output layer name”, which is not very clear. Can you help explain it? I don’t quite understand: is it used to define neural network layers in the configuration, or something else?
2. What should I understand about the difference between output-blob-names and parse-bbox-func-names?
Yes. “output-blob-names” is where you fill in the names of the model network’s output layers. It does not define the layers; it tells nvinfer some basic information about the model.
“parse-bbox-func-name” is the name of the callback postprocessing function for a detector. It is only needed for customized postprocessing. Please refer to the nvinfer source code for details.
/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer and /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer
Please read the code. It will make things much clearer, and is much better than raising topics in the forum.
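To make the distinction concrete, here is a minimal nvinfer config sketch. It assumes a detector whose output layers are named conv2d_bbox and conv2d_cov/Sigmoid (the names from the sample model discussed later in this thread); the parser function name and library path are hypothetical placeholders:

```ini
[property]
# Names of the model's output layers. These must match the layers the
# model actually has; conv2d_bbox and conv2d_cov/Sigmoid are sample names.
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

# Only needed for customized postprocessing: the exported C function name
# and the shared library that implements it (hypothetical values).
parse-bbox-func-name=NvDsInferParseCustomMyDetector
custom-lib-path=/path/to/libnvdsinfer_custom_impl.so
```

In other words, output-blob-names describes the model to nvinfer, while parse-bbox-func-name plugs your own code into nvinfer.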
1. Is the network layer set by “output-blob-names” already pre-defined in DeepStream?
2. Where can I see all the pre-defined network layers in DeepStream, so I can swap them in and test them to find a layer more suitable for my business scenario? After all, the layers given in the samples are very limited.
3. For example, for output-blob-names=MarkOutput_0, where can I see the definition of MarkOutput_0?
DeepStream never trains any model. If you want a model training tool, please refer to the TAO toolkit. TAO Toolkit | NVIDIA Developer
DeepStream can also adapt to Caffe, ONNX, … These models are trained with Caffe https://caffe.berkeleyvision.org/, PyTorch https://pytorch.org/, …
Please ask the person who trained and provided the model to you. If it is a TAO pre-trained model, you can ask in the TAO forum. Latest Intelligent Video Analytics/TAO Toolkit topics - NVIDIA Developer Forums
Please check with the person who trained and provided the model to you.
1. Does the network layer output defined by “output-blob-names” only support models from the Caffe framework?
2. If I want to use PyTorch, is it true that I can’t use “output-blob-names”, and must instead customize “engine-create-func-name” to achieve this?
It works with Caffe models, UFF models, …
How much do you know about TensorRT? TensorRT SDK | NVIDIA Developer
“output-blob-names” does not define the output layers. The model is constructed and trained outside DeepStream; you fill in “output-blob-names” to tell DeepStream (TensorRT) the layer names of the model. DeepStream never defines any part of the model.
“engine-create-func-name” is “Name of the custom TensorRT CudaEngine creation function”. Please refer to Using a Custom Model with DeepStream — DeepStream 6.1.1 Release documentation
Please read the user manual from the beginning. Welcome to the DeepStream Documentation — DeepStream 6.1.1 Release documentation
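As a sketch of the custom-engine path mentioned above, the config would point nvinfer at an exported engine-creation function instead of at model files. The function name and library path below are hypothetical placeholders:

```ini
[property]
# Name of the exported custom TensorRT CudaEngine creation function
# (hypothetical name) and the custom library that implements it.
engine-create-func-name=NvDsInferCreateMyCudaEngine
custom-lib-path=/path/to/libnvdsinfer_custom_impl.so
```

With this path, your library builds the engine itself, so nvinfer does not need output-blob-names to parse the model.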
When I use yolov5s in DeepStream, I don’t fill in “output-blob-names” to tell DeepStream (TensorRT) the layer names of the model. Instead, I directly configure the postprocessing function with the parameter “parse-bbox-func-name”. Why don’t custom models like yolov5s need “output-blob-names” to be configured?
Is your yolov5s model an ONNX model?
For yolov5s, I converted the model to wts and cfg files, which are loaded through these entries in the configuration file:
custom-network-config=yolov5s_fast.cfg
model-file=yolov5s_fast.wts
If it is a Caffe model, once the engine is generated through DeepStream, can I get the data directly with the output layer names, i.e. output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid?
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
For ONNX models or other PyTorch-exported models, “output-blob-names” is not necessary. For UFF and Caffe models, a wrong “output-blob-names” will cause engine building to fail; you will not get anything if “output-blob-names” is not filled in correctly. Please ask the person who provided the model to you for the output layer names.
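To illustrate the Caffe case, a minimal sketch using the layer names mentioned above; the model and prototxt file names are hypothetical placeholders:

```ini
[property]
# Caffe model files (hypothetical file names; use your own).
model-file=resnet10.caffemodel
proto-file=resnet10.prototxt
# For Caffe (and UFF) models these must exactly match the model's
# output layer names, or engine building will fail.
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
```

For the yolov5s cfg/wts path and for ONNX models, TensorRT can determine the outputs itself, which is why that setup works without output-blob-names.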