Hi,
One of our DL models has multiple inputs and outputs. How do we mark the inputs and outputs in TensorRT?
Hi,
Here is a sample for your reference:
https://github.com/dusty-nv/jetson-inference/blob/master/tensorNet.cpp
The inputs are set automatically from the prototxt.
Each output can be marked through the tensor returned by the Caffe parser, for example:
network->markOutput(*blobNameToTensor->find("LayerXXX"));
network->markOutput(*blobNameToTensor->find("LayerYYY"));
...
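Putting it together, here is a minimal sketch of the whole pattern, following the linked tensorNet.cpp; the helper function markOutputs and the blob names are illustrative placeholders, not part of the TensorRT API:

// Minimal sketch: parse a Caffe model, then mark several blobs as outputs.
#include <NvInfer.h>
#include <NvCaffeParser.h>
#include <string>
#include <vector>

void markOutputs(nvinfer1::INetworkDefinition& network,
                 nvcaffeparser1::ICaffeParser& parser,
                 const char* prototxt, const char* caffemodel,
                 const std::vector<std::string>& outputBlobs)
{
    // The parser returns a name -> ITensor mapping for every blob.
    const nvcaffeparser1::IBlobNameToTensor* blobNameToTensor =
        parser.parse(prototxt, caffemodel, network, nvinfer1::DataType::kFLOAT);

    // Mark each requested blob as a network output.
    for (const std::string& name : outputBlobs)
        network.markOutput(*blobNameToTensor->find(name.c_str()));
}

// Usage: markOutputs(*network, *parser, "deploy.prototxt", "model.caffemodel",
//                    {"LayerXXX", "LayerYYY"});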
Thanks.
Hi,
I have tried to mark multiple outputs the same way I mark a single output, but it reports an error at runtime.
Please see the attached image for reference.
Hi,
A segmentation fault is usually caused by an illegal memory access.
Please note that the output memory from TensorRT is GPU memory.
You will need to copy it back to the CPU before accessing it from host code.
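For reference, here is a minimal sketch of multi-output inference with explicit device-to-host copies; the binding names ("data", "out1", "out2") and the element counts are placeholders for your actual blobs:

// Minimal sketch: one device buffer per binding, and EVERY output is
// copied back to the host after inference.
#include <NvInfer.h>
#include <cuda_runtime.h>

void inferMultiOutput(nvinfer1::ICudaEngine& engine,
                      nvinfer1::IExecutionContext& context,
                      const float* hostInput, size_t inputCount,
                      float* hostOut1, size_t out1Count,
                      float* hostOut2, size_t out2Count)
{
    void* buffers[3];  // one slot per binding, ordered by binding index
    const int inIdx   = engine.getBindingIndex("data");
    const int out1Idx = engine.getBindingIndex("out1");
    const int out2Idx = engine.getBindingIndex("out2");

    cudaMalloc(&buffers[inIdx],   inputCount * sizeof(float));
    cudaMalloc(&buffers[out1Idx], out1Count  * sizeof(float));
    cudaMalloc(&buffers[out2Idx], out2Count  * sizeof(float));

    // Host -> device, run batch-size-1 inference, then device -> host.
    cudaMemcpy(buffers[inIdx], hostInput, inputCount * sizeof(float),
               cudaMemcpyHostToDevice);
    context.execute(1, buffers);
    cudaMemcpy(hostOut1, buffers[out1Idx], out1Count * sizeof(float),
               cudaMemcpyDeviceToHost);
    cudaMemcpy(hostOut2, buffers[out2Idx], out2Count * sizeof(float),
               cudaMemcpyDeviceToHost);

    for (void* b : buffers)
        cudaFree(b);
}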
Thanks.
Hi,
I have successfully used the method shown in the attachment in other cases without multiple outputs, so I think the key issue is in the multi-output handling.
Can you help check whether this is the right way to deal with multiple outputs?
Hi,
I have now solved the multi-output issue, but the output results are not correct.
How can I get a layer's weight data in TensorRT?
Hi,
We don't have an API for that, but the weights should be identical to those in the input weight file.
In our experience, this kind of issue usually comes from the network architecture rather than from weight parsing.
Would you mind checking where the difference comes from by marking different layers as output, one at a time?
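For example, using the same markOutput pattern as above with a hypothetical intermediate blob name "conv3", you can expose an intermediate result and compare it with Caffe's output for the same blob:

network->markOutput(*blobNameToTensor->find("conv3"));

If the intermediate results match, move the marked output later in the network until they diverge.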
Thanks.
Hi,
In our pipeline, we need to retrieve some layers' weights for further processing.
Caffe can get the weights, and TensorRT's Python API can also get them; moreover, in the C++ custom-layer support, TensorRT has methods to store weight data, so it is strange that TensorRT has no method to read weights back.
Do you have any method in TensorRT to get the weight data?
Hi,
There are two phases in TensorRT: engine creation and inference.
When creating an engine, the weight values are deserialized and passed to the corresponding layers.
For a plugin, you can get them from the constructor:
FCPlugin(const Weights* weights, int nbWeights, int nbOutputChannels) : mNbOutputChannels(nbOutputChannels)
You will need to save the values in a member variable.
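As a minimal sketch (not a complete plugin; the class follows the style of the FCPlugin in the TensorRT samplePlugin example, and the deepCopy helper is illustrative), the constructor can deep-copy the incoming weights into member storage so they stay readable afterwards:

// Minimal sketch: only the constructor and weight storage are shown;
// the remaining plugin interface overrides are omitted for brevity.
#include <NvInfer.h>
#include <cstdlib>
#include <cstring>

class FCPlugin /* : public nvinfer1::IPluginExt, other overrides omitted */
{
public:
    FCPlugin(const nvinfer1::Weights* weights, int nbWeights, int nbOutputChannels)
        : mNbOutputChannels(nbOutputChannels)
    {
        // For a fully connected layer the Caffe parser passes two blobs:
        // weights[0] = kernel, weights[1] = bias.
        mKernelWeights = deepCopy(weights[0]);
        mBiasWeights   = deepCopy(weights[1]);
    }

    // The saved copies can be inspected at any time, e.g. for debugging.
    const nvinfer1::Weights& kernel() const { return mKernelWeights; }
    const nvinfer1::Weights& bias()   const { return mBiasWeights; }

private:
    // Deep-copy a Weights struct; the parser may release the original
    // buffer once the network is built. Assumes FP32 weights.
    static nvinfer1::Weights deepCopy(const nvinfer1::Weights& src)
    {
        nvinfer1::Weights dst = src;
        const size_t bytes = src.count * sizeof(float);
        void* buf = std::malloc(bytes);
        std::memcpy(buf, src.values, bytes);
        dst.values = buf;
        return dst;
    }

    int mNbOutputChannels;
    nvinfer1::Weights mKernelWeights{};
    nvinfer1::Weights mBiasWeights{};
};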
We don't pass the weight information during inference, since that would be inefficient.
Thanks.