How to dump feature values while running inference with a TensorRT model

Description

I need to get the outputs of several layers of a ResNet.
I found the inference code below for a TensorRT engine.
What should I do to extract intermediate layer outputs from this engine?

import pycuda.driver as cuda
import pycuda.autoinit  # initializes a CUDA context

def doInference(context, host_in, host_out, batchSize):
    engine = context.engine
    assert engine.num_bindings == 2  # one input binding, one output binding

    # allocate device buffers matching the host buffers
    device_in = cuda.mem_alloc(host_in.nbytes)
    device_out = cuda.mem_alloc(host_out.nbytes)
    bindings = [int(device_in), int(device_out)]
    stream = cuda.Stream()

    # copy input to device, run the forward pass, copy the output back
    cuda.memcpy_htod_async(device_in, host_in, stream)
    context.execute_async(batch_size=batchSize, bindings=bindings,
                          stream_handle=stream.handle)
    cuda.memcpy_dtoh_async(host_out, device_out, stream)
    stream.synchronize()
    return host_out

Hi,
Can you try running your model with the trtexec command and share the --verbose log in case the issue persists?
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
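For reference, a typical invocation for an ONNX model (the file name here is just a placeholder) looks like:

trtexec --onnx=model.onnx --verbose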

You can refer to the link below for the list of supported operators. In case an operator is not supported, you need to create a custom plugin to support that operation.

Also, please share your model and script if you haven't already, so that we can help you better.

Meanwhile, for some common errors and queries, please refer to the link below:

Thanks!

You should register the layers whose features you want as additional output nodes. Export that model to TRT, then call context.execute_async(bindings=bindings, stream_handle=stream.handle).
At the end of the forward pass your features will be available in the output buffers. Copy them from device to host with cuda.memcpy_dtoh_async(host_out, device_out, stream).
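For example, if you build the engine from an ONNX file with the TensorRT Python API, you can mark an intermediate layer's output tensor as an extra network output before building. The sketch below is only an illustration, not code from this thread; the file name, layer index, and workspace size are placeholders, and it assumes a TensorRT 7.x-style API with the ONNX parser:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine_with_extra_output(onnx_path, layer_index):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(str(parser.get_error(0)))

    # Mark the output tensor of the chosen layer as an additional network
    # output. The built engine then gets one more binding for that feature map.
    feature_tensor = network.get_layer(layer_index).get_output(0)
    network.mark_output(feature_tensor)

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB, adjust as needed
    return builder.build_engine(network, config)

The resulting engine exposes the marked feature map as an extra output binding, which you read back just like the regular output.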

Be aware of the data size and allocate your memory accordingly.
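Since the engine now has more than two bindings, the hard-coded doInference above no longer fits. The following sketch (my own illustration, assuming static binding shapes and that binding 0 is the single input) allocates one host/device buffer pair per binding and copies every output, including the intermediate features, back to the host:

import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt

def infer_all_outputs(context, host_input):
    engine = context.engine
    stream = cuda.Stream()
    host_bufs, device_bufs, bindings = [], [], []

    # allocate a pagelocked host buffer and a device buffer for every binding,
    # sized from the binding shape and dtype
    for i in range(engine.num_bindings):
        shape = engine.get_binding_shape(i)
        dtype = trt.nptype(engine.get_binding_dtype(i))
        host_buf = cuda.pagelocked_empty(trt.volume(shape), dtype)
        device_buf = cuda.mem_alloc(host_buf.nbytes)
        host_bufs.append(host_buf)
        device_bufs.append(device_buf)
        bindings.append(int(device_buf))

    # copy the input (assumed to be binding 0), run the network,
    # then copy back every output binding
    np.copyto(host_bufs[0], host_input.ravel())
    cuda.memcpy_htod_async(device_bufs[0], host_bufs[0], stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    outputs = {}
    for i in range(engine.num_bindings):
        if not engine.binding_is_input(i):
            cuda.memcpy_dtoh_async(host_bufs[i], device_bufs[i], stream)
            outputs[engine.get_binding_name(i)] = host_bufs[i]
    stream.synchronize()
    return outputs

Each returned array is flat; reshape it with the corresponding engine.get_binding_shape(i) if you need the original layout.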