Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.3
• TensorRT Version: 8.2.1.8
• NVIDIA GPU Driver Version (valid for GPU only): CUDA 10.2.300
• Issue Type: questions
I built a face-recognition pipeline using the FacedetectIR model, and I'm trying to access the output tensors of the model, but it returns None. Here is the code:
Can the model work correctly with a third-party tool? Please refer to sgie_pad_buffer_probe in deepstream_infer_tensor_meta_test.cpp in the DeepStream SDK; this sample parses NvDsInferTensorMeta.
OK, I'm going to check, but I'm more interested in Python than C++, so is there a clear way or a tutorial for doing this in Python? I can barely find solutions for what I'm facing with DeepStream. @fanzh
Sorry for the late reply. There is no ready-made Python sample in deepstream_python_apps. As you know, the DeepStream SDK is a C library; Python uses the Python bindings to access it.
You can port this code:

```cpp
info->buffer = meta->out_buf_ptrs_host[i];
if (use_device_mem && meta->out_buf_ptrs_dev[i]) {
  cudaMemcpy (meta->out_buf_ptrs_host[i], meta->out_buf_ptrs_dev[i],
      info->inferDims.numElements * 4, cudaMemcpyDeviceToHost);
}
```
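The host-buffer access can be ported to Python with ctypes and NumPy. A minimal, hedged sketch: here the pointer comes from a simulated float buffer standing in for `meta->out_buf_ptrs_host[i]`; in a real probe you would obtain the raw address from the layer info via the pyds bindings instead (e.g. `pyds.get_ptr(layer.buffer)`, which you should verify against your pyds version).

```python
import ctypes
import numpy as np

# Simulated host output buffer standing in for meta->out_buf_ptrs_host[i].
# In a real probe, the address would come from pyds, not from a local array.
num_elements = 6
host_buf = (ctypes.c_float * num_elements)(1.0, 2.0, 3.0, 4.0, 5.0, 6.0)
ptr = ctypes.addressof(host_buf)  # stand-in for pyds.get_ptr(layer.buffer)

# Cast the raw address back to a float pointer and view it as a NumPy array.
# The array covers num_elements * 4 bytes, matching the cudaMemcpy size
# (inferDims.numElements * 4) in the C snippet above.
float_ptr = ctypes.cast(ptr, ctypes.POINTER(ctypes.c_float))
arr = np.ctypeslib.as_array(float_ptr, shape=(num_elements,))
print(arr.tolist())  # the tensor values as Python floats
```

Note that `np.ctypeslib.as_array` creates a view, not a copy, so the underlying buffer must stay alive while the array is in use.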
@fanzh,
I think there is an example in deepstream_ssd_parser, at line 268. I followed that example, but I still can't get the tensor back, as mentioned above.
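One common reason the tensor meta comes back as None is that raw tensor output is not enabled on the nvinfer element. A hedged config sketch, assuming the standard Gst-nvinfer config-file key (verify against your DeepStream 6.0.1 nvinfer documentation):

```ini
[property]
# Attach raw output tensors (NvDsInferTensorMeta) to the buffer metadata
# so a downstream pad probe can read them.
output-tensor-meta=1
```

Without this, nvinfer only attaches its parsed object/classifier metadata, and probes looking for NVDSINFER_TENSOR_OUTPUT_META will find nothing.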
Should I develop my solution in C++ in that case, or is there a clear way to do it in Python? And if I have to do it in C++, is there any clear documentation on how to create the plugins?
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
In your Python code above, it seems there is no equivalent of the "info->buffer = meta->out_buf_ptrs_host[i];" logic.
Please refer to doc1, doc2, and deepstream_infer_tensor_meta_test.cpp.
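For reference, a rough, untested pseudocode sketch (Python-style, assuming the pyds binding names used in deepstream_python_apps such as `pyds.gst_buffer_get_nvds_batch_meta`, `pyds.NvDsInferTensorMeta.cast`, and `pyds.get_nvds_LayerInfo`; verify them against your pyds version) of how the C probe's tensor-meta traversal maps to Python:

```
# Untested pseudocode sketch -- not a definitive implementation.
def sgie_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    # layer.buffer plays the role of
                    # "info->buffer = meta->out_buf_ptrs_host[i];" in the C sample
                    if layer.buffer:
                        pass  # read layer.buffer via ctypes/NumPy
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

If `layer.buffer` is None here, check that output-tensor-meta is enabled on the nvinfer element, and for a secondary GIE note that the tensor meta may be attached to `obj_user_meta_list` on each object rather than to the frame's user meta list.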