Please provide complete information as applicable to your setup.
• Hardware Platform -----> GPU
• DeepStream Version -----> 6.4
• TensorRT Version -----> 8.6
• NVIDIA GPU Driver Version -----> 545
• Issue Type -----> new requirements
I converted a publicly available head-pose model and want to run it in a DeepStream pipeline. The pipeline requires a .txt config file where I need to set all the parameters; I will attach it …
I converted the model to .onnx, ran it with an ONNX script, and got the output as well … I will attach that code too.
I also ran inference with the .engine file outside the pipeline, using a standalone script; it works fine and gives me output …
My problem is that when I load the same model in the DeepStream pipeline with the config file below, the output is:
```
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input.1   3x224x224
1   OUTPUT kFLOAT 306       0
2   OUTPUT kFLOAT 310       0
3   OUTPUT kFLOAT 314       0

python3: nvdsinfer_context_impl.cpp:1446: NvDsInferStatus nvdsinfer::NvDsInferContextImpl::resizeOutputBufferpool(uint32_t): Assertion `bindingDims.numElements > 0' failed.
Aborted (core dumped)
```
How can I set the correct config file and run inference properly?
I need help ASAP.
I will attach my ONNX and .engine files along with the script, all in one zip: nvidia_forum.zip (2.6 MB)
@fanzh Thanks for the reply. I was able to solve that problem and the model is now running inference, but only the first layer (306) is showing up in the tensor meta.
I'm attaching the config file here:
```ini
[property]
gpu-id=0
model-engine-file=/opt/nvidia/deepstream/deepstream-6.4/nvodin23/models/assets/removeSidefaces/side_face.engine
# dynamic batch size
batch-size=1
```
@fanzh
What I did is change the model's last output layers to (1x1), (1x1), (1x1) for roll, yaw, and pitch. After that I was able to see the new names: roll (316, last layer), yaw (318, last layer), pitch (320, last layer). What is happening is that only 316 (the roll layer) comes out as an output, not the other two layers.
About your question on "nvdsinfer.patch": I am not using it, because I changed the model output to [batch, output] instead.
Did you regenerate the model? Do you mean the 306/310/314 dimensions are not 0 now? Could you share a new log? If network-type is 100, please set output-tensor-meta to 1 and refer to the sample deepstream-infer-tensor-meta-test; the inference output is processed in pgie_pad_buffer_probe.
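As a sketch, the two settings mentioned above sit in the `[property]` section of the nvinfer config file (the engine path here is a placeholder):

```ini
[property]
gpu-id=0
# placeholder path
model-engine-file=/path/to/side_face.engine
batch-size=1
# 100 = "other" network type: nvinfer skips its built-in post-processing
network-type=100
# attach the raw output tensors to the metadata so a pad probe can read them
output-tensor-meta=1
```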
```
0:00:04.329609771 1331153 0x55662887c8c0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<removeSidefaces('face', 'tracker_face', 'removeSidefaces')> NvDsInferContext[UID 100]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 100]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/xxxxx23/models/assets/removeSidefaces/side_face.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input.1   3x224x224
1   OUTPUT kFLOAT 316       1
2   OUTPUT kFLOAT 318       1
3   OUTPUT kFLOAT 320       1

0:00:04.451219265 1331153 0x55662887c8c0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<removeSidefaces('face', 'tracker_face', 'removeSidefaces')> NvDsInferContext[UID 100]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 100]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/nvodin23/models/assets/removeSidefaces/side_face.engine
0:00:04.454592421 1331153 0x55662887c8c0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<removeSidefaces('face', 'tracker_face', 'removeSidefaces')> [UID 100]: Load new model:/opt/nvidia/deepstream/deepstream-6.4/xxxxx23/models/assets/removeSidefaces/removeSidefaces.txt sucessfully
```
Yes, network-type is 100, and in the config file I set output-tensor-meta=1.
Only the 316 layer output is coming; the other two are not. @fanzh
Let me check with deepstream-infer-tensor-meta-test and get back here.
Model-wise I don't have a problem anymore; all layers are coming. But after the layers, when I read the output, I think I'm making some mistake (maybe), because the output values are coming out very wrong …
Hi @fanzh ,
One more quick question:
```python
transform_test = transforms.Compose([
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```
I read the post "How to set the correct config for a pytorch model in nvinfer?", but I am not able to properly understand the calculation they explain there. I need to do the above pre-processing for the head-pose model …
Can you simplify it a little?
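For what it's worth, here is my understanding of the mapping, as a sketch. torchvision computes `y = (x/255 - mean)/std`, while nvinfer computes `y = net-scale-factor * (x - offsets)` with `x` in 0..255. Since nvinfer only accepts a single global scale, the per-channel std is approximated here by its average:

```python
# Derive nvinfer [property] values from torchvision's Normalize(mean, std).
# nvinfer applies:     y = net-scale-factor * (x - offsets),  x in [0, 255]
# torchvision applies: y = (x / 255 - mean) / std
# nvinfer has only ONE global scale, so the per-channel std is
# approximated by its average.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

avg_std = sum(std) / len(std)
net_scale_factor = 1.0 / (255.0 * avg_std)
offsets = [m * 255.0 for m in mean]

print(f"net-scale-factor={net_scale_factor:.10f}")
print("offsets=" + ";".join(f"{o:.3f}" for o in offsets))
```

With these ImageNet values this gives roughly `net-scale-factor=0.0173520736` and `offsets=123.675;116.280;103.530`. Note that `CenterCrop(224)` has no direct nvinfer equivalent: nvinfer scales the frame/object crop to the network resolution instead, so some numerical difference from the PyTorch pipeline is expected.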
For NvDsInferLayerInfo, you need pyds.NvDsInferLayerInfo.cast to convert. Please refer to the conversion sample there:
```python
if seg_user_meta and seg_user_meta.base_meta.meta_type == \
        pyds.NVDSINFER_SEGMENTATION_META:
    try:
        # Note that seg_user_meta.user_meta_data needs a cast to
        # pyds.NvDsInferSegmentationMeta
        # The casting is done by pyds.NvDsInferSegmentationMeta.cast()
        # The casting also keeps ownership of the underlying memory
        # in the C code, so the Python garbage collector will leave
        # it alone.
        segmeta = pyds.NvDsInferSegmentationMeta.cast(seg_user_meta.user_meta_data)
    except StopIteration:
        break
    # Retrieve mask data in the numpy format from segmeta
    # Note that pyds.get_segmentation_masks() expects object of
    # type NvDsInferSegmentationMeta
    masks = pyds.get_segmentation_masks(segmeta)
    masks = np.array(masks, copy=True, order='C')
```
This is the code.
You suggested that instead of pyds.get_segmentation_masks I can use pyds.NvDsInferLayerInfo.cast.
```python
l_user_class = obj_meta.obj_user_meta_list
while l_user_class is not None:
    try:
        # Note that l_user_class.data needs a cast to pyds.NvDsUserMeta
        # The casting also keeps ownership of the underlying memory
        # in the C code, so the Python garbage collector will leave
        # it alone.
        user_meta_class = pyds.NvDsUserMeta.cast(l_user_class.data)
    except StopIteration:
        break
    if user_meta_class.base_meta.meta_type != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
        # advance before continue, otherwise this loops forever
        l_user_class = l_user_class.next
        continue
    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta_class.user_meta_data)
    for i in range(tensor_meta.num_output_layers):
        layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
        if layer.layerName == "318":
            ptr1 = ctypes.cast(pyds.get_ptr(layer.buffer),
                               ctypes.POINTER(ctypes.c_float))
            probs1 = np.array(np.ctypeslib.as_array(ptr1, shape=(layer.dims.numElements,)),
                              copy=True)
            yaw_val = probs1[0]
    l_user_class = l_user_class.next
```
This is what I was doing.
I'm not clear where you are saying to make changes … If you could be more specific about it, that would be great.
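If it helps, here is a sketch of how I would extend the probe snippet above to read all three output layers instead of only "318". The 316→roll, 318→yaw, 320→pitch mapping is taken from the earlier post and should be verified against the ONNX export. The buffer handling is split into a plain helper so it works without pyds; in the actual pad probe, the tuples would be built from `pyds.get_nvds_LayerInfo()` and `pyds.get_ptr(layer.buffer)` as shown in the docstring:

```python
import ctypes
import numpy as np

# Assumed mapping from engine output names to angles (from the engine log;
# verify against the ONNX export).
ANGLE_LAYERS = {"316": "roll", "318": "yaw", "320": "pitch"}

def buffer_to_array(addr, num_elements):
    """Copy num_elements float32 values from a raw buffer address."""
    ptr = ctypes.cast(addr, ctypes.POINTER(ctypes.c_float))
    return np.ctypeslib.as_array(ptr, shape=(num_elements,)).copy()

def collect_angles(layers):
    """layers: iterable of (layer_name, buffer_address, num_elements).

    In the pad probe this would be built roughly as:
        [(l.layerName, pyds.get_ptr(l.buffer), l.dims.numElements)
         for l in (pyds.get_nvds_LayerInfo(tensor_meta, i)
                   for i in range(tensor_meta.num_output_layers))]
    Returns a dict like {"roll": ..., "yaw": ..., "pitch": ...}.
    """
    angles = {}
    for name, addr, n in layers:
        key = ANGLE_LAYERS.get(name)
        if key is not None:
            angles[key] = float(buffer_to_array(addr, n)[0])
    return angles
```

This way, one pass over the tensor meta fills all three angles, and a missing layer simply stays absent from the dict instead of silently producing only one value.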