I'm not able to set the correct parameters (roll, yaw, pitch) for a custom head-pose estimation model

Please provide complete information as applicable to your setup.

• Hardware Platform -----> GPU
• DeepStream Version -----> 6.4
• TensorRT Version -----> 8.6
• NVIDIA GPU Driver Version -----> 545
• Issue Type -----> new requirements

I converted a publicly available head-pose model and want to run inference on it in a DeepStream pipeline. DeepStream requires a .txt config file where I have to specify all the parameters; I will attach that.
I converted the model to .onnx, ran it with an ONNX script, and got the output as well; I will attach that code too.
I also ran inference with the .engine file outside the pipeline, using a standalone script, and it works fine and gives me output.
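For context, my standalone ONNX check is essentially the following (a sketch, not my exact script; the file name is a placeholder and the real script feeds a normalized face crop):

import numpy as np
import onnxruntime as ort

# Placeholder path; the actual model is in the attached zip.
sess = ort.InferenceSession("head_pose_v3_self.onnx",
                            providers=["CPUExecutionProvider"])

# Dummy 1x3x224x224 input, NCHW float32.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])  # three heads: roll, yaw, pitch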

My problem is that when I load the same model in the DeepStream pipeline, with a config file like this:

[property]
gpu-id=0
net-scale-factor=0.01735207357279195
offsets=123.675;116.28;103.53
model-color-format=0
model-engine-file=/opt/nvidia/deepstream/deepstream-6.4/nvodin23/models/assets/removeSidefaces/head_pose_v3_self.engine
batch-size=1
infer-dims=3;224;224

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=0
network-type=100
gie-unique-id=101
operate-on-gie-id=2
uff-input-blob-name=input.1
output-blob-names=306;310;314

# Infer Processing Mode: 1=Primary Mode 2=Secondary Mode

process-mode=2
output-tensor-meta=1
maintain-aspect-ratio=0

After that, the output is:
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input.1 3x224x224
1 OUTPUT kFLOAT 306 0
2 OUTPUT kFLOAT 310 0
3 OUTPUT kFLOAT 314 0

python3: nvdsinfer_context_impl.cpp:1446: NvDsInferStatus nvdsinfer::NvDsInferContextImpl::resizeOutputBufferpool(uint32_t): Assertion `bindingDims.numElements > 0' failed.
Aborted (core dumped)
How can I set up the correct config file and run inference properly?
I need help ASAP.

I will attach my .onnx and .engine files along with the script, all in one zip:
nvidia_forum.zip (2.6 MB)

Please refer to this topic.

@fanzh Thanks for the reply. I was able to solve that problem, and the model is now running inference, but only the first layer (306) is showing up in the tensor meta.
I’m attaching the config file here ---->
[property]
gpu-id=0
model-engine-file=/opt/nvidia/deepstream/deepstream-6.4/nvodin23/models/assets/removeSidefaces/side_face.engine
#dynamic batch size
batch-size=1

# 0=FP32 and 1=INT8 mode

network-mode=0
num-detected-classes=3
output-blob-names=316;318;320
#0=Detection 1=Classifier 2=Segmentation 100=other
network-type=100

# Enable tensor metadata output

output-tensor-meta=1
#1-Primary 2-Secondary
process-mode=2
gie-unique-id=100
operate-on-gie-id=2
net-scale-factor=1.0
#offsets=0.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0

There are another two layers for yaw and pitch. Can you give any feedback, @fanzh?

Did you use the nvdsinfer.patch from the topic above?

@fanzh
What I have done is change the model's last output layers to (1*1), (1*1), (1*1) for roll, yaw, pitch. After that I was able to run inference, and the layer names are now roll (316, last layer), yaw (318, last layer), pitch (320, last layer). What is happening is that only 316, the roll layer, comes out as an output; the other two layers do not.
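Roughly, the export-time change looks like this (a sketch with an illustrative wrapper; my actual model code differs):

import torch

class HeadPoseExportWrapper(torch.nn.Module):
    """Reshape each scalar head to (batch, 1) so TensorRT reports numElements > 0."""
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, x):
        roll, yaw, pitch = self.net(x)
        return roll.reshape(-1, 1), yaw.reshape(-1, 1), pitch.reshape(-1, 1)

# model = ...  # the trained head-pose network
# dummy = torch.randn(1, 3, 224, 224)
# torch.onnx.export(HeadPoseExportWrapper(model), dummy, "head_pose_reshaped.onnx")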

Our question " nvdsinfer.patch " I am not using that coz I change in model label [batch,output].

What do I need to do next?

Did you regenerate the model? Do you mean the 306/310/314 dimensions are not 0 now? Could you share a new log? If network-type is 100, please set output-tensor-meta to 1 and refer to the sample deepstream-infer-tensor-meta-test; the inference output is processed in pgie_pad_buffer_probe.
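A rough sketch of wiring such a probe with the Python bindings (element and callback names are illustrative):

import pyds
from gi.repository import Gst

def sgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    # Walk batch_meta.frame_meta_list -> obj_meta -> obj_user_meta_list
    # and pick out the NVDSINFER_TENSOR_OUTPUT_META entries here.
    return Gst.PadProbeReturn.OK

# 'sgie' is the nvinfer element running the head-pose model.
sgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER,
                                     sgie_src_pad_buffer_probe, 0)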

0:00:04.329609771 1331153 0x55662887c8c0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<removeSidefaces('face', 'tracker_face', 'removeSidefaces')> NvDsInferContext[UID 100]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 100]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/xxxxx23/models/assets/removeSidefaces/side_face.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input.1 3x224x224
1 OUTPUT kFLOAT 316 1
2 OUTPUT kFLOAT 318 1
3 OUTPUT kFLOAT 320 1

0:00:04.451219265 1331153 0x55662887c8c0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<removeSidefaces('face', 'tracker_face', 'removeSidefaces')> NvDsInferContext[UID 100]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 100]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/nvodin23/models/assets/removeSidefaces/side_face.engine
0:00:04.454592421 1331153 0x55662887c8c0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<removeSidefaces('face', 'tracker_face', 'removeSidefaces')> [UID 100]: Load new model:/opt/nvidia/deepstream/deepstream-6.4/xxxxx23/models/assets/removeSidefaces/removeSidefaces.txt sucessfully

Yes, network-type is 100, and in the config file I set output-tensor-meta=1.
Only the 316 layer output is coming; the other two are not! @fanzh

Let me check once with deepstream-infer-tensor-meta-test and get back here!

I'm attaching my head-pose config file as well for you to check:

[property]
gpu-id=0
model-engine-file=/opt/nvidia/deepstream/deepstream-6.4/xxxxx23/models/assets/removeSidefaces/side_face.engine
#dynamic batch size
batch-size=1

# 0=FP32 and 1=INT8 mode

network-mode=0
num-detected-classes=3
output-blob-names=316;318;320
#0=Detection 1=Classifier 2=Segmentation 100=other
network-type=100

# Enable tensor metadata output

output-tensor-meta=1
#1-Primary 2-Secondary
process-mode=2
gie-unique-id=100
operate-on-gie-id=2
net-scale-factor=0.01735207357
offsets=123.675;116.28;103.53
maintain-aspect-ratio=1
#0=RGB 1=BGR 2=GRAY
model-color-format=0

Hi, @fanzh

Model-wise I don't have any problem now; all the layers are coming. But I think I'm making a mistake (maybe) in how I read the output from those layers, because the values I get are very wrong.

if "318" == tensor.layerName:
    ptr1 = ctypes.cast(pyds.get_ptr(tensor.buffer), ctypes.POINTER(ctypes.c_float))
    probs1 = np.array(np.ctypeslib.as_array(ptr1, shape=(tensor.dims.numElements,)), copy=True)
    yaw_val = probs1[0]

This is how I'm getting the output.

When I run the same .engine model locally with TensorRT, it gives me the correct output!

My question is: am I reading the output the correct way? @fanzh

You can get the inference result from NvDsInferLayerInfo; you need pyds.NvDsInferLayerInfo.cast to convert. Please refer to a conversion sample.

Hi @fanzh,
One more quick question:
transform_test = transforms.Compose([transforms.CenterCrop(224),
                                     transforms.ToTensor(),
                                     transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                          std=[0.229, 0.224, 0.225])])

I read the post "How to set the correct config for a pytorch model in nvinfer?", but I'm not able to properly understand the calculation explained there. I need to do the above pre-processing for the head-pose model.
Can you simplify it a little?

Yes, please set the preprocessing parameters accordingly. Please read that topic and the plugin explanation doc.
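For this particular Normalize, the arithmetic works out as follows: nvinfer computes y = net-scale-factor * (x - offsets) on 0-255 pixel values, while torchvision computes y = (x/255 - mean) / std. Since net-scale-factor is a single scalar, the per-channel std is approximated by one average value. A quick sketch of the calculation (it reproduces the numbers already in your first config):

mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# offsets = 255 * mean
print([m * 255.0 for m in mean])   # [123.675, 116.28, 103.53]

# net-scale-factor = 1 / (255 * std), with std collapsed to its average
avg_std = sum(std) / len(std)      # 0.226
print(1.0 / (255.0 * avg_std))     # 0.01735207357279195

Note that transforms.CenterCrop(224) has no direct nvinfer equivalent; nvinfer rescales the input crop to infer-dims (controlled by maintain-aspect-ratio) rather than center-cropping.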

Okay, @fanzh, I will do that!

You said: "you can get the inference result in NvDsInferLayerInfo, you need pyds.NvDsInferLayerInfo.cast to convert. please refer to a conversion [sample]"

In that sample, the code is:
if seg_user_meta and seg_user_meta.base_meta.meta_type == pyds.NVDSINFER_SEGMENTATION_META:
    try:
        # Note that seg_user_meta.user_meta_data needs a cast to
        # pyds.NvDsInferSegmentationMeta
        # The casting is done by pyds.NvDsInferSegmentationMeta.cast()
        # The casting also keeps ownership of the underlying memory
        # in the C code, so the Python garbage collector will leave
        # it alone.
        segmeta = pyds.NvDsInferSegmentationMeta.cast(seg_user_meta.user_meta_data)
    except StopIteration:
        break
    # Retrieve mask data in the numpy format from segmeta
    # Note that pyds.get_segmentation_masks() expects object of
    # type NvDsInferSegmentationMeta
    masks = pyds.get_segmentation_masks(segmeta)
    masks = np.array(masks, copy=True, order='C')

You suggested that instead of pyds.get_segmentation_masks I can use pyds.NvDsInferLayerInfo.cast.

l_user_class = obj_meta.obj_user_meta_list
while l_user_class is not None:
    try:
        # Note that l_user_class.data needs a cast to pyds.NvDsUserMeta
        # The casting also keeps ownership of the underlying memory
        # in the C code, so the Python garbage collector will leave
        # it alone.
        user_meta_class = pyds.NvDsUserMeta.cast(l_user_class.data)
    except StopIteration:
        break
    if user_meta_class.base_meta.meta_type != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
        l_user_class = l_user_class.next  # advance before skipping, or the loop never ends
        continue

    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta_class.user_meta_data)

    for i in range(tensor_meta.num_output_layers):
        layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
        if "318" == layer.layerName:
            ptr1 = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
            probs1 = np.array(np.ctypeslib.as_array(ptr1, shape=(layer.dims.numElements,)), copy=True)
            yaw_val = probs1[0]

    l_user_class = l_user_class.next

This is what I was doing.
I'm not clear on where you are saying I should make changes. If you could be more specific, that would be great.

There is a sample showing how to parse NvDsInferTensorMeta.
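Building on your snippet above, a compact sketch that reads all three heads in the same loop (layer names taken from your engine log; helper calls as in the Python bindings):

import ctypes
import numpy as np
import pyds

# Inside the probe, after tensor_meta = pyds.NvDsInferTensorMeta.cast(...):
name_to_angle = {"316": "roll", "318": "yaw", "320": "pitch"}
angles = {}
for i in range(tensor_meta.num_output_layers):
    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
    if layer.layerName in name_to_angle:
        ptr = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
        vals = np.ctypeslib.as_array(ptr, shape=(layer.dims.numElements,))
        angles[name_to_angle[layer.layerName]] = float(vals[0])
# angles -> {'roll': ..., 'yaw': ..., 'pitch': ...}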

I was able to solve the problem. That's great; thanks @fanzh for your help. You can close the topic.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.