• Hardware Platform (Jetson / GPU)
• DeepStream Version - 6.1
• TensorRT Version - 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only) - 535
• Issue Type - question
Hello! I am trying to use the torchvision ResNet50 model as a simple DeepStream frame classifier.
I convert it to an .onnx model using this script:
import torch
import torchvision.models as models
backbone = models.resnet50(pretrained=True)
backbone.eval()
# Set the input shape
dummy_input = torch.randn(1, 3, 224, 224)
# Export the model to ONNX format
torch.onnx.export(backbone, dummy_input, "./models/resnet50.onnx")
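As an aside: with this export, the ONNX output tensor gets an auto-generated name (the "495" that later shows up in output-blob-names). A variant of the export that names the graph tensors, purely as a hedged sketch with illustrative names and opset, could look like this:

import torch
import torchvision.models as models

backbone = models.resnet50(pretrained=True)
backbone.eval()

dummy_input = torch.randn(1, 3, 224, 224)

# Naming the input/output tensors is optional, but it keeps
# output-blob-names in the nvinfer config readable and easy to sync.
# "input", "output" and opset 13 are illustrative choices, not requirements.
torch.onnx.export(
    backbone,
    dummy_input,
    "./models/resnet50.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)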
The engine file is built successfully, but the classes are not displayed on the video and there is also nothing in the metadata. Perhaps I’m doing something wrong?
I read the metadata in a pad probe as follows:
def sink_pad_buffer_probe(self, pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK
    # Retrieve batch metadata from the gst_buffer.
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer).
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta.
            # The casting is done by pyds.NvDsFrameMeta.cast().
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_cls = frame_meta.obj_meta_list
        while l_cls is not None:
            try:
                # Casting l_cls.data to pyds.NvDsObjectMeta
                cls_meta = pyds.NvDsObjectMeta.cast(l_cls.data)
                if cls_meta is not None:
                    print(cls_meta.class_id)
            except StopIteration:
                break
            try:
                l_cls = l_cls.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
DeepStream nvinfer only supports pre-processing of the form y = net-scale-factor * (x - mean); please refer to the explanation in the nvinfer documentation.
The nvinfer plugin and its low-level library are open source, so you can modify the code to customize pre-processing, or you can use nvdspreprocess to customize it. Please refer to the documentation and the sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-preprocess-test/.
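As a rough illustration of how torchvision's ImageNet normalization maps onto that single scale/offset form (nvinfer takes only one scalar net-scale-factor, so the per-channel std has to be approximated by one value), a small sketch:

# Map torchvision's ImageNet normalization
#     y = (x / 255 - mean) / std
# onto nvinfer's
#     y = net-scale-factor * (x - offsets)
# The per-channel std is approximated by its average; whether that is
# close enough depends on the model.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

offsets = [round(m * 255.0, 3) for m in mean]            # [123.675, 116.28, 103.53]
net_scale_factor = 1.0 / (255.0 * sum(std) / len(std))   # ~0.01735207

print(f"net-scale-factor={net_scale_factor:.8f}")
print("offsets=" + ";".join(str(o) for o in offsets))

The resulting values would go into the [property] group of the nvinfer config as net-scale-factor and offsets; model-color-format still has to match the channel order the model was trained with.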
Thanks for the tip! I tried to add the preprocessing plugin to the pipeline and gave it the config from /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-preprocess-test/test/config_preprocess_classifier_resnet50.txt.
There is now a green bbox in the image; however, obj_meta is still null :(
I read the metadata exactly as in the official DeepStream examples, but it is empty. The green square simply appears in the corner of the screen and does not change in any way, regardless of the class of the object in the image. Could there be a problem with the preprocessing configuration?
def frame_sink_pad_buffer_probe(self, pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:  # Value here is null
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
Shouldn’t the pad probe come after inference? Should it be added to a sink pad? I tried different placements: the src and sink pads of nvinfer, and both the src and sink pads of nvosd. The result did not change.
The probe function should be added after inference; see the sketch below this reply for one placement.
The nvinfer plugin is open source. You can add logging in attach_metadata_classifier of DeepStream to check whether there is any object; note that you then need to rebuild and replace /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so.
Or you can provide the ONNX model and a simplified version of your code and let me have a try.
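A minimal sketch of what "added after inference" can mean in the Python pipeline, assuming the nvinfer element and the probe callback already exist (names here are illustrative, not from the attached code):

import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def attach_probe_after_inference(pgie, probe_func):
    # Attach the buffer probe to the src pad of the nvinfer element, so the
    # buffers it sees already carry the inference metadata. The sink pad of
    # the element that follows nvinfer (e.g. nvosd) works the same way.
    pgie_src_pad = pgie.get_static_pad("src")
    if not pgie_src_pad:
        sys.stderr.write("Unable to get src pad of nvinfer\n")
        return
    pgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, probe_func, 0)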
Sure! Here is the simplified code to export the model and run the pipeline. I would like to try to solve the problem with the correct settings rather than code changes, if that is possible, of course. resnet50_issue.zip (90.5 MB)
The green rectangle (0,0,100,100) is the ROI area, not an object bbox. I ran the code with logging added in the C function attach_metadata_classifier; the model gives a null result.
DeepStream nvinfer supports pre-processing of the form y = net-scale-factor * (x - mean); please refer to config_infer_secondary_carcolor.txt in the DeepStream SDK for how to set this value. net-scale-factor=1.0 is wrong.
Judging from the np.argmax in your Python code, you need to do the argmax yourself after inference. Please refer to the sample deepstream-infer-tensor-meta-test in the DeepStream SDK: if network-type is 100, the raw inference results are attached to the metadata, and you can then process them in a probe function.
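A rough sketch of that argmax step, modeled on how the Python deepstream-ssd-parser sample reads tensor meta through the pyds helpers; the single-output-layer and 1000-class assumptions come from the ImageNet ResNet50 export, not from this thread:

import pyds


def parse_classifier_output(tensor_meta, num_classes=1000):
    # Pull the raw classifier scores out of NvDsInferTensorMeta and take the
    # argmax in Python. Assumes one output layer with num_classes scores
    # (1000 for the ImageNet-trained ResNet50).
    layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
    # Read the scores element by element via the pyds helper.
    scores = [pyds.get_detections(layer.buffer, i) for i in range(num_classes)]
    class_id = max(range(num_classes), key=lambda i: scores[i])
    return class_id, scores[class_id]

This would be called from the probe right after tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data); the scores are raw logits, so a softmax would be needed before comparing against probability-style thresholds.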
Hi, I’m trying to get metadata using the method described in deepstream-infer-tensor-meta-test. However, the buffer still contains null. I am using the following code and config.
def frame_sink_pad_buffer_probe(self, pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK
    # Retrieve batch metadata from the gst_buffer.
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer).
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta.
            # The casting is done by pyds.NvDsFrameMeta.cast().
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            try:
                # Casting l_user.data to pyds.NvDsUserMeta
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                print(f"user_meta: {user_meta}")
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                layer_info = pyds.NvDsInferLayerInfo.cast(tensor_meta.output_layers_info(0))
                print("Output layer info")
                print(f"name: {layer_info.layerName}")
                print(f"{tensor_meta.out_buf_ptrs_dev}")
            except StopIteration:
                break
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    self.fps.update_fps()
    return Gst.PadProbeReturn.OK
[property]
process-mode=1 # Process full frames
network-type=100 # Other
gpu-id=0
net-scale-factor=0.01735207
model-color-format=0
onnx-file=/home/user/classify/models/resnet50.onnx
model-engine-file=/home/user/classify/models/resnet50.onnx_b1_gpu0_fp32.engine
batch-size=1
interval=0
network-mode=0
labelfile-path=/home/user/classify/config/labels.txt
output-blob-names=495
output-tensor-meta=1
classifier-threshold=0.51
gie-unique-id=1
cluster-mode=2
[class-attrs-all]
pre-cluster-threshold=0.2
topk=20
nms-iou-threshold=0.5
Looks like I need custom preprocessing in the nvinfer plugin and a custom parsing lib.