Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson
• DeepStream Version: 6.2
• JetPack Version: 5.1
I wanted to run the cityscapes_fan_tiny_hybrid_224.onnx model from here, and some of the advice I found is hard to parse.
In the overview section, it shows how to make the label file (this is straightforward, and I have attached the file below).
- It has 19 labels (see attached)
segformer_labels.txt (141 Bytes)
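For reference, the attached file is just the 19 class names, one per line. Assuming the usual Cityscapes training-class ordering (the file itself isn't shown on the page), it looks like:

road
sidewalk
building
wall
fence
pole
traffic light
traffic sign
vegetation
terrain
sky
person
rider
car
truck
bus
train
motorcycle
bicycle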
Secondly, it has the key parameter file:
# You can either provide the onnx model and key or trt engine obtained by using tao-converter
# onnx-file=../../path/to/.onnx file
model-engine-file=../../path/to/trt_engine
net-scale-factor=0.01735207357279195
offsets=123.675;116.28;103.53
# Since the model input channel is 3, using RGB color format.
model-color-format=0
labelfile-path=./labels.txt
infer-dims=3;1024;1024
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
interval=0
gie-unique-id=1
cluster-mode=2
## 0=Detector, 1=Classifier, 2=Semantic Segmentation, 3=Instance Segmentation, 100=Other
network-type=100
output-tensor-meta=1
num-detected-classes=20
segmentation-output-order=1
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
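As an aside, my understanding (from the nvinfer plugin docs, not the page above) is that nvinfer preprocesses each pixel as y = net-scale-factor * (x - offset), so the sample values above amount to ImageNet-style normalisation. A minimal sketch, assuming that formula:

import numpy as np

scale = 0.01735207357279195                    # net-scale-factor, roughly 1/57.63
offsets = np.array([123.675, 116.28, 103.53])  # ImageNet per-channel means (RGB)

def nvinfer_preprocess(pixel_rgb):
    # y = net-scale-factor * (x - offset) per channel; with these values
    # this is mean subtraction followed by division by ~57.63 (a std value)
    return scale * (np.asarray(pixel_rgb, dtype=np.float32) - offsets)

print(nvinfer_preprocess([128, 128, 128]))     # roughly [0.08, 0.20, 0.42]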
It has some more information here on creating the infer config file.
There is also an example here of the infer config file; however, I could not find the label file for DS 6.2, but I think it was a straightforward list of classes, as attached.
After reading all these resources I made the file below.
My infer config:
[property]
gpu-id=0
net-scale-factor=0.007843
model-color-format=0
offsets=127.5;127.5;127.5
labelfile-path=segformer_labels.txt
model-engine-file=../1/model-segformer-max-input-batch-size-1-no_extra_configs.plan
infer-dims=3;224;224
batch-size=1
network-mode=2
num-detected-classes=19
segmentation-output-order=1
interval=0
gie-unique-id=1
cluster-mode=2
network-type=100
output-tensor-meta=1
[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
#detected-max-w=0
#detected-max-h=0
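(For these values: 0.007843 is approximately 1/127.5, so with offsets of 127.5;127.5;127.5 the y = net-scale-factor * (x - offset) formula above works out to y = (x - 127.5)/127.5, i.e. pixels scaled to roughly [-1, 1], which is what I intended.)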
I then slightly modified/simplified the pad probe to work with a batch size of 1:
import sys
import cv2
import numpy as np
import pyds
from gi.repository import Gst

def seg_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        sys.stderr.write("unable to get pgie src pad buffer\n")
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    # Because our batch size is 1, we only have one frame.
    l_frame = batch_meta.frame_meta_list
    frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
    frame_number = frame_meta.frame_num
    l_user = frame_meta.frame_user_meta_list
    while l_user is not None:
        try:
            # Note that l_user.data needs a cast to pyds.NvDsUserMeta.
            # The casting is done by pyds.NvDsUserMeta.cast().
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            seg_user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        except StopIteration:
            print("Error casting user meta data")
            break
        # print(f"seg_user_meta.base_meta.meta_type: {seg_user_meta.base_meta.meta_type}")
        # if seg_user_meta and seg_user_meta.base_meta.meta_type == pyds.NVDSINFER_TENSOR_OUTPUT_META:
        if seg_user_meta and seg_user_meta.base_meta.meta_type == pyds.NVDSINFER_SEGMENTATION_META:
            try:
                # Note that seg_user_meta.user_meta_data needs a cast to
                # pyds.NvDsInferSegmentationMeta, done by
                # pyds.NvDsInferSegmentationMeta.cast(); as above, ownership
                # of the underlying memory stays in the C code.
                segmeta = pyds.NvDsInferSegmentationMeta.cast(seg_user_meta.user_meta_data)
            except StopIteration:
                break
            print("Segmentation meta data found for frame %d" % frame_number)
            # Retrieve mask data in numpy format from segmeta.
            # Note that pyds.get_segmentation_masks() expects an object of
            # type NvDsInferSegmentationMeta.
            masks = pyds.get_segmentation_masks(segmeta)
            masks = np.array(masks, copy=True, order='C')
            # Map the class-ID mask to per-class display colours.
            frame_image = map_mask_as_display_bgr(masks)
            # folder_name is a module-level output directory set elsewhere.
            cv2.imwrite(folder_name + "/" + str(frame_number) + ".jpg", frame_image)
        try:
            l_user = l_user.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
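(map_mask_as_display_bgr is essentially the helper from the deepstream-segmentation sample: it paints one BGR colour per class ID in the mask. A rough sketch, with a hypothetical generated palette instead of the sample's hard-coded one:)

COLORS = [[(37 * i) % 256, (97 * i) % 256, (157 * i) % 256] for i in range(19)]

def map_mask_as_display_bgr(mask):
    # Assign a colour to every class ID present in the mask (BGR for OpenCV).
    bgr = np.zeros((mask.shape[0], mask.shape[1], 3), dtype=np.uint8)
    for idx in np.unique(mask):
        bgr[mask == idx] = COLORS[int(idx)]
    return bgr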
I then linked my pipeline, e.g.:
source > convert > mux > nvinfer > nvsegvisual > display and file sinks (in addition to the folder where the pad probe saves images).
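In gst-launch terms the topology is roughly as follows (a sketch only; the file name, mux dimensions, config path, and sink are placeholders, and the real app creates and links the elements individually):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# File-sink branch omitted for brevity; a tee after nvsegvisual feeds it.
pipeline = Gst.parse_launch(
    "filesrc location=sample.mp4 ! decodebin ! nvvideoconvert ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=my_infer_config.txt ! nvsegvisual ! "
    "nvvideoconvert ! nveglglessink"
)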
For all of the above I used the segmentation sample app as a template.
My issue is:
- I cannot see any segmentation output in my output sinks.
- On investigation, I noticed that the output meta type I am getting is NVDSINFER_TENSOR_OUTPUT_META, not NVDSINFER_SEGMENTATION_META.
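For reference, I checked the meta type with a debug print inside the same probe loop (simplified):

    print("meta type:", seg_user_meta.base_meta.meta_type)
    # in my runs this reports NVDSINFER_TENSOR_OUTPUT_META,
    # never NVDSINFER_SEGMENTATION_META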
I think this may be a config issue. The pipeline plays and I can see normal video playback; it does not show any overt errors.
Can you please help me set the config correctly, or show me where I went wrong?
Thanks,
Ganindu.