pgie_unet_tao_config.yml:
property:
gpu-id: 0
net-scale-factor: 0.007843
model-color-format: 1
offsets: 127.5;127.5;127.5
labelfile-path: unet_labels.txt
## Replace the following path with your model file
## Current DeepStream cannot parse the onnx/etlt model directly, so you need to
## convert the etlt model to a TensorRT engine first using tao-converter
model-engine-file: ../../models/unet/unet_resnet18.etlt_b1_gpu0_fp16.engine
tlt-encoded-model: ../../models/unet/unet_resnet18.etlt
tlt-model-key: tlt_encode
infer-dims: 3;320;320
batch-size: 1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode: 1
num-detected-classes: 3
interval: 0
gie-unique-id: 1
network-type: 2
output-blob-names: softmax_1
segmentation-threshold: 0.0
##specify the output tensor order, 0(default value) for CHW and 1 for HWC
segmentation-output-order: 1
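The comment above mentions converting the etlt model to a TensorRT engine first. A minimal sketch of that tao-converter call, assuming the key, dims, and fp16 mode from this config; the input blob name input_1 and the min/opt/max shapes passed to -p are assumptions:

# assumed: input blob is input_1, static batch 1 (min/opt/max shapes identical)
./tao-converter -k tlt_encode -t fp16 \
    -p input_1,1x3x320x320,1x3x320x320,1x3x320x320 \
    -e ../../models/unet/unet_resnet18.etlt_b1_gpu0_fp16.engine \
    ../../models/unet/unet_resnet18.etlt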
@pesuyn444 As with any Linux executable, you need to run “./tao-converter -h” from the directory containing the binary, or add that directory to the system PATH. Please do not raise such issues in this forum; this is the DeepStream forum.
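For example (the install directory below is a placeholder, not the actual location on your system):

# run the binary from its own directory:
cd /path/to/tao-converter && ./tao-converter -h
# or put its directory on PATH so it can be run from anywhere:
export PATH=$PATH:/path/to/tao-converter
tao-converter -h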
And please make sure you are familiar with GStreamer (GStreamer: open source multimedia framework) before you start with DeepStream, since you want to do some customization work.
I ran the peopleSemSegNetVanilla example in deepstream_tao_apps on my mp4 video with the command:
./apps/tao_segmentation/ds-tao-segmentation -c configs/peopleSemSegNet_tao/vanilla/pgie_peopleSemSegVanillaUnet_tao_config.txt -i file:///media/anlab/data/DeepStream/Sample/samples/streams/sample_720p.mp4
My file pgie_peopleSemSegVanillaUnet_tao_config.txt:
[primary-gie]
enable=1
gpu-id=0
#Modify as necessary
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
#Replace the infer primary config file when you need to
#use other detection models
#model-engine-file=../../models/tao_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine
config-file=config_infer_primary_peopleSegNet.txt
[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
#set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
[tracker]
enable=1
#For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
#ll-config-file required to set different tracker types
#ll-config-file=../deepstream-app/config_tracker_IOU.yml
ll-config-file=../deepstream-app/config_tracker_NvDCF_perf.yml
#ll-config-file=../deepstream-app/config_tracker_NvDCF_accuracy.yml
#ll-config-file=../deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1
[tests]
file-loop=0
File config_infer_primary_peopleSegNet.txt:
[property]
gpu-id=0
net-scale-factor=0.017507
offsets=123.675;116.280;103.53
model-color-format=1
tlt-model-key=nvidia_tlt
tlt-encoded-model=/media/anlab/data/deepstream_tao_apps/models/peopleSegNet/V2/peoplesegnet_resnet50.etlt
model-engine-file=/media/anlab/data/deepstream_tao_apps/models/peopleSegNet/V2/peoplesegnet_resnet50.etlt_b1_gpu0_int8.engine
## 3 is for instance segmentation network
network-type=3
labelfile-path=./peopleSegNet_labels.txt
int8-calib-file=/media/anlab/data/deepstream_tao_apps/models/peopleSegNet/V2/peoplesegnet_resnet50_int8.txt
infer-dims=3;576;960
num-detected-classes=2
uff-input-blob-name=Input
batch-size=1
##0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
interval=0
gie-unique-id=1
#no cluster
##1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
##MRCNN supports only cluster-mode=4; Clustering is done by the model itself
cluster-mode=4
output-instance-mask=1
output-blob-names=generate_detections;mask_fcn_logits/BiasAdd
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLTV2
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so
I want to change the model from peoplesegnet_resnet50.etlt to peoplesemsegnet_vanilla_unet_dynamic_etlt_int8_fp16.etlt. So I copied the whole config from file pgie_peopleSemSegVanillaUnet_tao_config.txt and pasted it into file config_infer_primary_peopleSegNet.txt, then ran deepstream_app_source1_segmentation.txt. It runs successfully, but there is no segmentation:
The peoplesegnet is an instance segmentation model (CTSE-AI_Computing / DeepStream / deepstream_tao_apps · GitLab (nvidia.com)), so you cannot use your semantic segmentation model in place of the peoplesegnet model. Just refer to the v4l2 camera source part in the deepstream-app source code and modify your code according to it.
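For comparison, a semantic segmentation nvinfer config differs from the instance segmentation one above in a few [property] keys. A minimal sketch mirroring the UNet config at the top of this thread; the argmax_1 blob name and the class count are assumptions for the peoplesemsegnet model:

[property]
## 2 is for semantic segmentation network (3 is for instance segmentation)
network-type=2
## a single class-map blob, not detection/mask blobs; the name depends on the model
output-blob-names=argmax_1
segmentation-threshold=0.0
num-detected-classes=2
## parse-bbox-instance-mask-func-name, output-instance-mask, custom-lib-path
## and cluster-mode=4 from the MRCNN config are not needed here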
Please make sure you are familiar with GStreamer (GStreamer: open source multimedia framework) before you start with DeepStream, since you want to do some customization work.
Does this mean I need to change the mp4 input file in the command ./apps/tao_segmentation/ds-tao-segmentation -c configs/peopleSemSegNet_tao/vanilla/pgie_peopleSemSegVanillaUnet_tao_config.txt -i file:///media/anlab/data/DeepStream/Sample/samples/streams/sample_720p.mp4 to a webcam?
Please read the source code deepstream_tao_apps/deepstream_seg_app.c at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub. This app only supports a uridecodebin source, so you need to change the code to use a v4l2src source. For sample v4l2src code, refer to the create_camera_source_bin() function in /opt/nvidia/deepstream/deepstream/sources/apps/apps-common/src/deepstream_source_bin.c
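A minimal sketch of such a v4l2src source bin, loosely following the create_camera_source_bin() pattern; the element choices, caps string, and function name here are illustrative assumptions, not the exact SDK code:

#include <gst/gst.h>

/* Build a bin: v4l2src -> nvvideoconvert -> capsfilter (NVMM), with the
 * capsfilter src pad exposed as the bin's ghost "src" pad. */
static GstElement *
create_v4l2_source_bin (const gchar *device)
{
  GstElement *bin, *src, *conv, *filter;
  GstCaps *caps;
  GstPad *pad;

  bin = gst_bin_new ("v4l2-source-bin");
  src = gst_element_factory_make ("v4l2src", "usb-camera");
  conv = gst_element_factory_make ("nvvideoconvert", "src-conv");
  filter = gst_element_factory_make ("capsfilter", "src-caps");
  if (!bin || !src || !conv || !filter)
    return NULL;

  g_object_set (G_OBJECT (src), "device", device, NULL);

  /* Request NVMM memory so downstream nvstreammux can consume the buffers.
   * On some platforms an extra videoconvert between v4l2src and
   * nvvideoconvert may be needed, as in the SDK sample. */
  caps = gst_caps_from_string
      ("video/x-raw(memory:NVMM), format=NV12, width=1280, height=720");
  g_object_set (G_OBJECT (filter), "caps", caps, NULL);
  gst_caps_unref (caps);

  gst_bin_add_many (GST_BIN (bin), src, conv, filter, NULL);
  if (!gst_element_link_many (src, conv, filter, NULL))
    return NULL;

  pad = gst_element_get_static_pad (filter, "src");
  gst_element_add_pad (bin, gst_ghost_pad_new ("src", pad));
  gst_object_unref (pad);
  return bin;
}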
DeepStream is an SDK; we have provided lots of sample code to show how to use the DeepStream APIs.
Please make sure you are familiar with GStreamer (GStreamer: open source multimedia framework) before you start with DeepStream, since you want to do some customization work.
I copied the create_camera_source_bin() function from /opt/nvidia/deepstream/deepstream/sources/apps/apps-common/src/deepstream_source_bin.c, pasted it into deepstream_tao_apps/apps/tao_segmentation/deepstream_seg_app.c, and edited the create_source_bin() function, but when I run “make” I get an error: