Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc): Jetson Nano
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (If you have one, please share it here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
Hi,
I have JetPack 4.6 on a Jetson Nano and want to change the model from resnet10.caffemodel to the PeopleNet model (pruned_quantized_v2.3.2), based on
[deepstream_test_3.py](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/v1.1.1/apps/deepstream-test3)
I did the following steps, but no objects are detected.
Please advise.
1. Convert resnet34_peoplenet_pruned_int8.etlt from pruned_quantized_v2.3.2 to save.engine
1.1 Download the TAO converter
wget -O jp46-20210820t231431z-001.zip https://developer.nvidia.com/jp46-20210820t231431z-001zip
mkdir jp46-20210820t231431z-001
unzip -d jp46-20210820t231431z-001 jp46-20210820t231431z-001.zip
1.2 Download the model
- Download pruned_quantized_v2.3.2.zip from the version list at https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet and unzip it
1.3 Generate engine file
desktop:~$ ./jp46-20210820t231431z-001/tao-converter-jp46-trt8.0.1.6/tao-converter -k tlt_encode -d 3,544,960 /root/models/peoplenet/peoplenet_pruned_quantized_v2.3.2/resnet34_peoplenet_pruned_int8.etlt
[INFO] [MemUsageChange] Init CUDA: CPU +203, GPU +0, now: CPU 221, GPU 2805 (MiB)
[INFO] [MemUsageSnapshot] Builder begin: CPU 239 MiB, GPU 2836 MiB
[INFO] ---------- Layers Running on DLA ----------
[INFO] ---------- Layers Running on GPU ----------
[INFO] [GpuLayer] conv1/convolution + activation_1/Relu6
[INFO] [GpuLayer] block_1a_conv_1/convolution + block_1a_relu_1/Relu6
[INFO] [GpuLayer] block_1a_conv_shortcut/convolution
[INFO] [GpuLayer] block_1a_conv_2/convolution + add_1/add + block_1a_relu/Relu6
[INFO] [GpuLayer] block_1b_conv_1/convolution + block_1b_relu_1/Relu6
[INFO] [GpuLayer] block_1b_conv_shortcut/convolution
[INFO] [GpuLayer] block_1b_conv_2/convolution + add_2/add + block_1b_relu/Relu6
[INFO] [GpuLayer] block_1c_conv_1/convolution + block_1c_relu_1/Relu6
[INFO] [GpuLayer] block_1c_conv_shortcut/convolution
[INFO] [GpuLayer] block_1c_conv_2/convolution + add_3/add + block_1c_relu/Relu6
[INFO] [GpuLayer] block_2a_conv_1/convolution + block_2a_relu_1/Relu6
[INFO] [GpuLayer] block_2a_conv_shortcut/convolution
[INFO] [GpuLayer] block_2a_conv_2/convolution + add_4/add + block_2a_relu/Relu6
[INFO] [GpuLayer] block_2b_conv_1/convolution + block_2b_relu_1/Relu6
[INFO] [GpuLayer] block_2b_conv_shortcut/convolution
[INFO] [GpuLayer] block_2b_conv_2/convolution + add_5/add + block_2b_relu/Relu6
[INFO] [GpuLayer] block_2c_conv_1/convolution + block_2c_relu_1/Relu6
[INFO] [GpuLayer] block_2c_conv_shortcut/convolution
[INFO] [GpuLayer] block_2c_conv_2/convolution + add_6/add + block_2c_relu/Relu6
[INFO] [GpuLayer] block_2d_conv_1/convolution + block_2d_relu_1/Relu6
[INFO] [GpuLayer] block_2d_conv_shortcut/convolution
[INFO] [GpuLayer] block_2d_conv_2/convolution + add_7/add + block_2d_relu/Relu6
[INFO] [GpuLayer] block_3a_conv_1/convolution + block_3a_relu_1/Relu6
[INFO] [GpuLayer] block_3a_conv_shortcut/convolution
[INFO] [GpuLayer] block_3a_conv_2/convolution + add_8/add + block_3a_relu/Relu6
[INFO] [GpuLayer] block_3b_conv_1/convolution + block_3b_relu_1/Relu6
[INFO] [GpuLayer] block_3b_conv_shortcut/convolution
[INFO] [GpuLayer] block_3b_conv_2/convolution + add_9/add + block_3b_relu/Relu6
[INFO] [GpuLayer] block_3c_conv_1/convolution + block_3c_relu_1/Relu6
[INFO] [GpuLayer] block_3c_conv_shortcut/convolution
[INFO] [GpuLayer] block_3c_conv_2/convolution + add_10/add + block_3c_relu/Relu6
[INFO] [GpuLayer] block_3d_conv_1/convolution + block_3d_relu_1/Relu6
[INFO] [GpuLayer] block_3d_conv_shortcut/convolution
[INFO] [GpuLayer] block_3d_conv_2/convolution + add_11/add + block_3d_relu/Relu6
[INFO] [GpuLayer] block_3e_conv_1/convolution + block_3e_relu_1/Relu6
[INFO] [GpuLayer] block_3e_conv_shortcut/convolution
[INFO] [GpuLayer] block_3e_conv_2/convolution + add_12/add + block_3e_relu/Relu6
[INFO] [GpuLayer] block_3f_conv_1/convolution + block_3f_relu_1/Relu6
[INFO] [GpuLayer] block_3f_conv_shortcut/convolution
[INFO] [GpuLayer] block_3f_conv_2/convolution + add_13/add + block_3f_relu/Relu6
[INFO] [GpuLayer] block_4a_conv_1/convolution + block_4a_relu_1/Relu6
[INFO] [GpuLayer] block_4a_conv_shortcut/convolution
[INFO] [GpuLayer] block_4a_conv_2/convolution + add_14/add + block_4a_relu/Relu6
[INFO] [GpuLayer] block_4b_conv_1/convolution + block_4b_relu_1/Relu6
[INFO] [GpuLayer] block_4b_conv_shortcut/convolution
[INFO] [GpuLayer] block_4b_conv_2/convolution + add_15/add + block_4b_relu/Relu6
[INFO] [GpuLayer] block_4c_conv_1/convolution + block_4c_relu_1/Relu6
[INFO] [GpuLayer] block_4c_conv_shortcut/convolution
[INFO] [GpuLayer] block_4c_conv_2/convolution + add_16/add + block_4c_relu/Relu6
[INFO] [GpuLayer] output_bbox/convolution
[INFO] [GpuLayer] output_cov/convolution
[INFO] [GpuLayer] PWN(output_cov/Sigmoid)
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU +154, now: CPU 406, GPU 2990 (MiB)
[INFO] [MemUsageChange] Init cuDNN: CPU +241, GPU +241, now: CPU 647, GPU 3231 (MiB)
[WARNING] Detected invalid timing cache, setup a local cache instead
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[INFO] Detected 1 inputs and 2 output network tensors.
[INFO] Total Host Persistent Memory: 78336
[INFO] Total Device Persistent Memory: 26554368
[INFO] Total Scratch Memory: 0
[INFO] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 9 MiB, GPU 1239 MiB
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +4, now: CPU 893, GPU 2886 (MiB)
[INFO] [MemUsageChange] Init cuDNN: CPU +0, GPU +2, now: CPU 893, GPU 2888 (MiB)
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 893, GPU 2888 (MiB)
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 892, GPU 2888 (MiB)
[INFO] [MemUsageSnapshot] Builder end: CPU 892 MiB, GPU 2888 MiB
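For reference, the tao-converter call above leaves the output nodes, precision, calibration cache, and engine path at their defaults. Below is a sketch (not a confirmed fix) of a fuller invocation built as an argument list; the -o node names (the usual detectnet_v2 outputs) and the calibration-cache file name are assumptions that should be checked against the files in the unzipped model directory.

```python
# Sketch of a fuller tao-converter invocation; -k and -d are taken from the
# command above, the remaining flags are standard tao-converter options.
import subprocess

MODEL_DIR = "/root/models/peoplenet/peoplenet_pruned_quantized_v2.3.2"
cmd = [
    "./jp46-20210820t231431z-001/tao-converter-jp46-trt8.0.1.6/tao-converter",
    "-k", "tlt_encode",
    "-d", "3,544,960",
    "-o", "output_cov/Sigmoid,output_bbox/BiasAdd",  # usual detectnet_v2 output nodes (assumption)
    "-t", "int8",                                    # build an INT8 engine
    "-c", f"{MODEL_DIR}/resnet34_peoplenet_pruned_int8.txt",  # calibration cache (name assumed from this post)
    "-e", f"{MODEL_DIR}/save.engine",                # explicit engine output path
    f"{MODEL_DIR}/resnet34_peoplenet_pruned_int8.etlt",
]
print(" ".join(cmd))
# On the Jetson: subprocess.run(cmd, check=True)
```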
1.4 Save “save.engine”, “labels.txt”, and “resnet34_peoplenet_pruned_int8.txt” from “pruned_quantized_v2.3.2” to /root/models
2. Change the contents of dstest3_pgie_config.txt to the following:
[property]
gpu-id=0
net-scale-factor=1.0
model-engine-file=/root/models/peoplenet_pruned_quantized_v2.3.2/resnet34_peoplenet_pruned_int8.engine
labelfile-path=/root/models/peoplenet_pruned_quantized_v2.3.2/labels.txt
int8-calib-file=/root/models/peoplenet_pruned_quantized_v2.3.2/resnet34_peoplenet_int8.txt
batch-size=1
network-mode=1
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=output_bbox/Bias;output_cov/Sigmoid
cluster-mode=1
model-color-format=0
[class-attrs-all]
pre-cluster-threshold=0.5
topk=20
nms-iou-threshold=0.5
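As a quick sanity check (separate from the app itself), the config above can be parsed with configparser; the comments flag the fields that most often differ from NVIDIA's sample PeopleNet config — treat them as things to verify, not confirmed fixes.

```python
# Parse an inline copy of the [property] section above; on the Jetson, replace
# read_string with cfg.read("dstest3_pgie_config.txt").
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""\
[property]
net-scale-factor=1.0
model-engine-file=/root/models/peoplenet_pruned_quantized_v2.3.2/resnet34_peoplenet_pruned_int8.engine
int8-calib-file=/root/models/peoplenet_pruned_quantized_v2.3.2/resnet34_peoplenet_int8.txt
output-blob-names=output_bbox/Bias;output_cov/Sigmoid
network-mode=1
""")
prop = cfg["property"]

# Worth double-checking against the conversion step:
# - model-engine-file: step 1.3 produced "save.engine"; the name here must
#   match the file actually copied to /root/models.
# - output-blob-names: NVIDIA's sample PeopleNet configs use
#   "output_cov/Sigmoid;output_bbox/BiasAdd" (note BiasAdd vs Bias).
# - net-scale-factor: the sample PeopleNet configs use 0.0039215697906911373.
for key in ("model-engine-file", "int8-calib-file",
            "output-blob-names", "net-scale-factor"):
    print(key, "=", prop[key])
```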
- Run deepstream_test_3.py with the connected camera pointed at a person, but no object is detected.
- The print at line 101 executes, but “Number of Objects” never becomes 1.
- I have not modified deepstream_test_3.py for PeopleNet, but I expect “Number of Objects” to be 1:
print("Frame Number=", frame_number, "Number of Objects=", num_rects, "Vehicle_count=", obj_counter[PGIE_CLASS_ID_VEHICLE], "Person_count=", obj_counter[PGIE_CLASS_ID_PERSON])
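Two separate things can go wrong at that print, sketched below in plain Python (the pyds metadata iteration is replaced by a simple list of class ids). num_rects counts every detection the probe sees, so “Number of Objects=0” means no detections reached the probe at all. Separately, the stock constants assume the resnet10 four-class model; if PeopleNet's labels.txt is ordered person, bag, face (an assumption to verify against the downloaded labels.txt), a detected person arrives with class id 0 and is counted as Vehicle_count, not Person_count.

```python
# Plain-Python sketch of the counting done around line 101 of
# deepstream_test_3.py, using the stock class-id constants from the sample app.
PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3

def count_objects(class_ids):
    """Mimic the probe: num_rects counts every detection, obj_counter buckets by class id."""
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE: 0,
        PGIE_CLASS_ID_BICYCLE: 0,
        PGIE_CLASS_ID_PERSON: 0,
        PGIE_CLASS_ID_ROADSIGN: 0,
    }
    num_rects = 0
    for cid in class_ids:
        num_rects += 1
        obj_counter[cid] += 1
    return num_rects, obj_counter

# A single PeopleNet "person" detection (class id 0 under the assumed label
# order) would be reported as a vehicle by the stock constants:
num_rects, obj_counter = count_objects([0])
print("Number of Objects=", num_rects,
      "Vehicle_count=", obj_counter[PGIE_CLASS_ID_VEHICLE],
      "Person_count=", obj_counter[PGIE_CLASS_ID_PERSON])
# → Number of Objects= 1 Vehicle_count= 1 Person_count= 0
```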
