The .mp4 output file created by the app drops a lot of frames, so the resulting video looks like a slideshow. I followed the instructions in this blog. The trt_pose repo says the Nano should be able to run this model at 12-22 FPS.
Am I missing a step that is important for speeding up inference?
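For reference, this is roughly how I confirmed that the output is dropping frames (a quick OpenCV sketch; the output filename is just an example, and I'm assuming the sample clip plays at about 30 FPS):
# Quick sanity check on the app's output video (sketch; filename is an example).
import cv2

cap = cv2.VideoCapture("Pose_Estimation.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)                  # container-reported frame rate
frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)  # total frames actually written
cap.release()

print(f"reported FPS: {fps:.1f}, frames written: {int(frame_count)}")
# With the ~30 FPS sample_720p.h264 as input, far fewer frames than
# (clip duration x 30) means frames are being dropped somewhere in the pipeline.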
I’ve been using the ONNX file provided in the repo (pose_estimation.onnx) because using my own converted weights causes the app to get stuck while generating the engine file (my conversion is sketched after the build log below):
Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
----------------------------------------------------------------
Input filename: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/models/resnet18_baseline_att_224x224_A_epoch_249.onnx
ONNX IR version: 0.0.6
Opset version: 9
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
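For context, my conversion was roughly the standard trt_pose export, sketched below. This is from memory, so the output tensor names and some export arguments are assumptions and may not match what the sample app expects:
# Sketch of the trt_pose checkpoint -> ONNX export I attempted.
# human_pose.json ships with trt_pose; the output names 'cmap'/'paf' are my assumption.
import json
import torch
import trt_pose.models

with open("human_pose.json", "r") as f:
    human_pose = json.load(f)

num_parts = len(human_pose["keypoints"])
num_links = len(human_pose["skeleton"])

# Same architecture as the checkpoint, weights loaded on the GPU.
model = trt_pose.models.resnet18_baseline_att(num_parts, 2 * num_links).cuda().eval()
model.load_state_dict(torch.load("resnet18_baseline_att_224x224_A_epoch_249.pth"))

# 224x224 dummy input, opset 9 (matches the ONNX metadata in the log above).
dummy = torch.zeros((1, 3, 224, 224)).cuda()
torch.onnx.export(
    model,
    dummy,
    "resnet18_baseline_att_224x224_A_epoch_249.onnx",
    input_names=["input"],
    output_names=["cmap", "paf"],
    opset_version=9,
)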
Another error I run into, which may or may not be related:
Setting up nvidia-l4t-bootloader (32.4.4-20201027211359) ...
3448-300---1--jetson-nano-qspi-sd-mmcblk0p1
Starting bootloader post-install procedure.
ERROR. Procedure for bootloader update FAILED.
Cannot install package. Exiting...
dpkg: error processing package nvidia-l4t-bootloader (--configure):
installed nvidia-l4t-bootloader package post-installation script subprocess returned error exit status 1
This happens every time I try to install something via apt.
For reference, I can run deepstream-app samples just fine.
• Hardware Platform (Jetson / GPU)
Jetson Nano Developer Kit 4GB
• DeepStream Version
5.0.1
• JetPack Version (valid for Jetson only)
4.4.1, L4T 32.4.4
• TensorRT Version
7.1.3.0
• Issue Type (questions, new requirements, bugs)
Bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Sample app: DeepStream Human Pose Estimation
Config:
[property]
gpu-id=0
net-scale-factor=0.0174292
offsets=123.675;116.28;103.53
onnx-file=pose_estimation.onnx
labelfile-path=labels.txt
batch-size=1
process-mode=1
model-color-format=0
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine
network-type=100
Command line used:
sudo ./deepstream-pose-estimation-app /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 .