Jetson Nano 4GB
DeepStream Version 5.0
JetPack Version 4.5.1
TensorRT Version 7.1.3
I have been trying to get the pose estimation example (Link) to run on a Jetson Nano 4GB, but on startup I never get further than the 'Killed' message in the log below. I've tried this with and without the engine file in the configuration file, but it makes no difference: the process is killed at the same point in the flow.
sudo ./deepstream-pose-estimation-app WIN_20210302_00_32_30_Pro.mp4 test
Now playing: WIN_20210302_00_32_30_Pro.mp4
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine open error
0:00:07.053459871 25167 0x55b2212b00 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed
0:00:07.053565655 25167 0x55b2212b00 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:07.053599041 25167 0x55b2212b00 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
----------------------------------------------------------------
Input filename: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx
ONNX IR version: 0.0.6
Opset version: 9
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
Killed
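My guess (and it is only a guess) is that the kernel's OOM killer is terminating the process while TensorRT builds the engine, since the Nano's 4 GB is shared between CPU and GPU. These are the commands I would use to confirm that, assuming standard kernel logging:

# Look for OOM-killer entries in the kernel log right after the app dies
dmesg | grep -i -E 'killed process|out of memory'

# Watch memory usage from a second terminal while the app starts up
free -m
sudo tegrastats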
This is the content of the config file (with the engine-file line commented out here):
[property]
gpu-id=0
net-scale-factor=0.0174292
offsets=123.675;116.28;103.53
##onnx-file=resnet18_baseline_att_224x224_A_epoch_249.onnx
onnx-file=pose_estimation.onnx
labelfile-path=labels.txt
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
## model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine
network-type=100
workspace-size=3000
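One workaround I'm considering (not sure it's the right approach): building the engine offline with trtexec, so the app only has to deserialize it instead of building it in-process. I also notice that workspace-size=3000 requests roughly 3 GB of the board's 4 GB, which may itself be part of the problem, though that's speculation on my part. Something like this, with paths relative to the app directory:

# Build the FP16 engine ahead of time (TensorRT 7.x trtexec; --workspace is in MB)
/usr/src/tensorrt/bin/trtexec \
  --onnx=pose_estimation.onnx \
  --fp16 \
  --workspace=2048 \
  --saveEngine=pose_estimation.onnx_b1_gpu0_fp16.engine

and then re-enabling the model-engine-file line in the config. Would that avoid the memory spike, or would the trtexec build fail for the same reason?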
I've seen other posts where people share their .onnx files, but when I tried those the system complained about them containing INT64 weights, and they didn't work for me either.
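The other thing I plan to try, based on suggestions I've seen for engine-build failures on the Nano (so treat this as an assumption, not a confirmed fix), is adding a swap file before the build:

# Create and enable a 4 GB swap file to give the engine build more headroom
sudo fallocate -l 4G /var/swapfile
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile

Any help would be appreciated.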