Can deepstream-bodypose2d-app run on Jetson Xavier?


The sample has been tested on Jetson before release.
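If you want to confirm that your Jetson environment matches the versions listed in the repo's README, a quick check with standard DeepStream/JetPack commands (the exact supported versions depend on the release branch you are using):

deepstream-app --version-all     # prints the DeepStream SDK and dependency versions
cat /etc/nv_tegra_release        # prints the JetPack / L4T release on the Jetson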

Where is the model for deepstream-bodypose2d-app?

./deepstream-bodypose2d-app 2 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt 0 0 file:///usr/data/bodypose2d_test.png ./body2dout
Request sink_0 pad from streammux
joint Edges 1 , 8
joint Edges 8 , 9
joint Edges 9 , 10
joint Edges 1 , 11
joint Edges 11 , 12
joint Edges 12 , 13
joint Edges 1 , 2
joint Edges 2 , 3
joint Edges 3 , 4
joint Edges 2 , 16
joint Edges 1 , 5
joint Edges 5 , 6
joint Edges 6 , 7
joint Edges 5 , 17
joint Edges 1 , 0
joint Edges 0 , 14
joint Edges 0 , 15
joint Edges 14 , 16
joint Edges 15 , 17
connections 0 , 1
connections 1 , 2
connections 1 , 5
connections 2 , 3
connections 3 , 4
connections 5 , 6
connections 6 , 7
connections 2 , 8
connections 8 , 9
connections 9 , 10
connections 5 , 11
connections 11 , 12
connections 12 , 13
connections 0 , 14
connections 14 , 16
connections 8 , 11
connections 15 , 17
connections 0 , 15
Now playing: file:///usr/data/bodypose2d_test.png
WARNING: Deserialize engine failed because file path: /home/y/taodeepstreampose2d/deepstream_tao_apps/configs/bodypose2d_tao/../../models/bodypose2d/model.etlt_b32_gpu0_fp16.engine open error
0:00:03.747287150  4573 0xaaab1446f2c0 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/home/y/taodeepstreampose2d/deepstream_tao_apps/configs/bodypose2d_tao/../../models/bodypose2d/model.etlt_b32_gpu0_fp16.engine failed
0:00:03.879320475  4573 0xaaab1446f2c0 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/home/y/taodeepstreampose2d/deepstream_tao_apps/configs/bodypose2d_tao/../../models/bodypose2d/model.etlt_b32_gpu0_fp16.engine failed, try rebuild
0:00:03.879369722  4573 0xaaab1446f2c0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /home/y/taodeepstreampose2d/deepstream_tao_apps/configs/bodypose2d_tao/../../models/bodypose2d/model.etlt
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:05.972108973  4573 0xaaab1446f2c0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::61] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
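The log above shows two related problems: deserializing the prebuilt engine fails (only a warning, since nvinfer then tries to rebuild it), but the rebuild also fails because the TAO-encoded model file model.etlt is missing from models/bodypose2d. A minimal diagnostic sketch, with paths taken from the log and the command above (adjust them to your checkout):

# does the model directory actually contain model.etlt?
ls -l /home/y/taodeepstreampose2d/deepstream_tao_apps/models/bodypose2d/
# which model paths does the nvinfer config point to?
grep -E "tlt-encoded-model|model-engine-file" \
    ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt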


There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

The README tells you where to get the model: deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app at master · NVIDIA-AI-IOT/deepstream_tao_apps (github.com)

There are also detailed steps for downloading the models, building, and running the app: deepstream_tao_apps/apps/tao_others at master · NVIDIA-AI-IOT/deepstream_tao_apps (github.com)

Please read the documentation.
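For reference, a minimal sketch of the download/build/run flow that README describes (the download_models.sh script and the CUDA_VER value are assumptions based on the master branch; check the README of your branch for the exact steps):

git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps
./download_models.sh                     # fetches the .etlt models into ./models/
export CUDA_VER=11.4                     # set to the CUDA version of your JetPack
cd apps/tao_others/deepstream-bodypose2d-app
make
# run against a local image, as in the command shown earlier in this thread
./deepstream-bodypose2d-app 2 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt \
    0 0 file:///usr/data/bodypose2d_test.png ./body2dout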

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.