TX2 vs AGX for OpenPose Inference

I was wondering if anyone has experience performing inference on video data using OpenPose on the Jetson modules. The plan is:

1. Record video simultaneously from 4 USB3 cameras (640x480 at a minimum of 60 FPS)
2. Encode the video (H.264 or H.265); a capture-and-encode sketch follows this list
3. Perform inference on each of the videos
4. Save the OpenPose inference output for each video for later use/visualization
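
For steps 1 and 2, here is a minimal single-camera sketch using OpenCV's GStreamer backend and the Jetson hardware H.264 encoder. The device path, caps, and element names are assumptions (nvv4l2h264enc on recent JetPack releases, omxh264enc on older ones) and will vary with the camera and JetPack version; four cameras would mean four of these pipelines, one per /dev/videoN, ideally spread across USB3 controllers for bandwidth.

```python
# Minimal sketch: capture one USB camera and hardware-encode to H.264 on Jetson.
# Assumptions: OpenCV built with GStreamer support; a JetPack release providing
# nvv4l2h264enc (older releases use omxh264enc instead); a camera at /dev/video0
# that offers raw 640x480 @ 60 FPS (if it only offers MJPEG at that rate, add
# image/jpeg caps and jpegdec after v4l2src).
import cv2

CAPTURE = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=640,height=480,framerate=60/1 ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink"
)
ENCODE = (
    "appsrc ! video/x-raw,format=BGR ! videoconvert ! "
    "video/x-raw,format=BGRx ! nvvidconv ! "
    "nvv4l2h264enc ! h264parse ! qtmux ! "
    "filesink location=cam0.mp4"
)

cap = cv2.VideoCapture(CAPTURE, cv2.CAP_GSTREAMER)
out = cv2.VideoWriter(ENCODE, cv2.CAP_GSTREAMER, 0, 60.0, (640, 480))

# Record ~10 s at 60 FPS; replace with your own stop condition.
for _ in range(600):
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)

cap.release()
out.release()
```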

Would it be beneficial here to use the AGX for its improved computing power, or is the TX2 likely to handle this load?

Hi,

Sorry, we don't have an experiment matching your exact use case.
Here are some related profiling results for your reference:

1. Multi-stream: https://developer.nvidia.com/deepstream-sdk

NVIDIA Product       H.264 (streams)  H.265 (streams)
Jetson TX2           14               14
Jetson AGX Xavier    32               49

2. OpenPose

NVIDIA Product       OpenPose (FPS)
Nano                 8.1
TX1                  12.3
TX2                  16.5
Xavier               30

Based on the OpenPose results, Xavier is recommended over the TX2.
Thanks.

Are these results for the full OpenPose architecture, or a scaled-down/mobile version?

@cmcgu019, @AastaLLL

I did not manage to get such a high FPS on the Jetson Nano. I used the original OpenPose from https://github.com/CMU-Perceptual-Computing-Lab/openpose and installed it using CMake.
No other options were altered.

I ran the examples in openpose/build/examples/ on a folder of images, setting --net_resolution 608x464, and got about 0.5 FPS. :(

Could you please share any steps/optimizations you used to get 8 FPS on the Jetson Nano?
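
For a like-for-like comparison, here is a minimal FPS-measurement sketch using the OpenPose Python API; it assumes OpenPose was built with BUILD_PYTHON=ON so that pyopenpose is importable, and the model folder, image folder, and 608x464 resolution are placeholders to adjust. This is only a sketch of how one might time inference, not the method behind the table above. On Jetson boards, maximizing clocks (sudo nvpmodel -m 0 followed by sudo jetson_clocks) and lowering --net_resolution are the usual first optimizations to try.

```python
# Minimal sketch: measure OpenPose throughput on a folder of images and save
# the keypoints per frame. Paths and net_resolution are placeholder values.
import glob
import json
import time

import cv2
import pyopenpose as op  # requires OpenPose built with BUILD_PYTHON=ON

params = {
    "model_folder": "openpose/models/",  # adjust to your OpenPose checkout
    "net_resolution": "608x464",         # smaller values trade accuracy for speed
}

wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

paths = sorted(glob.glob("images/*.jpg"))
start = time.time()
for i, path in enumerate(paths):
    datum = op.Datum()
    datum.cvInputData = cv2.imread(path)
    # Older OpenPose releases take a plain list here instead of op.VectorDatum.
    wrapper.emplaceAndPop(op.VectorDatum([datum]))
    keypoints = datum.poseKeypoints  # None when no person is detected
    with open(f"keypoints_{i:05d}.json", "w") as f:
        json.dump([] if keypoints is None else keypoints.tolist(), f)

elapsed = time.time() - start
print(f"{len(paths) / elapsed:.2f} FPS over {len(paths)} images")
```

Note that this timing includes image loading and JSON writing; loading frames ahead of time would give a cleaner inference-only number.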

Hi seeyeetan,

Please open a new topic for your issue in the Jetson Nano forum: https://devtalk.nvidia.com/default/board/371/jetson-nano/