Next, we ran the deepstream_pose_estimation_app inside the container using the command:
./deepstream-pose-estimation-app ./images/input.gif ./images/
We used the same input file that was available in the git repository.
The first time we ran it, it took around 3-4 minutes and then failed with an error.
We have now run the deepstream_pose_estimation_app successfully, and it generated the output in the images directory as a Pose_Estimation.mp4 file.
However, we are not able to play this output file inside the container, and we are also not able to play the sample_720p.h264 file there.
Can you please let us know how to play these files inside the container?
Also, since pose_estimation needs .h264 files to process, can you please guide us on which camera we should use to generate .h264 video streams?
I believe such a camera will have an IP address, which we can then use to run pose estimation on the streaming video.
Kindly help with these queries.
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Do you mean you can't play that Pose_Estimation.mp4? Can you play it with the VLC player on Windows? The content of sample_720p.h264 is the same as /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4.
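If you do want to preview the files inside the container, one option is a plain gst-launch-1.0 pipeline. This is only a sketch: it assumes the container was started with X11 forwarding (DISPLAY set and /tmp/.X11-unix mounted, which is not part of the default docker run command) so that nveglglessink can open a window, and that Pose_Estimation.mp4 is an H.264 stream in an MP4 container, as its name suggests:

gst-launch-1.0 filesrc location=./images/Pose_Estimation.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvideoconvert ! nveglglessink

The raw sample_720p.h264 elementary stream has no container, so the qtdemux stage is dropped:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! nvvideoconvert ! nveglglessink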
deepstream_pose_estimation is only a sample. If you want to use MP4 or RTSP input, you can modify the code; please refer to /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test3/deepstream_test3_app.c, which uses nvurisrcbin or uridecodebin to play MP4 or RTSP.
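To illustrate the uridecodebin pattern that deepstream-test3 uses, here is a minimal, self-contained sketch. It is not NVIDIA's code and not the pose-estimation pipeline itself: the file name is hypothetical, fakesink stands in for the inference and rendering elements, and wiring the decoded pad into nvstreammux as test3 does is omitted. The same source element accepts a local MP4 (file:///...) or an RTSP camera URI (rtsp://<camera-ip>/<stream>).

/* uri_source_sketch.c — a hypothetical, minimal example.
   Build: gcc uri_source_sketch.c -o uri_source_sketch \
          $(pkg-config --cflags --libs gstreamer-1.0)   */
#include <gst/gst.h>

/* uridecodebin creates its decoded source pads at runtime, so we link
   them to the downstream element from this "pad-added" callback — the
   same pattern as cb_newpad() in deepstream_test3_app.c. */
static void on_pad_added (GstElement *decodebin, GstPad *pad, gpointer data)
{
  GstPad *sinkpad = gst_element_get_static_pad (GST_ELEMENT (data), "sink");
  if (!gst_pad_is_linked (sinkpad))
    gst_pad_link (pad, sinkpad);
  gst_object_unref (sinkpad);
}

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  if (argc != 2) {
    g_printerr ("Usage: %s file:///path/video.mp4 | rtsp://<camera-ip>/<stream>\n",
        argv[0]);
    return -1;
  }

  GstElement *pipeline = gst_pipeline_new ("uri-demo");
  GstElement *source   = gst_element_factory_make ("uridecodebin", "source");
  GstElement *sink     = gst_element_factory_make ("fakesink", "sink");

  /* Only expose decoded video; ignoring audio means no pad is left
     unlinked when the MP4 also carries an audio track. */
  GstCaps *vcaps = gst_caps_from_string ("video/x-raw(ANY)");
  g_object_set (source, "uri", argv[1], "caps", vcaps,
      "expose-all-streams", FALSE, NULL);
  gst_caps_unref (vcaps);

  gst_bin_add_many (GST_BIN (pipeline), source, sink, NULL);
  g_signal_connect (source, "pad-added", G_CALLBACK (on_pad_added), sink);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Block until end-of-stream or an error, then clean up. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}

Applied to the pose-estimation app, the change would be to replace its file-based source elements with this uridecodebin (or nvurisrcbin) front end while keeping the rest of its pipeline intact.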