Please provide complete information as applicable to your setup.
• Hardware Platform (GTX 1660)
• DeepStream Version 5.0
• TensorRT Version 188.8.131.52
• NVIDIA GPU Driver Version 440.33.01
• Issue Type: question
I was trying to test the pose estimation app (GitHub - NVIDIA-AI-IOT/deepstream_pose_estimation: This is a sample DeepStream application to demonstrate a human pose estimation pipeline.) on a desktop PC running Ubuntu 18.04, but it fails every time because of the H.264 parser, which gives the following error:
h264-parser: No valid frames found before end of stream
I tried videos encoded with an H.264 encoder as well as the videos shipped with the DeepStream SDK samples, all with the same outcome.
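For reference, this is roughly the decode front-end I am feeding (a minimal sketch using gst-launch and the default DeepStream 5.0 sample stream path; the `fakesink` at the end is just to isolate the parser/decoder from the rest of the app):

```shell
# Minimal repro sketch of the decode front-end that fails with
# "No valid frames found before end of stream".
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 ! \
    h264parse ! nvv4l2decoder ! fakesink
```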
The real problem I am facing is not using the app with video streams as a source, but with images instead. My question is: can either model (densenet121 or resnet18) be used with images to output a skeleton, or are they made to work only with videos? And if it is possible, how can I change the pipeline to load images instead of videos?
I tried replacing the h264parse and nvv4l2decoder elements with JPEG equivalents, but the outcome was an out-of-bounds radius error.
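The replacement I tried looks roughly like this (a sketch, not my exact command; `input.jpg` is a placeholder, and I am not sure whether `nvv4l2decoder` with `mjpeg=1` or a dedicated NVIDIA JPEG decoder is the right choice on a dGPU like the GTX 1660):

```shell
# Hypothetical sketch of the JPEG variant of the source bin:
# swap h264parse/nvv4l2decoder for a JPEG parser/decoder.
gst-launch-1.0 filesrc location=input.jpg ! jpegparse ! nvv4l2decoder mjpeg=1 ! \
    nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! fakesink
```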
Any help is greatly appreciated.