For GitHub - NVIDIA-AI-IOT/trt_pose (real-time pose estimation accelerated with NVIDIA TensorRT), please refer to the Pose Estimation on Deepstream topic.