I am trying to run the pretrained MaskRCNN-based PeopleSegNet model from the NVIDIA model zoo, which can be found here: PeopleSegNet | NVIDIA NGC.
I converted it to a TensorRT engine using the DeepStream SDK with the config file deepstream_tao_apps/pgie_peopleSegNetv2_tao_config.yml at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub, and I am using DeepStream for inference as well. However, I am running into an issue: I can create the inference context with NvDsInferContext_Create successfully, but when I try to run inference, an assert fires while DeepStream is preprocessing the input (full assertion message below). Can you help me find where I am going wrong, or suggest steps I can use to debug this?
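For reference, the config I passed follows the usual nvinfer/TAO shape; roughly this (paths and exact values below are illustrative placeholders, not my real ones — the actual values come from the linked repo config and the PeopleSegNet model card):

```yaml
property:
  gpu-id: 0
  model-color-format: 0
  tlt-model-key: nvidia_tlt
  tlt-encoded-model: ../models/peopleSegNet/peoplesegnet_resnet50.etlt          # illustrative path
  model-engine-file: ../models/peopleSegNet/peoplesegnet_resnet50.etlt_b1_gpu0_fp16.engine  # illustrative path
  batch-size: 1
  network-mode: 2            # 0=FP32, 1=INT8, 2=FP16
  num-detected-classes: 2
  network-type: 3            # 3 = instance segmentation
  output-instance-mask: 1
```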
nvdsinfer_context_impl.cpp:1615: virtual NvDsInferStatus nvdsinfer::NvDsInferContextImpl::queueInputBatch(NvDsInferContextBatchInput&): Assertion `m_Preprocessor && m_InputConsumedEvent' failed.
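My create-then-queue flow looks roughly like the sketch below. Stub types stand in for the real nvdsinfer_context.h declarations so the sketch is self-contained; the stubbed NvDsInferContext_Create signature is simplified (the real one takes more arguments), and the fields shown are illustrative, not the SDK's exact API. The point is that I check every status before queueing a batch, since the failing assert is on m_Preprocessor, which (as far as I can tell) is only set up when context initialization fully succeeds:

```cpp
// Sketch of my create-then-queue flow. Stub types stand in for the real
// nvdsinfer_context.h declarations so this compiles standalone.
#include <cstdio>

enum NvDsInferStatus { NVDSINFER_SUCCESS = 0, NVDSINFER_CONFIG_FAILED = 1 };

// Illustrative subset of the real init params.
struct NvDsInferContextInitParams {
    unsigned int maxBatchSize = 0;
};

// Stand-in for the opaque context handle; in the real implementation the
// preprocessor is built during initialization, and queueInputBatch()
// asserts on it (m_Preprocessor) -- the assert I am hitting.
struct NvDsInferContext {
    bool preprocessorReady = false;
};

// Simplified stand-in for NvDsInferContext_Create: returns a failure
// status rather than a half-initialized context when params are bad.
NvDsInferStatus NvDsInferContext_Create(NvDsInferContext **ctx,
                                        NvDsInferContextInitParams *params) {
    if (ctx == nullptr || params == nullptr || params->maxBatchSize == 0)
        return NVDSINFER_CONFIG_FAILED;
    *ctx = new NvDsInferContext{};
    (*ctx)->preprocessorReady = true;  // only set on a fully successful init
    return NVDSINFER_SUCCESS;
}

// The pattern I follow: never queue a batch unless creation reported success.
bool createAndCheck(NvDsInferContextInitParams *params, NvDsInferContext **out) {
    NvDsInferStatus status = NvDsInferContext_Create(out, params);
    if (status != NVDSINFER_SUCCESS) {
        std::fprintf(stderr, "context creation failed: %d\n", (int)status);
        return false;
    }
    return (*out)->preprocessorReady;
}
```

In my actual code NvDsInferContext_Create does return success, yet queueInputBatch still hits the assert, which is what confuses me.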
Nvidia Driver Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered