Preprocessing assertion in deepstream

An assertion is being hit during DeepStream image preprocessing while using the MaskRCNN-based PeopleSegNet model

I am trying to run the pretrained MaskRCNN-based PeopleSegNet model from the NVIDIA model zoo, which can be found here: PeopleSegNet | NVIDIA NGC.
I converted it to a TensorRT engine using this config file: deepstream_tao_apps/pgie_peopleSegNetv2_tao_config.yml at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub. I used the DeepStream SDK for the conversion and I am trying to use DeepStream for inference as well, but I am facing an issue. I create the inference context with NvDsInferContext_Create successfully, but when I try to run inference, an assertion is hit while DeepStream is preprocessing. Can you help me figure out where I am failing, or suggest any steps I can use to debug this?

nvdsinfer_context_impl.cpp:1615: virtual NvDsInferStatus nvdsinfer::NvDsInferContextImpl::queueInputBatch(NvDsInferContextBatchInput&): Assertion `m_Preprocessor && m_InputConsumedEvent' failed.

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,

This looks like a DeepStream-related issue. We will move this post to the DeepStream forum.

Thanks!

Found my mistake. After reading the documentation, I figured out I had to set the inputFromPreprocessedTensor parameter to false in NvDsInferContextInitParams, because I want the DeepStream nvdsinfer API to preprocess the image before running inference on it. I set it to 0 and now I get the output.
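For anyone hitting the same assertion, here is a minimal sketch of the fix. It assumes the DeepStream SDK headers are available; everything except the inputFromPreprocessedTensor flag is placeholder setup that the real application would fill in, and the exact NvDsInferContext_Create argument list should be checked against your SDK version:

```cpp
#include "nvdsinfer_context.h"  // DeepStream SDK header (not part of a standard toolchain)

// Sketch: initialize the context so nvdsinfer performs preprocessing itself.
NvDsInferContextInitParams initParams = {};
// ... populate engine path, network type, batch size, etc. here ...

// 0 (false): nvdsinfer creates its internal preprocessor, so
// queueInputBatch() can consume raw frames. Leaving this set to 1 means
// m_Preprocessor is never created, which triggers the
// `m_Preprocessor && m_InputConsumedEvent` assertion seen above.
initParams.inputFromPreprocessedTensor = 0;

NvDsInferContextHandle ctx = nullptr;
NvDsInferStatus status =
    NvDsInferContext_Create(&ctx, &initParams, nullptr, nullptr);
```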

I still have one question. Does DeepStream preprocessing happen on the GPU or the CPU? If it happens on the CPU, I would preprocess manually on the GPU and then pass the result to DeepStream for inference.

You can set the GPU id; please refer to the doc: Gst-nvdspreprocess (Alpha) — DeepStream 6.1.1 Release documentation
You can also refer to the DeepStream sample deepstream-preprocess-test.
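As a rough illustration (key names taken from the Gst-nvdspreprocess documentation; the width/height/shape values below are placeholders for this model, not verified settings), the GPU used for preprocessing is selected via gpu-id in the [property] group of the nvdspreprocess config file:

```ini
[property]
enable=1
# GPU device on which scaling/normalization runs
gpu-id=0
# Placeholder processing resolution and tensor shape
processing-width=960
processing-height=544
network-input-shape=1;3;544;960
```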
