Nvinferserver DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU)
• DeepStream Version 6.2

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {}
}
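For context, the backend block above is empty; in a typical nvinferserver setup it points at a Triton model repository. A minimal sketch of what a filled-in config might look like (the model name and repository path are assumptions, not from this thread):

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "my_pytorch_model"   # assumed name, replace with your Triton model
      version: -1                      # -1 = latest version in the repo
      model_repo {
        root: "/opt/models"            # assumed path to the Triton model repository
        strict_model_config: true
      }
    }
  }
}
```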
I have 2 questions:

  1. Can I run a PyTorch model with nvinferserver on the CPU? I ask because I see a gpu_ids option in the config.
  2. Can I run a full DeepStream pipeline on the CPU?

Thank you

Please see gpu_ids in the nvinferserver documentation; it is described as "Device IDs of GPU to use for pre-processing/inference (single GPU support only)".

What is your whole media pipeline or use case?

  1. Can I leave gpu_ids empty to run the model on the CPU, or is a GPU required?
  2. My pipeline is srcbin → decode → streammux → nvpreprocess → nvinferserver → … Can it all run on the CPU?
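The pipeline described above could be sketched as a gst-launch command like the following (the source URI, batch dimensions, and config file paths are placeholders, not taken from this thread):

```
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/video.mp4 ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvdspreprocess config-file=/path/to/preprocess_config.txt ! \
  nvinferserver config-file-path=/path/to/infer_config.txt ! \
  fakesink
```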

thank you

  1. If empty, it defaults to GPU 0. nvinferserver is open source, so you can also check the code.
  2. streammux accepts batched buffers and outputs batched buffers. Because these are all GPU buffers, you can't run that pipeline on the CPU.

I run the Triton model using KIND_CPU, and I expected nvinferserver to run on the CPU as well, but DeepStream doesn't support that, right?
Because nvinferserver still needs the GPU to normalize and scale the input.
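For reference, KIND_CPU above refers to Triton's instance_group setting in the model's config.pbtxt; it moves only the model execution to the CPU, while nvinferserver's pre-processing still runs on the GPU. A minimal sketch of that setting:

```
instance_group [
  {
    count: 1        # number of model instances
    kind: KIND_CPU  # execute this model on the CPU instead of a GPU
  }
]
```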

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Yes, nvinferserver will use the GPU to do the pre-processing, and nvstreammux needs the GPU to output GPU buffers.
