Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)=x86
• DeepStream Version=6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and any other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: gstnvinfer.cpp(2070): gst_nvinfer_process_tensor_input (): /GstPipeline:preprocess-test-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
This error means the tensor data cannot be processed by TensorRT. There are many possibilities.
Are you using your own model?
Are you using the samples or your own program?
Can you share your configuration files and model so I can reproduce this issue?
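Also, running the pipeline with higher gst-nvinfer log verbosity may show the underlying TensorRT error. A minimal sketch, assuming you launch the sample binary directly (binary and config names follow the deepstream-3d-action-recognition sample layout):

# Raise gst-nvinfer's log level to DEBUG (5) so the TensorRT failure
# behind "Failed to queue input batch" is printed in full.
GST_DEBUG=nvinfer:5 ./deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt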
Yes, I used TAO’s fall_floor/ride_bike model and ran the generated model in the deepstream-3d-action-recognition example. I changed the label file and used the ONNX model.
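For context, the sample wires the model and labels through the [property] section of its nvinfer config. A sketch of the keys involved, assuming the sample's config_infer_primary_3d_action.txt (filenames are placeholders for your own files):

[property]
# TAO-exported ONNX; TensorRT builds the engine on first run and
# caches it at model-engine-file.
onnx-file=rgb_resnet18_3.onnx
model-engine-file=rgb_resnet18_3.onnx_b4_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=4
network-mode=2
# The sample feeds preprocessed tensors in from nvdspreprocess:
input-tensor-from-meta=1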
Did you change any of the model's parameters?
You can try DS 6.3.
If that does not work, can you share your ONNX model? I will try to reproduce this issue.
Yes, I changed the model parameters, but I have not installed DS 6.3.
rgb_resnet18_3.onnx_b4_gpu0_fp16.engine (64.7 MB)
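Since parameters were changed, one thing worth cross-checking is the tensor shape handed to the model: in this sample, nvdspreprocess builds the input tensor, and "Failed to queue input batch" often means that shape no longer matches the ONNX input binding. A sketch of the value to verify, assuming the sample's config_preprocess_3d_custom.txt (the numbers are the sample defaults, not necessarily yours):

[property]
# NCDHW for the 3D model: batch;channels;frames;height;width.
# Must match the ONNX input exactly; the leading 4 must also match
# batch-size in the nvinfer config.
network-input-shape=4;3;32;224;224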
The ONNX file is necessary, because the *.engine file is generated by TensorRT for the specific device it was built on. So I think this engine can't run on my computer.
Thanks
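For reference, once the ONNX is available, a device-specific engine can be rebuilt locally with trtexec. A sketch, assuming a default TensorRT install path (precision and file names are illustrative):

# Rebuild an FP16 engine from the portable ONNX on the target machine.
/usr/src/tensorrt/bin/trtexec --onnx=rgb_resnet18_3.onnx --saveEngine=rgb_resnet18_3.onnx_b4_gpu0_fp16.engine --fp16

Alternatively, pointing onnx-file at the model in the nvinfer config lets gst-nvinfer build and cache the engine automatically on first run.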
This ONNX file is larger than 100 MB; how can I send it to you?
What is the size after you compress it with the command line below?
tar zcvf model.tar.gz rgb_resnet18_3.onnx
Or you can share it via Google Drive.
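If the compressed archive is still over the upload limit, splitting it into parts is another option. A sketch using standard tools (the 90 MB chunk size is arbitrary):

# Split the archive into chunks that fit under a 100 MB limit.
split -b 90M model.tar.gz model.tar.gz.part_
# The receiver reassembles it with:
cat model.tar.gz.part_* > model.tar.gz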