Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson
• DeepStream Version: 6.3
• JetPack Version: 5.1
• TensorRT Version: 5.1
• Issue Type: questions
Hi,
I’m building a pipeline for an application that uses two neural networks (YOLO and one custom network). Following the deepstream_test_3.py example, I managed to import my YOLO model as well as my custom network as sgie1 (instead of ResNet18); however, I am stuck at getting the output and doing the post-processing.
I’m completely new to GStreamer and DeepStream, and I would like to get some guidance on this issue.
I have a couple of questions:
1. How can I get the output of my second neural network, just to print it out to the console? My neural network has multiple outputs.
2. Can I write all the post-processing in Python, or does it need to be done in C?
3. Currently nvinfer creates the engine from the ONNX model file, but I have already converted the model to a TensorRT engine manually. Is it possible to configure the script to use the engine directly?
4. Do you have an example of sending data to Amazon S3?
Yes, you can set model-engine-file to the path of the engine.
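For reference, a minimal sketch of how this looks in the sgie config file, using the engine path from later in this thread. network-type=100 is the "other" network type (NvDsInferNetworkType_Other), commonly paired with output-tensor-meta=1 when you want the raw tensors and will parse them yourself; adjust to your setup:

[property]
model-engine-file=/wd_ssd/mcs/weights/pose_net_tensorrt/pose_net_224_224.engine
# Skip nvinfer's built-in detector/classifier parsing and just attach
# the raw output tensors as metadata for downstream consumption.
network-type=100
output-tensor-meta=1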
No, currently DeepStream does not support sending data to Amazon S3; please refer to the doc. But you can modify the code to customize this; please refer to the doc.
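Since your pipeline is in Python, one way to customize it is to upload results yourself with the standard boto3 AWS SDK, e.g. from a pad probe or wherever your post-processing runs. A minimal sketch; the bucket name, key, and upload_result helper are placeholders, not DeepStream API:

import json
import boto3

# Credentials resolve the usual boto3 way (env vars, ~/.aws/credentials, IAM role).
s3 = boto3.client("s3")

def upload_result(payload, key):
    # Hypothetical helper: serialize one inference result and push it to S3.
    s3.put_object(Bucket="my-deepstream-results",  # placeholder bucket name
                  Key=key,
                  Body=json.dumps(payload).encode("utf-8"))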
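Regarding your first two questions: yes, the post-processing can stay in Python. With output-tensor-meta=1 set on the sgie (your log shows it is already enabled), nvinfer attaches the raw output tensors as user meta on each object, and you can read them in a buffer probe. A minimal sketch following the tensor-meta pattern from the deepstream_python_apps samples (e.g. deepstream-ssd-parser); the probe name is mine, and I’m assuming float outputs, which matches the kFLOAT layers in your log:

import ctypes
import numpy as np
import pyds
from gi.repository import Gst

def sgie_src_pad_buffer_probe(pad, info, u_data):
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # An sgie attaches its raw tensors to the object it ran on.
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tmeta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                    # One entry per output layer (8 in your engine).
                    for i in range(tmeta.num_output_layers):
                        layer = pyds.get_nvds_LayerInfo(tmeta, i)
                        ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                          ctypes.POINTER(ctypes.c_float))
                        arr = np.ctypeslib.as_array(ptr,
                                                    shape=(layer.inferDims.numElements,))
                        print(layer.layerName, arr)  # flat array; reshape as needed
                l_user = l_user.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

Attach it to the sgie's source pad after building the pipeline, e.g. sgie1.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0).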
When I set only the model-engine-file path, I get the following error:
0:00:00.241005377 661085 0x2c30d60 WARN nvinfer gstnvinfer.cpp:887:gst_nvinfer_start: warning: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized.
0:00:03.239955839 661085 0x2c30d60 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 2]: deserialized trt engine from :/wd_ssd/mcs/weights/pose_net_tensorrt/pose_net_224_224.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 8
0 INPUT kFLOAT input_1 224x224x3
1 OUTPUT kFLOAT z_alpha 6x2
2 OUTPUT kFLOAT y_confidence 6
3 OUTPUT kFLOAT y_alpha 6x2
4 OUTPUT kFLOAT x_confidence 6
5 OUTPUT kFLOAT x_alpha 6x2
6 OUTPUT kFLOAT z_confidence 6
7 OUTPUT kFLOAT dimension 3
0:00:03.420036838 661085 0x2c30d60 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1920> [UID = 2]: Backend has maxBatchSize 1 whereas 16 has been requested
0:00:03.420096198 661085 0x2c30d60 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2097> [UID = 2]: deserialized backend context :/wd_ssd/mcs/weights/pose_net_tensorrt/pose_net_224_224.engine failed to match config params, trying rebuild
0:00:03.436412615 661085 0x2c30d60 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 2]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:06.067079527 661085 0x2c30d60 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 2]: build engine file failed
0:00:06.242434564 661085 0x2c30d60 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2108> [UID = 2]: build backend context failed
0:00:06.242491205 661085 0x2c30d60 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1282> [UID = 2]: generate backend failed, check config file settings
0:00:06.242529093 661085 0x2c30d60 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:06.242544645 661085 0x2c30d60 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start: error: Config file path: pose_net_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Warning: gst-library-error-quark: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized. (5): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(887): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine:
Config file path: pose_net_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
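The warnings above point at the cause: the deserialized engine reports maxBatchSize 1, while the config requests batch size 16, so nvinfer discards the engine and tries to rebuild it, and the rebuild fails because no onnx-file/model-file is configured anymore. A plausible fix, assuming the engine should be used as-is, is to align the config with the engine:

[property]
model-engine-file=/wd_ssd/mcs/weights/pose_net_tensorrt/pose_net_224_224.engine
# Must match the batch size the engine was serialized with (1 here);
# alternatively, rebuild the engine with max batch size 16.
batch-size=1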