Add a new model to the pipeline

• Hardware Platform (Jetson / GPU) NVIDIA GeForce RTX 3090
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.4.0
• NVIDIA GPU Driver Version (valid for GPU only) 535.113.01
• Issue Type (questions, new requirements, bugs) questions

Hello,

I am trying to use DeepStream 6.3 to build a pipeline consisting of a detector followed by an age/gender classifier (a custom model).
For the detector, I am using PeopleNet, as provided in deepstream-test3.

I am using Triton Inference Server for inference. Could you please provide detailed steps for setting this up?

I wrote a config file for inference and another for the Triton model. I converted the model to TensorRT, so I now have an “.engine” file for it. I also modified the deepstream_test_3.py script to append the classifier (sketched below) and save the output as an MP4 file. PS: I intend to use multiple streams.
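The gist of my change to deepstream_test_3.py is roughly the following sketch (SGIE_CONFIG_FILE is my own placeholder name, and I have omitted the queue elements that test3 places between elements):

    # Sketch: insert an nvinferserver SGIE between the PGIE and the tiler.
    # Gst and sys are already imported at the top of deepstream_test_3.py.
    sgie = Gst.ElementFactory.make("nvinferserver", "secondary-inference")
    if not sgie:
        sys.stderr.write("Unable to create secondary nvinferserver\n")
    # SGIE_CONFIG_FILE points at my age/gender nvinferserver config (placeholder).
    sgie.set_property("config-file-path", SGIE_CONFIG_FILE)

    pipeline.add(sgie)
    # test3 links streammux -> pgie -> tiler; the classifier goes in between.
    pgie.link(sgie)
    sgie.link(tiler)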

But when I run this command:

python3 deepstream_test_3_age_gender.py -i file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264 --pgie nvinferserver -c /opt/nvidia/deepstream/deepstream-6.3/samples/triton_model_repo/age_gender/config_triton_infer_primary_agegender.txt

{'input': ['file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264'], 'configfile': '/opt/nvidia/deepstream/deepstream-6.3/samples/triton_model_repo/age_gender/config_triton_infer_primary_agegender.txt', 'pgie': 'nvinferserver', 'no_display': False, 'file_loop': False, 'disable_probe': False, 'silent': False}
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating Sgie
Creating tiler
Creating nvvidconv
Creating nvosd
Creating Code Parser
Creating Container
Creating Sink
WARNING: Overriding infer-config batch-size 0 with number of sources 1
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing...
0 : file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264
Starting pipeline

WARNING: infer_proto_utils.cpp:144 auto-update preprocess.network_format to IMAGE_FORMAT_RGB
E1008 12:26:24.631410 139 logging.cc:43] 1: [stdArchiveReader.cpp::StdArchiveReader::32] Error Code 1: Serialization (Serialization assertion magicTagRead == kMAGIC_TAG failed.Magic tag does not match)
E1008 12:26:24.640140 139 logging.cc:43] 4: [runtime.cpp::deserializeCudaEngine::66] Error Code 4: Internal Error (Engine deserialization failed.)
E1008 12:26:24.644622 139 model_lifecycle.cc:597] failed to load 'age_gender' version 1: Internal: unable to create TensorRT engine
ERROR: infer_trtis_server.cpp:1057 Triton: failed to load model age_gender, triton_err_str:Invalid argument, err_msg:load failed for model 'age_gender': version 1 is at UNAVAILABLE state: Internal: unable to create TensorRT engine;

ERROR: infer_trtis_backend.cpp:54 failed to load model: age_gender, nvinfer error:NVDSINFER_TRITON_ERROR
ERROR: infer_trtis_backend.cpp:193 failed to initialize backend while ensuring model:age_gender ready, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:01.461652876   139      0x1e384c0 ERROR          nvinferserver gstnvinferserver.cpp:408:gst_nvinfer_server_logger:<secondary-inference> nvinferserver[UID 1]: Error in createNNBackend() <infer_trtis_context.cpp:289> [UID = 1]: failed to initialize triton backend for model:age_gender, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:01.463003791   139      0x1e384c0 ERROR          nvinferserver gstnvinferserver.cpp:408:gst_nvinfer_server_logger:<secondary-inference> nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:79> [UID = 1]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:01.463065192   139      0x1e384c0 WARN           nvinferserver gstnvinferserver_impl.cpp:592:start:<secondary-inference> error: Failed to initialize InferTrtIsContext
0:00:01.463098823   139      0x1e384c0 WARN           nvinferserver gstnvinferserver_impl.cpp:592:start:<secondary-inference> error: Config file path: /opt/nvidia/deepstream/deepstream-6.3/samples/triton_model_repo/age_gender/config_triton_infer_primary_agegender.txt
0:00:01.463207794   139      0x1e384c0 WARN           nvinferserver gstnvinferserver.cpp:518:gst_nvinfer_server_start:<secondary-inference> error: gstnvinferserver_impl start failed
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): gstnvinferserver_impl.cpp(592): start (): /GstPipeline:pipeline0/GstNvInferServer:secondary-inference:
Config file path: /opt/nvidia/deepstream/deepstream-6.3/samples/triton_model_repo/age_gender/config_triton_infer_primary_agegender.txt
Exiting app

I don’t know how to write the pre/post-processing sections inside config_triton_infer_primary_agegender.txt for my model.

I am not sure if I am moving in the right direction. Some support would be helpful.

Thanks.

Please follow the steps in the deepstream-test3 README; the main steps are downloading the models and generating TensorRT engines for Triton, as shown below.
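For the stock sample models this is done with the script bundled in the samples directory, e.g. (assuming the default install path):

    cd /opt/nvidia/deepstream/deepstream-6.3/samples
    ./prepare_ds_triton_model_repo.sh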

From the error (the serialization magic tag does not match), the engine was built with a different TensorRT version than the one Triton loads, so you need to regenerate the TensorRT engine for your age_gender model in the same environment that runs Triton. Please refer to the script.
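For example, assuming the model is available as an ONNX export, something along these lines inside the DeepStream 6.3 container (file names and the precision flag are placeholders):

    /usr/src/tensorrt/bin/trtexec --onnx=age_gender.onnx \
        --saveEngine=/opt/nvidia/deepstream/deepstream-6.3/samples/triton_model_repo/age_gender/1/model.plan \
        --fp16

Note that Triton’s TensorRT backend expects the engine under a numeric version directory (here 1/) and, by default, named model.plan.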

Please refer to the pre/post-processing parameters explanation and to the classification model config sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_sgie1_nvinferserver_config.txt.
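For illustration, a minimal classification config modeled on that sample might look like the sketch below; the model name, normalization values, label file, and threshold are placeholders you need to adapt to your model:

    infer_config {
      unique_id: 2
      gpu_ids: [0]
      max_batch_size: 16
      backend {
        triton {
          model_name: "age_gender"    # must match the model directory in the Triton repo
          version: -1
          model_repo {
            root: "../../triton_model_repo"
            strict_model_config: true
          }
        }
      }
      preprocess {
        network_format: IMAGE_FORMAT_RGB
        tensor_order: TENSOR_ORDER_LINEAR
        maintain_aspect_ratio: 0
        normalize {
          scale_factor: 1.0           # adapt to your model's training preprocessing
          channel_offsets: [0, 0, 0]
        }
      }
      postprocess {
        labelfile_path: "labels.txt"  # one label per line
        classification {
          threshold: 0.5
        }
      }
    }
    input_control {
      process_mode: PROCESS_MODE_CLIP_OBJECTS   # run on detected objects, not full frames
      operate_on_gie_id: 1                      # unique_id of the PeopleNet PGIE
      interval: 0
    }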


Thanks

Is this still a DeepStream issue that needs support? Thanks!

No, I think we are good.
