Hi, I am copy/pasting this question from the TAO subforum since they sent me here:
• Hardware: A100
• Network Type: Detectnet_v2
• DeepStream Version: Latest (Docker)
• TensorRT Version: 8.4.1
• NVIDIA GPU Driver Version: 515.65.01, CUDA Version: 11.7
• Training spec file, Deepstream conf and TAO results:
• Deepstream: Nvidia Deepstream Docker
everything.zip (48.9 MB)
• How to reproduce the issue?: I am working with the deepstream-imagedata-multistream Python app and cannot use a new model trained on a custom dataset in TAO 4.0. The model fails after this command:
python3.8 deepstream_imagedata-multistream.py file:///share_data_deepstream/tao/WH_TAOtest.h264 frame
This is the result:
Frames will be saved in frame
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Creating EGLSink
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing...
1 : file:///share_data_deepstream/tao/WH_TAOtest.h264
Starting pipeline
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 213, Serialized Engine Version: 232)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine
0:00:01.974624358 1010 0x208bd30 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine failed
0:00:02.069613931 1010 0x208bd30 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine failed, try rebuild
0:00:02.070005395 1010 0x208bd30 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:130 Cannot access prototxt file '/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/../../../../samples/models/tao_model/resnet18_detector.prototxt'
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:03.017040852 1010 0x208bd30 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:03.111856367 1010 0x208bd30 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:03.111910479 1010 0x208bd30 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:03.111932240 1010 0x208bd30 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:03.111940255 1010 0x208bd30 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
The baseline test works fine after switching to “fakesink”, but every time I try to swap in the new model the error pops up.
Any help is welcome.
yuweiw
December 30, 2022, 2:16am
From the situation you described, it’s not a model-loading error. It seems you haven’t configured the display environment inside Docker. You can refer to the link below:
Unable to start the composer in deepstream development docker?
The problem is that I don’t have a display interface; it’s an SSH session to a server, so I don’t have any visual interface.
yuweiw
January 3, 2023, 2:31am
If you don’t have a display interface, we suggest you save the result to a file or use an RTSP sink to play the video on another device. The nveglglessink plugin used in the source code needs a display environment, so running it directly will report an error.
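For reference, a minimal sketch of the save-to-file route, assuming the demo’s existing variables (pipeline, nvosd) and an initialized Gst; x264enc is a software encoder, which also sidesteps the A100’s lack of hardware encoding:

# Replace the nveglglessink branch after nvosd with a file-writing chain.
nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
capsfilter2 = Gst.ElementFactory.make("capsfilter", "capsfilter2")
capsfilter2.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))
encoder = Gst.ElementFactory.make("x264enc", "encoder")        # software H.264 encoder
parser2 = Gst.ElementFactory.make("h264parse", "h264-parser2")
muxer = Gst.ElementFactory.make("qtmux", "muxer")
sink = Gst.ElementFactory.make("filesink", "filesink")
sink.set_property("location", "out.mp4")
for elem in (nvvidconv2, capsfilter2, encoder, parser2, muxer, sink):
    pipeline.add(elem)
# Link the chain where the original code did nvosd.link(sink):
nvosd.link(nvvidconv2)
nvvidconv2.link(capsfilter2)
capsfilter2.link(encoder)
encoder.link(parser2)
parser2.link(muxer)
muxer.link(sink)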
I am kind of confused now: why is this a display error if the .py works perfectly fine after changing to fakesink like this?
sink = Gst.ElementFactory.make("fakesink", "fakesink")
The only real error happens when I try to use a new .etlt model trained in the TAO 4.0 Docker. Do I still need to configure the display inside the DeepStream Docker, even though the original deepstream_imagedata-multistream.py works perfectly fine?
yuweiw
January 4, 2023, 1:14pm
How did you generate the resnet18_detector.trt.int8.engine? From the log you attached, it’s an engine-build error. Theoretically, even if you switch to fakesink, the error will still occur.
Could you run it with GST_DEBUG=3 and attach the generated log?
I generated it using the TAO 4.0 toolkit Jupyter notebooks, and I just changed the file extension since the example conf file I am using expects .engine:
# Need to pass the actual image directory instead of data root for tao-deploy to locate images for calibration
!sed -i "s|/workspace/tao-experiments/data/training|/workspace/tao-experiments/data/training/image_2|g" $LOCAL_SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt
# Convert to TensorRT engine (INT8)
!tao-deploy detectnet_v2 gen_trt_engine \
    -m $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt \
    -k $KEY \
    --data_type int8 \
    --batches 20 \
    --batch_size 16 \
    --max_batch_size 16 \
    --engine_file $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.trt.int8.engine \
    --cal_cache_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.bin \
    -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
    --verbose
# Convert back the spec file
!sed -i "s|/workspace/tao-experiments/data/training/image_2|/workspace/tao-experiments/data/training|g" $LOCAL_SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt
yuweiw
January 7, 2023, 3:04am
Did you put the model and config files in the paths described by the configuration file? From the log, it cannot find the file at that path. Or you can try to use the sudo command.
Everything is there, and the sudo command returns the same error; I moved the models to the file’s original path and it didn’t work. I wonder if this problem is because the Ampere A100 doesn’t have encode support, according to this post
yuweiw
January 11, 2023, 1:12am
Yes, the Ampere A100 doesn’t have hardware encode support. But the error in your log is from nvinfer. You can try changing the encoder to software first.
How can I do that? Also, I realised something: I do not have the prototxt file since TAO didn’t create it. How can I generate it with TAO? Or do I need another tool to create it?
yuweiw
January 12, 2023, 2:41am
1. deepstream_imagedata-multistream.py doesn’t use an encoder, so you don’t have to worry about the encoder problem.
2. You need to reconfirm whether it runs well with fakesink. If it runs well with fakesink, there should be no problem with the model.
3. Could you run it with GST_DEBUG=3 and attach the generated log?
GST_DEBUG=3 python3.8 deepstream_imagedata-multistream.py file:///share_data_deepstream/tao/WH_TAOtest.h264 frame
This is the log generated by GST_DEBUG=3:
Frames will be saved in frame
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Creating EGLSink
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing...
1 : file:///share_data_deepstream/tao/WH_TAOtest.h264
Starting pipeline
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 213, Serialized Engine Version: 232)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine
0:00:02.239533096 1536 0x3b1ea70 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine failed
0:00:02.332333380 1536 0x3b1ea70 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine failed, try rebuild
0:00:02.332787012 1536 0x3b1ea70 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:130 Cannot access prototxt file '/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/../../../../samples/models/tao_model/resnet18_detector.prototxt'
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:03.279153086 1536 0x3b1ea70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:03.372307926 1536 0x3b1ea70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:03.372367257 1536 0x3b1ea70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:03.372409026 1536 0x3b1ea70 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:03.372426088 1536 0x3b1ea70 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
0:00:03.372477875 1536 0x3b1ea70 WARN GST_PADS gstpad.c:1142:gst_pad_set_active:<primary-inference:sink> Failed to activate pad
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
yuweiw
January 13, 2023, 1:48am
1. Please make sure you run the command with sudo:
sudo GST_DEBUG=3 python3 deepstream_imagedata-multistream.py file:///share_data_deepstream/tao/WH_TAOtest.h264 frame
2. You can run our demo first, referring to its README, to make sure your environment is OK. It’s a resnet model too:
/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/
Sorry about that, here are the results using the same .h264 file:
sudo GST_DEBUG=3 python3.8 deepstream_imagedata-multistream.py file:///share_data_deepstream/tao/WH_TAOtest.h264 frame
Frames will be saved in frame
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Creating EGLSink
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing...
1 : file:///share_data_deepstream/tao/WH_TAOtest.h264
Starting pipeline
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 213, Serialized Engine Version: 232)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine
0:00:02.302206549 1554 0x2c0fe70 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine failed
0:00:02.400428793 1554 0x2c0fe70 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine failed, try rebuild
0:00:02.400778037 1554 0x2c0fe70 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:130 Cannot access prototxt file '/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/../../../../samples/models/tao_model/resnet18_detector.prototxt'
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:03.316092175 1554 0x2c0fe70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:03.411608674 1554 0x2c0fe70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:03.411648709 1554 0x2c0fe70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:03.411667574 1554 0x2c0fe70 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:03.411679767 1554 0x2c0fe70 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
0:00:03.411714632 1554 0x2c0fe70 WARN GST_PADS gstpad.c:1142:gst_pad_set_active:<primary-inference:sink> Failed to activate pad
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
This one is deepstream_test_1.py, without changing anything:
sudo GST_DEBUG=3 python3 deepstream_test_1.py /share_data_deepstream/tao/WH_TAOtest.h264
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating EGLSink
Playing file /share_data_deepstream/tao/WH_TAOtest.h264
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
0:00:00.931492286 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931517272 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:00.931530197 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931539865 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:00.931554693 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931562728 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe minimum capture size for pixelformat AV10
0:00:00.931574871 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931582595 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe maximum capture size for pixelformat AV10
0:00:00.931594748 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931620556 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe minimum capture size for pixelformat DVX5
0:00:00.931624584 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931632388 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe maximum capture size for pixelformat DVX5
0:00:00.931642698 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931650342 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe minimum capture size for pixelformat DVX4
0:00:00.931660862 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931668215 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe maximum capture size for pixelformat DVX4
0:00:00.931679587 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931686670 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:00.931689726 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931698272 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:00.931708391 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931721666 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:00.931728989 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931732145 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:00.931744699 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931752253 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe minimum capture size for pixelformat H265
0:00:00.931759547 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931767712 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe maximum capture size for pixelformat H265
0:00:00.931777891 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931781859 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe minimum capture size for pixelformat VP90
0:00:00.931785074 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931795394 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe maximum capture size for pixelformat VP90
0:00:00.931805062 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931808288 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe minimum capture size for pixelformat VP80
0:00:00.931812075 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931816012 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe maximum capture size for pixelformat VP80
0:00:00.931826723 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931834146 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe minimum capture size for pixelformat H264
0:00:00.931837723 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:sink> Unable to try format: Unknown error -1
0:00:00.931845358 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:sink> Could not probe maximum capture size for pixelformat H264
0:00:00.932126054 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:src> Unable to try format: Unknown error -1
0:00:00.932133919 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:src> Could not probe minimum capture size for pixelformat NM12
0:00:00.932137305 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder:src> Unable to try format: Unknown error -1
0:00:00.932141322 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder:src> Could not probe maximum capture size for pixelformat NM12
0:00:00.932146923 1658 0x3ec92c0 WARN v4l2 gstv4l2object.c:2395:gst_v4l2_object_add_interlace_mode:0x2d63670 Failed to determine interlace mode
0:00:00.933097875 1658 0x3ec92c0 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:02.242788203 1658 0x3ec92c0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:02.333858712 1658 0x3ec92c0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:02.335822753 1658 0x3ec92c0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
0:00:02.336751354 1658 0x3ec92c0 WARN basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<file-source> pad not activated yet
0:00:02.444684176 1658 0x35e0640 ERROR egladaption ext/eglgles/gstegladaptation.c:669:gst_egl_adaptation_choose_config:<nvvideo-renderer> Could not find matching framebuffer config
0:00:02.444708081 1658 0x35e0640 ERROR egladaption ext/eglgles/gstegladaptation.c:683:gst_egl_adaptation_choose_config:<nvvideo-renderer> Couldn't choose an usable config
0:00:02.444712459 1658 0x35e0640 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2802:gst_eglglessink_configure_caps:<nvvideo-renderer> Couldn't choose EGL config
0:00:02.444715865 1658 0x35e0640 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2862:gst_eglglessink_configure_caps:<nvvideo-renderer> Configuring caps failed
0:00:02.444734977 1658 0x35e08c0 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:02.444758150 1658 0x35e08c0 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:02.444766365 1658 0x35e08c0 WARN GST_PADS gstpad.c:4231:gst_pad_peer_query:<onscreendisplay:src> could not send sticky events
0:00:02.445113606 1658 0x35e08c0 WARN v4l2videodec gstv4l2videodec.c:1847:gst_v4l2_video_dec_decide_allocation:<nvv4l2-decoder> Duration invalid, not setting latency
0:00:02.445136559 1658 0x35e08c0 WARN v4l2bufferpool gstv4l2bufferpool.c:1082:gst_v4l2_buffer_pool_start:<nvv4l2-decoder:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:02.445523970 1658 0x2d1f4300 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:02.445651334 1658 0x2d1f4360 WARN v4l2bufferpool gstv4l2bufferpool.c:1533:gst_v4l2_buffer_pool_dqbuf:<nvv4l2-decoder:pool:src> Driver should never set v4l2_buffer.field to ANY
0:00:02.445765168 1658 0x2d1f4360 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:02.445851139 1658 0x2d1f4300 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:02.445869553 1658 0x2d1f4300 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:02.445878670 1658 0x2d1f4300 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:02.446072944 1658 0x2d1f4300 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:02.446082272 1658 0x2d1f4300 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:02.446091218 1658 0x2d1f4300 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
Frame Number=0 Number of Objects=1 Vehicle_count=0 Person_count=1
0:00:02.454968334 1658 0x35e06a0 ERROR nveglglessink ext/eglgles/gsteglglessink.c:2907:gst_eglglessink_setcaps:<nvvideo-renderer> Failed to configure caps
0:00:02.455004271 1658 0x35e06a0 WARN nvinfer gstnvinfer.cpp:2300:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:02.455014600 1658 0x35e06a0 WARN nvinfer gstnvinfer.cpp:2300:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2300): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=1 Vehicle_count=0 Person_count=1
0:00:02.456027233 1658 0x35e08c0 WARN baseparse gstbaseparse.c:3666:gst_base_parse_loop:<h264-parser> error: Internal data stream error.
0:00:02.456042452 1658 0x35e08c0 WARN baseparse gstbaseparse.c:3666:gst_base_parse_loop:<h264-parser> error: streaming stopped, reason not-negotiated (-4)
And this is the .log after running deepstream_test_1.py with fakesink:
out.log (7.2 MB)
yuweiw
January 16, 2023, 2:17am
From the log of deepstream_test_1.py, the TensorRT environment is OK. The reason for that error is a problem with your display environment.
From your project log, there may be a problem with your own model. You can try the following methods to check:
1. Make sure the correct files exist at the paths given in your configuration file. Please note that these are relative paths (see the path-resolution sketch after this list):
model-file=../../../../samples/models/tao_model/resnet18_detector.etlt
proto-file=../../../../samples/models/tao_model/resnet18_detector.prototxt
model-engine-file=../../../../samples/models/tao_model/resnet18_detector.trt.int8.engine
labelfile-path=../../../../samples/models/tao_model/labels.txt
int8-calib-file=../../../../samples/models/tao_model/calibration.bin
2. You can try commenting out the model-engine-file line below and rerun the CLI:
model-file=../../../../samples/models/tao_model/resnet18_detector.etlt
proto-file=../../../../samples/models/tao_model/resnet18_detector.prototxt
#model-engine-file=../../../../samples/models/tao_model/resnet18_detector.trt.int8.engine
labelfile-path=../../../../samples/models/tao_model/labels.txt
int8-calib-file=../../../../samples/models/tao_model/calibration.bin
3. You can try using the model we provide to run your demo with the fakesink:
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
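As a side note on item 1: nvinfer resolves relative paths against the directory that contains the config file, not the current working directory. A small Python sketch to reproduce that resolution and verify each file exists (the file list mirrors the config above and is illustrative):

import os

# nvinfer resolves relative model paths against the config file's
# directory; reproduce that resolution to confirm each file exists.
config_dir = os.path.dirname(os.path.abspath("dstest_imagedata_config.txt"))
relative_paths = [
    "../../../../samples/models/tao_model/resnet18_detector.etlt",
    "../../../../samples/models/tao_model/resnet18_detector.prototxt",
    "../../../../samples/models/tao_model/resnet18_detector.trt.int8.engine",
    "../../../../samples/models/tao_model/labels.txt",
    "../../../../samples/models/tao_model/calibration.bin",
]
for rel in relative_paths:
    resolved = os.path.normpath(os.path.join(config_dir, rel))
    print(resolved, "OK" if os.path.isfile(resolved) else "MISSING")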
This is the result:
1. Make sure the correct files exist at the paths given in your configuration file. Please note that these are relative paths:
root@vm-ai-shared-instance-001:/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model# ls -lah
total 55M
drwxr-xr-x 2 root root 4.0K Jan 11 19:37 .
drwxr-xr-x 1 root root 4.0K Jan 11 19:33 ..
-rw-r--r-- 1 root root 4.1K Jan 11 19:34 calibration.bin
-rw-r--r-- 1 root root 44 Jan 11 19:34 labels.txt
-rw-r--r-- 1 root root 312 Jan 11 19:34 nvinfer_config.txt
-rw-r--r-- 1 root root 43M Jan 11 19:35 resnet18_detector.etlt
-rw-r--r-- 1 root root 12M Jan 11 19:36 resnet18_detector.trt.int8.engine
root@vm-ai-shared-instance-001:/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model# pwd
/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model
root@vm-ai-shared-instance-001:/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model#
I even tried replacing the relative paths with full paths and got the same error:
sudo GST_DEBUG=3 python3.8 deepstream_imagedata-multistream.py file:///share_data_deepstream/tao/WH_TAOtest.h264 frame
Frames will be saved in frame
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Creating EGLSink
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing...
1 : file:///share_data_deepstream/tao/WH_TAOtest.h264
Starting pipeline
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 213, Serialized Engine Version: 232)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine
0:00:02.361779767 1765 0x3e7be70 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine failed
0:00:02.460520666 1765 0x3e7be70 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.trt.int8.engine failed, try rebuild
0:00:02.460773330 1765 0x3e7be70 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:130 Cannot access prototxt file '/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.prototxt'
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:03.441291454 1765 0x3e7be70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:03.542571515 1765 0x3e7be70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:03.542629764 1765 0x3e7be70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:03.542659460 1765 0x3e7be70 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:03.542670831 1765 0x3e7be70 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
0:00:03.542708111 1765 0x3e7be70 WARN GST_PADS gstpad.c:1142:gst_pad_set_active:<primary-inference:sink> Failed to activate pad
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
2. You can try commenting out the model-engine-file line below and rerun the CLI:
sudo GST_DEBUG=3 python3.8 deepstream_imagedata-multistream.py file:///share_data_deepstream/tao/WH_TAOtest.h264 frame
Frames will be saved in frame
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Creating EGLSink
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing...
1 : file:///share_data_deepstream/tao/WH_TAOtest.h264
Starting pipeline
0:00:01.661185229 1775 0x3d46c70 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:130 Cannot access prototxt file '/opt/nvidia/deepstream/deepstream-6.1/samples/models/tao_model/resnet18_detector.prototxt'
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:03.044372415 1775 0x3d46c70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:03.139302305 1775 0x3d46c70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:03.139866463 1775 0x3d46c70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:03.139930453 1775 0x3d46c70 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:03.139941584 1775 0x3d46c70 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
0:00:03.139976279 1775 0x3d46c70 WARN GST_PADS gstpad.c:1142:gst_pad_set_active:<primary-inference:sink> Failed to activate pad
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
3. You can try using the model we provide to run your demo with the fakesink:
This is the output log:
out.log (7.6 MB)
This is the configuration I used for the first two tests:
deepstream_imagedata-multistream.py (16.8 KB)
dstest_imagedata_config.txt (3.6 KB)
For the last test, I used the original configuration files with the only change being the replacement of the sink with a fakesink.
yuweiw
January 17, 2023, 1:41am
Are you sure the resnet18_detector.prototxt file is in the tao_model path?
Hi, as I said before, I don’t have it since TAO didn’t generate it. Is there a way to generate it outside TAO? Or am I missing something?
Morganh
January 18, 2023, 2:45am
@ganmobar
For running inference with TAO models in DeepStream, the official GitHub repo is GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream. Please use it instead.
Also, TAO does not generate any prototxt. For the DeepStream config, since you appear to be running a detectnet_v2 network, you can leverage FaceNet’s config: deepstream_tao_apps/config_infer_primary_facenet.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub. FaceNet is based on the detectnet_v2 network.
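For orientation, a minimal sketch of the [property] keys an .etlt-based detectnet_v2 config typically needs, adapted from that facenet example; the key value, relative paths, input dimensions, and class count below are assumptions that must match your own training:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
# tlt-encoded-model/tlt-model-key replace the Caffe-style model-file/proto-file keys
tlt-encoded-model=../../../../samples/models/tao_model/resnet18_detector.etlt
tlt-model-key=<your TAO encoding key>
int8-calib-file=../../../../samples/models/tao_model/calibration.bin
labelfile-path=../../../../samples/models/tao_model/labels.txt
# assumptions: dims and class count must match the trained model
uff-input-dims=3;544;960;0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
network-mode=1
num-detected-classes=3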