Gaze Estimation App on DeepStream 6.0: Engine File Error

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** GPU
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
**• NVIDIA GPU Driver Version (valid for GPU only)** 12.4
• Issue Type (questions, new requirements, bugs)

I'm working in a WSL Docker container and testing the gaze app, following the instructions here:

Here are the commands I'm running to start the pipeline:

cd apps/tao_others/deepstream-gaze-app/gazeinfer_impl
make
cd ../
make
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs
./deepstream-gaze-app 1 ../../../configs/nvinfer/facial_tao/sample_faciallandmarks_config.txt file:///usr/data/faciallandmarks_test.jpg ./gazenet
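
As a side note, before launching it can help to sanity-check that the relative config path and the cvcore libraries actually resolve from the app directory; a minimal sketch, assuming the working directory from the commands above:

ls ../../../configs/nvinfer/facial_tao/sample_faciallandmarks_config.txt   # config passed to the app
ls /opt/nvidia/deepstream/deepstream/lib/cvcore_libs                       # libs added to LD_LIBRARY_PATH above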

While running the above command, I get this error:
root@EISPROD1:~/deepstream/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app# ./deepstream-gaze-app 1 ../../../configs/nvinfer/facial_tao/sample_faciallandmarks_config.txt file:///usr/data/faciallandmarks_test.jpg ./gazenet
Request sink_0 pad from streammux
Now playing: file:///usr/data/faciallandmarks_test.jpg
Inside Custom Lib : Setting Prop Key=config-file Value=../../../configs/nvinfer/gaze_tao/sample_gazenet_model_config.txt
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1494 Deserialize engine failed because file path: /root/deepstream/deepstream_tao_apps/configs/nvinfer/facial_tao/../../../models/faciallandmark/faciallandmark.etlt_b32_gpu0_int8.engine open error
0:00:07.327640041 9634 0x556ad4f75d30 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2083> [UID = 2]: deserialize engine from file :/root/deepstream/deepstream_tao_apps/configs/nvinfer/facial_tao/../../../models/faciallandmark/faciallandmark.etlt_b32_gpu0_int8.engine failed
0:00:07.527284997 9634 0x556ad4f75d30 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2188> [UID = 2]: deserialize backend context from engine from file :/root/deepstream/deepstream_tao_apps/configs/nvinfer/facial_tao/../../../models/faciallandmark/faciallandmark.etlt_b32_gpu0_int8.engine failed, try rebuild
0:00:07.527313607 9634 0x556ad4f75d30 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 2]: Trying to create engine from model files
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:664 INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in 'NvDsInferCreateNetwork' implementation
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /root/deepstream/deepstream_tao_apps/configs/nvinfer/facial_tao/../../../models/faciallandmark/faciallandmark.etlt
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:733 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 Failed to get cuda engine from custom library API
0:00:14.370897309 9634 0x556ad4f75d30 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2129> [UID = 2]: build engine file failed
free(): double free detected in tcache 2
Aborted (core dumped)
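
For what it's worth, the open error above points at a path that nvinfer builds relative to the config file's directory, so a quick way to check whether the model file really exists there is something like this (paths copied from the log; adjust to your tree):

# list the directory the failing path resolves to
ls -l /root/deepstream/deepstream_tao_apps/models/faciallandmark/
# or resolve the path exactly as it appears in the log; -e makes it fail if the file is missing
realpath -e /root/deepstream/deepstream_tao_apps/configs/nvinfer/facial_tao/../../../models/faciallandmark/faciallandmark.etlt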

Kindly help me figure out where the issue is coming from. I am not making any custom changes; I just wanted to check the default pipeline first, but it is not running with the default settings either. I would really appreciate a quick response.

Regards,

Update: I have downloaded the .etlt model files for FaceNet and facial landmarks estimation from the NGC catalogue, together with their respective INT8 calibration files, placed them in the models directory, and updated the config files as well, but I am still seeing the same error:

NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /opt/nvidia/deepstream/deepstream_tao_apps/configs/nvinfer/facial_tao/../../../models/faciallandmark/Facial_Landmarks_model.etlt

Here you can see the models are present in the directories:

root@EISPROD1:/opt/nvidia/deepstream/deepstream_tao_apps/models/facenet# ls
Facenet_model.etlt  config.pbtxt  int8_calibration.txt

root@EISPROD1:/opt/nvidia/deepstream/deepstream_tao_apps/models/faciallandmark# ls
Facial_Landmarks_model.etlt  config.pbtxt  int8_calibration.txt

I have also rechecked the location and naming of the models in the config files, but this error still appears. I am waiting for your feedback.
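
For reference, the entries I am checking in the facial landmarks nvinfer config are the standard gst-nvinfer model-path keys; roughly like the sketch below (the exact paths and the tlt-model-key value are placeholders based on my layout, not the shipped config):

[property]
# these values must resolve to the files listed above; absolute paths are the safest
tlt-encoded-model=/opt/nvidia/deepstream/deepstream_tao_apps/models/faciallandmark/Facial_Landmarks_model.etlt
tlt-model-key=nvidia_tlt
int8-calib-file=/opt/nvidia/deepstream/deepstream_tao_apps/models/faciallandmark/int8_calibration.txt
model-engine-file=/opt/nvidia/deepstream/deepstream_tao_apps/models/faciallandmark/Facial_Landmarks_model.etlt_b32_gpu0_int8.engine
network-mode=1   # 0=FP32, 1=INT8, 2=FP16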

If you are using DeepStream 6.0, please install the dependencies by following the dGPU Setup for Ubuntu guide. DeepStream only works when compatible driver and TensorRT versions are installed.
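
A quick way to confirm those versions inside the container is something like the following (standard commands; output varies by setup, and the version file path assumes a default DeepStream install):

nvidia-smi                                      # driver version and the CUDA version it supports
dpkg -l | grep -i tensorrt                      # installed TensorRT packages, if installed via apt
cat /opt/nvidia/deepstream/deepstream/version   # DeepStream version file shipped with the SDK, if present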

The models load once absolute paths are given, so everything now seems accurate, but I am getting this error:

""root@EISPROD1:/opt/nvidia/deepstream/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app# ./deepstream-gaze-app 1 …/…/…/configs/nvinfer/facial_tao/sample_faciallandmarks_config.txt /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/notebooks/image.jpg ./gazenet
Request sink_0 pad from streammux
Now playing: /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/notebooks/image.jpg
Inside Custom Lib : Setting Prop Key=config-file Value=…/…/…/configs/nvinfer/gaze_tao/sample_gazenet_model_config.txt
0:00:07.281867271 11350 0x561c35e2c070 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream_tao_apps/models/faciallandmark/Facial_Landmarks_model.etlt_b32_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:612 [FullDims Engine Info]: layers num: 4
0 INPUT kFLOAT input_face_images 1x80x80 min: 1x1x80x80 opt: 32x1x80x80 Max: 32x1x80x80
1 OUTPUT kFLOAT conv_keypoints_m80 80x80x80 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT softargmax 80x2 min: 0 opt: 0 Max: 0
3 OUTPUT kFLOAT softargmax:1 80 min: 0 opt: 0 Max: 0

0:00:07.480519512 11350 0x561c35e2c070 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream_tao_apps/models/faciallandmark/Facial_Landmarks_model.etlt_b32_gpu0_int8.engine
0:00:07.661147075 11350 0x561c35e2c070 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 2]: Load new model:…/…/…/configs/nvinfer/facial_tao/faciallandmark_sgie_config.txt sucessfully
0:00:07.661688644 11350 0x561c35e2c070 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1244> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:14.781384476 11350 0x561c35e2c070 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream_tao_apps/models/facenet/Facenet_model.etlt_b1_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x416x736
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x26x46
2 OUTPUT kFLOAT output_cov/Sigmoid 1x26x46

0:00:15.024263424 11350 0x561c35e2c070 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream_tao_apps/models/facenet/Facenet_model.etlt_b1_gpu0_int8.engine
0:00:15.027662463 11350 0x561c35e2c070 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:…/…/…/configs/nvinfer/facial_tao/config_infer_primary_facenet.txt sucessfully
nvbufsurface: memory type (-280382425) not supported
Segmentation fault (core dumped)“”

One more thing: I am not sure what kind of output I should expect, so can you also guide me on what output the app produces and how I can save that output to my directory?

Quick Update:

I have solved the problem by doing the following:

  1. First, I downloaded the three model files used by this gaze app.
  2. Then I took their .etlt model files and INT8 calibration files and put the absolute paths to them into the .txt and .yml configuration files.
  3. From those files the .engine files were eventually generated (the sketch after this list shows a quick way to verify them).
  4. Finally, I replaced the input file (image or video) with my own input, again using an absolute path, and it is working fine.
  5. Now I am working on saving the final output file to get an idea of how the model is performing.
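
A minimal sketch of the verification mentioned in step 3, assuming the same directory layout as in the logs above (the last line only applies if the app was started with the file-sink option and an output name such as ./gazenet as the final argument):

# engines generated next to the .etlt files (layout assumed from the paths above)
ls -lh /opt/nvidia/deepstream/deepstream_tao_apps/models/*/*.engine
# whatever the app wrote for this run, named after the output argument on the command line
ls -lh ./gazenet*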
