Yes, I can find the NvCaffeParser.h in TensorRT/include.
I followed the instructions from 2) and added the correct path in the Makefile, but I still got the same error.
Here’s my Makefile; maybe I missed something.
That’s the header file under your TRT OSS folder only. I’m not sure why the header file is missing on your system; normally it should be available at /usr/include/x86_64-linux-gnu/. How did you install TensorRT 7? Can you share your steps?
Please modify the Makefile in each child folder too. That will solve your issue.
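For illustration, a minimal sketch of the kind of edit meant here; the variable name TRT_INCDIR and the default path are only examples, and it should point at wherever NvCaffeParser.h actually lives on your system:

# Illustrative only: directory that contains the TensorRT headers
# (NvInfer.h, NvCaffeParser.h, ...). A Debian-package install normally
# puts them in /usr/include/x86_64-linux-gnu; a TRT OSS checkout keeps
# them under <your TRT OSS folder>/include.
TRT_INCDIR ?= /usr/include/x86_64-linux-gnu

# Make the headers visible to every compile rule in this folder.
CFLAGS += -I$(TRT_INCDIR)

The same -I addition has to land in the Makefile of every child folder that builds against TensorRT, which is why fixing only the top-level Makefile can still reproduce the error.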
I get some errors which seem to be caused by the TensorRT installation:
Now playing: pgie_frcnn_tlt_config.txt
0:00:02.346853731 2567 0x556a84e9e210 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Detected 1 inputs and 3 output network tensors.
0:00:13.238736309 2567 0x556a84e9e210 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /home/rockefella09/deepstream_tlt_apps/models/frcnn/faster_rcnn_resnet10.etlt_b1_gpu0_fp16.engine successfully
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input_image 3x272x480
1 OUTPUT kFLOAT proposal 300x4x1
2 OUTPUT kFLOAT dense_regress_td/BiasAdd 300x16x1x1
3 OUTPUT kFLOAT dense_class_td/Softmax 300x5x1x1
0:00:13.246974990 2567 0x556a84e9e210 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus: [UID 1]: Load new model:pgie_frcnn_tlt_config.txt sucessfully
Running…
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
0:00:15.283189527 2567 0x556a8493e940 WARN nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:15.283214496 2567 0x556a8493e940 WARN nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: gstnvinfer.cpp(1946): gst_nvinfer_output_loop (): /GstPipeline:ds-custom-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
Deleting pipeline
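As an aside on the workspace warning above: gst-nvinfer reads its TensorRT build settings from the config file, so, assuming the workspace-size property is supported by the DeepStream release in use (verify against the gst-nvinfer plugin manual for your version), the suggested increase would be a one-line addition to pgie_frcnn_tlt_config.txt:

[property]
# Illustrative and version-dependent: TensorRT builder workspace in MB.
# Check the exact property name in the gst-nvinfer documentation for
# your DeepStream release before relying on it.
workspace-size=1024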
Now it works (I no longer get any errors and it seems to be running). I used the “-d” flag because I wanted to display the results, but without success (it hangs at “Running…”). I am using a cloud compute engine on GCP and I am connected to the display via remote desktop, hoping to see the predictions on the video, but as I mentioned, it hangs.
If I remove the “-d” flag, it seems to run to the end, but I can’t figure out where to find the resulting video.