Yes, I have run tests to confirm it is not an nvinfer issue, by running the app without nvpreprocess in the pipeline and by testing the application with a different nvinfer.
The issue only appears once nvpreprocess is added to the pipeline. I checked the GPU usage to confirm that the pipeline is not created when the GStreamer state is set to PLAYING.
Thanks for the update. The DeepStream sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-preprocess-test runs fine; please compare your preprocess_config.conf with the config_preprocess.txt in deepstream-preprocess-test.
I could not find any issue with the preprocess config file when I compared it with the one in the sample application.
I have confirmed that the above error message appears in the console output when the GStreamer state is set to PLAYING. I suppose something goes wrong in the pipeline before nvinfer is loaded into GPU memory. Can we debug why this error gets raised?
As the code shows, I replaced config_preprocess.txt with your preprocess_config.conf; apart from that there is no modification. After testing, there is a configuration parsing error, so please check the configuration again. Here is the error log: log-20230727.txt (2.7 KB)
Can you help reproduce that “std::invalid_argument” issue based on this DeepStream sample?
As you asked, I have replaced config_preprocess.txt with preprocess_config.conf and tested the sample. I am getting the same error.
There are no changes to the PGIE config.
I am attaching the modified preprocess config (do not mind the file name change): preprocess_test.conf (3.5 KB)
This is the console output when running the sample:
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating tiler
Creating nvvidconv
Creating nvosd
Creating H264 Encoder
Creating H264 rtppay
Adding elements to Pipeline
*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***
Starting pipeline
0:00:00.150994145 12876 0x25f1d30 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:00.684605940 12876 0x25f1d30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:00.684661058 12876 0x25f1d30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:00.685014752 12876 0x25f1d30 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
terminate called after throwing an instance of 'std::invalid_argument'
what(): stof
Aborted (core dumped)
I have found out that omitting [user-configs] pixel-normalization-factor=0.003921568 from the preprocess config raises the above exception.
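For context, std::stof throws std::invalid_argument (with what() reporting "stof") whenever it is given a string it cannot parse, so an empty value for pixel-normalization-factor would produce exactly this abort if the raw config string is converted without a check. A minimal sketch of that failure mode, not the actual nvpreprocess parsing code:

#include <cstdlib>
#include <iostream>
#include <map>
#include <stdexcept>
#include <string>

int main ()
{
  // Simulated [user-configs] section; leave the key out to reproduce the abort.
  std::map<std::string, std::string> user_configs;

  // operator[] on a missing key returns a default-constructed (empty) string.
  std::string raw = user_configs["pixel-normalization-factor"];

  try {
    float factor = std::stof (raw);  // throws std::invalid_argument on ""
    std::cout << "pixel-normalization-factor = " << factor << std::endl;
  } catch (const std::invalid_argument &e) {
    // If the exception is not caught, terminate() is called and the process
    // prints: terminate called after throwing an instance of
    // 'std::invalid_argument'  what(): stof
    std::cerr << "stof failed: " << e.what () << std::endl;
    return EXIT_FAILURE;
  }
  return EXIT_SUCCESS;
}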
My application is running now, but I am getting detections on the full frame rather than on the ROI, and I am getting the following in my console output:
Using offsets : 103.939003,116.778999,123.680000
Error: gst-resource-error-quark: Failed to set tensor buffer pool to active (1): gstnvdspreprocess.cpp(640): gst_nvdspreprocess_start (): /GstPipeline:pipeline0/GstNvDsPreProcess:preprocess-plugin
^CTraceback (most recent call last):
When I ran the application again (with no changes) I did not get this message in the console. This is the console output: log20230729.txt (2.2 KB)
This is the whole console output from when I did get the above message: log20230728.txt (2.0 KB)
Why are there two different logs?
From the logs, you are testing your own model. Please correct preprocess_test.conf: tensor-name=input_1 should be tensor-name=Input.
Please also check why there is an “Unknown or legacy key specified ‘input-tensor-meta’ for group [property]” error.
Apologies for the late reply. I have uploaded the log for my application, hence my model. The “Unknown or legacy key specified ‘input-tensor-meta’ for group [property]” message is caused by input-tensor-meta=1 being set in the PGIE config. The following is a debug-level log: log20230804.txt (5.3 KB)
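For reference, the deepstream-preprocess-test C sample enables tensor-meta input by setting the property on the nvinfer element in code rather than through a key in the PGIE config file, which would explain the warning when the key is placed in the config file. A minimal sketch, assuming a variable named pgie and reusing the config-file path from the log above:

#include <gst/gst.h>

/* Sketch: make the primary nvinfer element consume the input tensors attached
 * by nvpreprocess instead of preprocessing internally. The property is set on
 * the element, not inside the PGIE config file. */
static GstElement *
make_pgie (void)
{
  GstElement *pgie = gst_element_factory_make ("nvinfer", "primary-inference");
  g_object_set (G_OBJECT (pgie),
                "config-file-path", "dstest1_pgie_config.txt",
                "input-tensor-meta", TRUE,
                NULL);
  return pgie;
}

In a Python-bindings app the equivalent would be pgie.set_property("input-tensor-meta", True).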
From the new log, there is no error. Can the application run now?
If it still fails, could you share more logs? Please do “export GST_DEBUG=6” first to raise GStreamer's log level, then run again; you can redirect the logs to a file.
Here is the log: out.log (6.0 MB). The application is running now, but the model is running inference outside the ROI we have configured for the source in the preprocess config.
If only using one video, please set as follows:
…
[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
roi-params-src-0=0;10;10;10;10;0;10;10;0;0;10;10;
…
Is there a green ROI rectangle drawn on the video? Do you mean there are bboxes outside the ROIs?
nvpreprocess is open source. You can add a log in gst_nvdspreprocess_on_frame of /opt/nvidia/deepstream/deepstream-6.2/sources/gst-plugins/gst-nvdspreprocess/gstnvdspreprocess.cpp to check the rect's width and height used for inference.
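A sketch of the kind of print that could be added there; the variable names frame_meta and roi_meta are assumptions and may differ between DeepStream releases (NvDsRoiMeta keeps the rectangle in its roi field, an NvOSD_RectParams):

/* Add inside the per-ROI loop of gst_nvdspreprocess_on_frame () in
 * gstnvdspreprocess.cpp; adjust frame_meta / roi_meta to the actual loop
 * variables in your DeepStream version. */
g_print ("nvdspreprocess: frame %u roi left=%.1f top=%.1f width=%.1f height=%.1f\n",
         frame_meta->frame_num,
         roi_meta.roi.left, roi_meta.roi.top,
         roi_meta.roi.width, roi_meta.roi.height);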
There are bboxes outside the ROI; we are not drawing the ROI on the video frame. Also, I am running this experiment on DeepStream 6.0, so is the suggestion to add the log still relevant?