I'm trying to get python3 deepstream_test_1.py to run with the TLT resnet10_detector.trt model.

I have modified dstest1_pgie_config.txt like this:

net-scale-factor=0.0039215697906911373
int8-calib-file=../../../../samples/models/Primary_Detector_Nano_tlt/calibration.bin
labelfile-path=../../../../samples/models/Primary_Detector_Nano_tlt/labels.txt
trt-model-file=../../../../samples/models/Primary_Detector_Nano_tlt/resnet10_detector.etlt
tlt-model-key=ajdqdnVicTU4Mm0wcGg0OWoyMDI0NmJrMTQ6YWM4MjllNmYtZWE5Ny00NzI3LTlmNzItNGY1M2VlZWYxOTFk
input-dims=3;384;1248;0 
uff-input-blob=input_1
batch-size=4
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
is-classifier=0
[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1
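(For reference: as far as I can tell, the nvinfer plugin has no trt-model-file key; an encrypted .etlt model is normally passed via tlt-encoded-model, with the properties under a [property] group header. A minimal sketch, assuming DeepStream 5.x key names and the same sample paths as above:)

```ini
# Hypothetical sketch, not a verified config. Assumes DeepStream 5.x
# nvinfer keys; a pre-built .trt engine would use model-engine-file
# instead of tlt-encoded-model.
[property]
net-scale-factor=0.0039215697906911373
tlt-encoded-model=../../../../samples/models/Primary_Detector_Nano_tlt/resnet10_detector.etlt
tlt-model-key=<your-tlt-key>
labelfile-path=../../../../samples/models/Primary_Detector_Nano_tlt/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector_Nano_tlt/calibration.bin
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
batch-size=4
network-mode=0
num-detected-classes=3
gie-unique-id=1
```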

and like this:

net-scale-factor=0.0039215697906911373
model-engine-file=../../models/Primary_Detector_Nano_tlt/resnet10_detector.trt
labelfile-path=../../models/Primary_Detector_Nano_tlt/labels.txt
batch-size=4
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/libnvdsparsebbox.so
#enable-dbscan=1
gie-unique-id=1
is-classifier=0

but I get this error when it runs:

Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

any ideas?

Moving this topic into the TLT forum since it is related to a TLT model.

Hi adventuredaisy,
May I know whether you get the same "NVDSINFER_CONFIG_FAILED" error with both versions of dstest1_pgie_config.txt?

Could you please paste your command line along with the full log? Thanks.
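(One way to capture a full log is to enable GStreamer debug output and tee everything to a file. This is a sketch; the sample stream path is an assumption and may differ on your install:)

```shell
# Run the test app with GStreamer debug output enabled and save
# stdout + stderr to run.log for posting to the forum.
# The sample stream path below is an assumption; adjust as needed.
GST_DEBUG=3 python3 deepstream_test_1.py \
  /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 \
  2>&1 | tee run.log
```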

Hi adventuredaisy,

We haven't heard back from you in a couple of weeks, so we are marking this topic closed.
Please open a new forum issue when you are ready and we’ll pick it up there.