Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Xavier NX
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.1
• TensorRT Version: 8.2.1-1+cuda10.2
• Issue Type (questions, new requirements, bugs): Questions
I'm trying to run deepstream_launchpad.ipynb from the deepstream_python_apps notebooks directory, and I'm getting file-not-found errors:
Creating Pipeline
Creating streammux
Creating source_bin 0
Creating source bin
source-bin-00
Creating source_bin 1
Creating source bin
source-bin-01
Creating source_bin 2
Creating source bin
source-bin-02
Creating source_bin 3
Creating source bin
source-bin-03
Creating nvinfer (PGIE)
WARNING: Overriding infer-config batch-size 30 with number of sources 4
Creating nvtracker
Creating nvinfer (SGIE1)
Creating nvinfer (SGIE2)
Creating tiler
Creating nvvidconv
Creating nvosd
Creating nvvidconv
Creating nvv4l2h264enc
Creating qtmux
Creating h264parse
Creating FileSink
Starting pipeline
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /home/aaeon/deepstream_python_apps/notebooks/configs/…/…/…/…/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine open error
0:00:04.541544798 22501 0x217b1590 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 3]: deserialize engine from file :/home/aaeon/deepstream_python_apps/notebooks/configs/…/…/…/…/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine failed
0:00:04.560121373 22501 0x217b1590 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 3]: deserialize backend context from engine from file :/home/aaeon/deepstream_python_apps/notebooks/configs/…/…/…/…/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine failed, try rebuild
0:00:04.560225725 22501 0x217b1590 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
WARNING: INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in ‘NvDsInferCreateNetwork’ implementation
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /home/aaeon/deepstream_python_apps/notebooks/configs/…/…/…/…/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:05.060526413 22501 0x217b1590 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 3]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::61] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size
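If it helps to reproduce, the missing files can be listed by resolving the SGIE config's relative paths against the config directory, e.g. with a small sketch like the one below (the config filename is a placeholder for whichever SGIE config the notebook loads; the keys are the standard nvinfer file-path options):

```python
# Sketch: resolve the file paths an nvinfer config references and report
# which ones are missing. The config filename is a placeholder -- substitute
# the SGIE config that deepstream_launchpad.ipynb actually loads.
import configparser
from pathlib import Path

config_file = Path("/home/aaeon/deepstream_python_apps/notebooks/configs/"
                   "sgie_vehicletypes_config.txt")  # placeholder name

parser = configparser.ConfigParser()
parser.read(config_file)

# Standard nvinfer [property] keys that point at files on disk.
file_keys = ["tlt-encoded-model", "model-engine-file",
             "int8-calib-file", "labelfile-path"]

for key in file_keys:
    value = parser.get("property", key, fallback=None)
    if value is None:
        continue
    # nvinfer resolves relative paths against the config file's directory.
    resolved = (config_file.parent / value).resolve()
    status = "OK" if resolved.exists() else "MISSING"
    print(f"{status:8} {key} -> {resolved}")
```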
My question is: since my DeepStream installation does not have the .etlt files these configs reference (e.g. resnet18_trafficcamnet.etlt and resnet18_vehicletypenet.etlt) or their prebuilt .engine files (resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine, resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine), where do I get these files?
They are not present in the samples models directory either:
/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_VehicleTypes$ ls
cal_trt.bin labels.txt mean.ppm resnet18.caffemodel resnet18.prototxt
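From the errors it looks like the .etlt has to be downloaded separately and dropped into that directory; below is the kind of download step I assume is needed (the NGC URL, model version and file name are my guesses from the TAO model catalog, not something the notebook documents, so please correct me if the files should come from somewhere else):

```python
# Sketch: fetch the VehicleTypeNet .etlt from NGC into the directory the
# config resolves to. The URL, version tag and file names below are assumptions
# taken from the public TAO model catalog -- verify them on ngc.nvidia.com.
import urllib.request
from pathlib import Path

models_dir = Path("/opt/nvidia/deepstream/deepstream-6.0/samples/models/"
                  "Secondary_VehicleTypes")

# Hypothetical NGC download URL -- check the VehicleTypeNet model page for the
# exact version and file name before relying on it.
url = ("https://api.ngc.nvidia.com/v2/models/nvidia/tao/vehicletypenet/"
       "versions/pruned_v1.0/files/resnet18_vehicletypenet_pruned.etlt")

dest = models_dir / "resnet18_vehicletypenet.etlt"  # name the config expects
urllib.request.urlretrieve(url, str(dest))
print(f"Downloaded to {dest}")

# Note: the *_b16_gpu0_int8.engine file should not need to be downloaded;
# nvinfer rebuilds it from the .etlt on first run (the "try rebuild" message
# above), as long as the .etlt, tlt-model-key and calibration file exist.
```

Is there an official download script or NGC page for the models these notebooks expect?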