In GitHub - NVIDIA-AI-IOT/deepstream_tao_apps (sample apps demonstrating how to deploy models trained with TAO on DeepStream), inference runs against an H264 or JPEG input file. Please check whether it works for you.
There is no problem with inference on local files; the problem only occurs with network streaming media. After reading the relevant GStreamer and DeepStream documentation, I still cannot solve it.
If that is the case, then it is not an issue with the TLT model. I suggest you search the DeepStream forum, or create a new topic there, for more help.
@Morganh
The last time I compiled the YOLO engine with the tlt-converter command, it worked. Now the same command reports an error. Why?
./tlt-converter -k tlt_encode -d 3,544,960 -p Input,1x3x544x960,1x3x544x960,2x3x544x960 ./models/yolo3/yolov3_resnet18.etlt
[ERROR] Number of optimization profiles does not match model input node number.
Aborted
or
./tlt-converter -k nvidia_tlt -d 3,544,960 -p Input,1x3x544x960,1x3x544x960,2x3x544x960 ./models/yolo3/yolov3_resnet18.etlt
[ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin BatchedNMSDynamic_TRT version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[INFO] Detected input dimensions from the model: (-1, 3, 544, 960)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile opt shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile max shape: (2, 3, 544, 960) for input: Input
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
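The `getPluginCreator could not find plugin BatchedNMSDynamic_TRT` error usually means the installed `libnvinfer_plugin` does not contain that plugin, so the parser fails before any network outputs are registered. A commonly suggested fix is to rebuild `libnvinfer_plugin` from the TensorRT OSS repository and replace the stock library. Below is a rough sketch of that procedure; the branch, GPU architecture (`GPU_ARCHS`), library paths, and installed plugin version are assumptions that must be matched to your own system, so treat this as an outline rather than exact commands:

```shell
# Sketch only: rebuild libnvinfer_plugin from TensorRT OSS so that it
# includes BatchedNMSDynamic_TRT. Branch, GPU_ARCHS, and paths below are
# assumptions for a TensorRT 7.1 x86 setup; adjust them for your machine.
git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive

mkdir -p build && cd build
cmake .. \
  -DGPU_ARCHS=75 \                              # e.g. 75 for Turing; use your GPU's compute capability
  -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu \     # where your TensorRT libs live
  -DTRT_OUT_DIR=$(pwd)/out
make -j"$(nproc)" nvinfer_plugin

# Back up the stock plugin library, then replace it with the rebuilt one.
# The exact .so version suffix must match your installed TensorRT release.
sudo cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y.bak
sudo cp libnvinfer_plugin.so.7.* /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y
sudo ldconfig
```

After the replacement, rerun the same tlt-converter command; if the plugin registers correctly, the `Plugin not found` assertion should no longer occur.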
Or, after I reinstall the plugin, it still reports an error, as shown below:
[ERROR] /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (460) - Cuda Error in loadKernel: 702 (the launch timed out and was terminated)
[ERROR] …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 702 (the launch timed out and was terminated)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)
Could you create a new topic for this discussion?