I retrained my detection model using transfer learning. I don’t think this changes the input and output layers of the UFF file, but when I replaced the UFF file in objectDetector_SSD with mine, I got the following error:
Using winsys: x11
Creating LL OSD context new
0:00:01.116309090 7904 0xba76cd0 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:00:01.116429658 7904 0xba76cd0 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:02.340588366 7904 0xba76cd0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): UffParser: Validator error: Cast: Unsupported operation _Cast
0:00:02.364366344 7904 0xba76cd0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed to parse UFF file: incorrect file or incorrect input/output blob names
0:00:02.367971775 7904 0xba76cd0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:02.368988445 7904 0xba76cd0 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:02.369046489 7904 0xba76cd0 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/config_infer_primary_ssd.txt,
NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
** ERROR: <main:651>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie_classifier: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/config_infer_primary_ssd.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
App run failed
Is there something wrong with the way I configured it? Is there any solution?
Here is the content of my config_infer_primary_ssd configuration file:
Thank you, AastaLLL.
I used TensorFlow 1.14.0; I will switch to 1.13.1 for training and test the .pb file in TensorRT.
Which config.py file needs to be changed when generating the UFF file? Can you tell me?
Thank you so much, best wishes to you.
Hi,
I redid the transfer learning on ssd_inception_v2_coco_2017_11_17 with TensorFlow 1.13.1, but the previous error still appears. What I want to know is how to verify that the .pb file works normally in TensorRT. Can you tell me how to do that?
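For checking a frozen .pb outside DeepStream, one common route is to convert it to UFF with the SSD preprocessing script and then try to build an engine with trtexec. The paths below assume a standard JetPack/TensorRT 5.x install and the stock sampleUffSSD config.py; adjust file names and locations to your setup:

```shell
# Sketch only: paths and node names are assumptions based on the stock
# TensorRT sampleUffSSD setup; verify them on your own system.

# 1. Convert the frozen graph to UFF, applying the SSD graphsurgeon config:
python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
    frozen_inference_graph.pb \
    -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd.uff

# 2. Try to build and run a TensorRT engine from the UFF file:
/usr/src/tensorrt/bin/trtexec \
    --uff=sample_ssd.uff \
    --uffInput=Input,3,300,300 \
    --output=NMS
```

If trtexec builds and runs the engine without parser errors, the UFF itself is fine and the problem is more likely in the DeepStream configuration; if the parser fails here too, the issue is in the graph or the preprocessing config.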
Thank you very much.
Sorry to have bothered you for so long.
I can also convert this .pb file into a UFF file, but this UFF file cannot be used in the DeepStream SDK. Can you try it with SSD?
Thank you for your reply.
I used two classes during training: bus and jeep.
Also, I want to ask a question: is there another way to add my own model to deepstream-SSD?
Or to directly obtain a UFF file that deepstream-SSD can use?
Since TensorFlow operations change between releases, there are some incompatibilities with TensorRT.
To fix the “Unsupported operation _Cast” issue, please update config.py with the following change:
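The actual change was not included above, so the fragment below is a hedged reconstruction: in the graphsurgeon preprocessing script (config.py from TensorRT's sampleUffSSD), newer TensorFlow versions emit a Cast node where older ones emitted ToFloat, so mapping "Cast" into the Input plugin node is the likely fix. The node names are assumptions; verify them against your exported graph.

```python
# Fragment of config.py (graphsurgeon preprocessing for sampleUffSSD).
# Assumes PriorBox, NMS, and Input plugin nodes are defined earlier in
# the file, as in the stock sample.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "Cast": Input,        # added: TF >= 1.14 emits Cast instead of ToFloat
    "image_tensor": Input,
}
```

With this mapping, the UFF converter collapses the Cast node into the Input plugin instead of trying (and failing) to parse it as a standalone operation.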