Hello AI World with my own SavedModel (.pb)

Hello NVIDIA Developers,

I have developed my own neural network for object detection and I have a model in the form of a saved_model.pb (plus a saved_model.pbtxt with the different classes).
I have already optimized this model with TensorFlow-TensorRT (TF-TRT) so that it can run on my Jetson Nano.

My final goal is real-time image processing, so I still need the steps before and after inference:

  • Receiving the video stream
  • Splitting the stream into individual frames with GStreamer
  • Running inference with the model (what I already have)
  • Turning each image into an image with bounding boxes
  • Recombining the images into a video stream again.

The Hello AI World project already implements this whole pipeline with a provided model (SSD-MobileNet-V2).
So I tried to understand how the project works, but I have some trouble working through the .cpp files (for example, finding the GStreamer pipeline that is used).

I would like to reuse this pipeline with my own .pb model. Do you know which files I need to modify?
It would be a great help. If you need more information, please do not hesitate!

Thanks in advance

Paul Griffoul

Hi,

First, please convert your model into ONNX format.
Once the file is generated, you can update the model information in the detectnet example directly.
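A minimal sketch of what that can look like, roughly mirroring the stock detectnet sample (the command-line flags and the input_0/scores/boxes blob names are assumptions based on the documented ONNX detector examples, not values checked against your model, so substitute whatever your export actually produces):

// Sketch: running the detectnet pipeline with a custom ONNX model, e.g.
//   ./detectnet --model=my_model.onnx --labels=my_labels.txt \
//               --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
//               v4l2:///dev/video0 display://0
#include <jetson-inference/detectNet.h>
#include <jetson-utils/videoSource.h>
#include <jetson-utils/videoOutput.h>

int main( int argc, char** argv )
{
	// detectNet::Create() parses --model/--labels/--input-blob/--output-cvg/--output-bbox,
	// builds a TensorRT engine from the ONNX file on first run, and caches it next to the model
	detectNet* net = detectNet::Create(argc, argv);
	videoSource* input = videoSource::Create(argc, argv, 0);   // first positional arg, e.g. v4l2:///dev/video0
	videoOutput* output = videoOutput::Create(argc, argv, 1);  // second positional arg, e.g. display://0

	if( !net || !input || !output )
		return 1;

	while( true )
	{
		uchar3* image = NULL;

		if( !input->Capture(&image, 1000) )   // grab the next frame (1000 ms timeout)
			break;

		// run inference; the default overlay flag draws the boxes into the image
		detectNet::Detection* detections = NULL;
		net->Detect(image, input->GetWidth(), input->GetHeight(), &detections);

		output->Render(image, input->GetWidth(), input->GetHeight());

		if( !input->IsStreaming() || !output->IsStreaming() )
			break;
	}

	delete input;
	delete output;
	delete net;
	return 0;
}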

Thanks.


Hello @AastaLLL,

Thank you for your answer. I will try to do it!
I had another question, again regarding the Hello AI World project.
The project uses GStreamer and runs a certain pipeline to break the video stream down into individual frames. Unfortunately, I can't find it in the various files. I would like to modify this pipeline because, in my case, I work with 320x240 images in GRAY8 format, and I need to use glimagesink as the output instead of appsink.

In short, I would like to modify the GStreamer pipeline string and integrate my own parameters.

Thanks in advance for your answer,
Have a nice day

Paul Griffoul

Hi @paul.griffoul, here is where the GStreamer pipeline is created for reading a video file or receiving an IP stream (gstDecoder.cpp in jetson-utils).

Here is where the GStreamer pipeline is created for receiving a MIPI CSI camera or V4L2 camera (gstCamera.cpp).

Hi @dusty_nv,
After reading the gstCamera.cpp file, I noticed that the grayscale format is not supported, as you can see in the excerpt below.

			ss << "h264parse ! omxh264dec ! video/x-raw ! ";
		else if( mOptions.codec == videoOptions::CODEC_H265 )
			ss << "h265parse ! omxh265dec ! video/x-raw ! ";
		else if( mOptions.codec == videoOptions::CODEC_VP8 )
			ss << "omxvp8dec ! video/x-raw ! ";
		else if( mOptions.codec == videoOptions::CODEC_VP9 )
			ss << "omxvp9dec ! video/x-raw ! ";
		else if( mOptions.codec == videoOptions::CODEC_MPEG2 )
			ss << "mpegvideoparse ! omxmpeg2videodec ! video/x-raw ! ";
		else if( mOptions.codec == videoOptions::CODEC_MPEG4 )
			ss << "mpeg4videoparse ! omxmpeg4videodec ! video/x-raw ! ";
		else if( mOptions.codec == videoOptions::CODEC_MJPEG )
			ss << "jpegdec ! video/x-raw ! "; //ss << "nvjpegdec ! video/x-raw ! "; //ss << "jpegparse ! nvv4l2decoder mjpeg=1 ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw ! "; //

		ss << "appsink name=mysink"; 


My sensor outputs GRAY8. How can I add support for it?
The output should also be glimagesink rather than appsink.

Without modifications, I get the following error message:

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera – attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera – found v4l2 device: vi-output, ati320 6-0013
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)tegra-video, v4l2.device.card=(string)"vi-output,\ ati320\ 6-0013", v4l2.device.bus_info=(string)platform:54080000.vi:0, v4l2.device.version=(uint)264649, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera – found 1 caps for v4l2 device /dev/video0
[gstreamer] [0] video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=(fraction)60/1;
[gstreamer] gstCamera – couldn’t find a compatible codec/format for v4l2 device /dev/video0
[gstreamer] gstCamera – device discovery failed, but /dev/video0 exists
[gstreamer] support for compressed formats is disabled
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video] created gstCamera from v4l2:///dev/video0
------------------------------------------------
gstCamera video options:

– URI: v4l2:///dev/video0
- protocol: v4l2
- location: /dev/video0
– deviceType: v4l2
– ioType: input
– codec: unknown
– width: 320
– height: 240
– frameRate: 30.000000
– bitRate: 0
– numBuffers: 4
– zeroCopy: true
– flipMethod: none
– loop: 0
– rtspLatency 2000

[OpenGL] glDisplay – X screen 0 resolution: 1280x1024
[OpenGL] glDisplay – X window resolution: 320x240
[OpenGL] glDisplay – display device initialized (320x240)
[video] created glDisplay from display://0
------------------------------------------------
glDisplay video options:

– URI: display://0
- protocol: display
- location: 0
– deviceType: display
– ioType: output
– codec: raw
– width: 320
– height: 240
– frameRate: 0.000000
– bitRate: 0
– numBuffers: 4
– zeroCopy: true
– flipMethod: none
– loop: 0
– rtspLatency 2000

detectNet – loading detection network model from:
– model networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
– input_blob β€˜Input’
– output_blob β€˜NMS’
– output_count β€˜NMS_1’
– class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
– threshold 0.500000
– batch_size 1

[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins…
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - UFF (extension β€˜.uff’)
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7103.GPU.FP16.engine
[TRT] loading network plan from engine cache… networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7103.GPU.FP16.engine
[TRT] device GPU, loaded networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[TRT] Deserialize required 3447450 microseconds.
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] – layers 117
[TRT] – maxBatchSize 1
[TRT] – workspace 0
[TRT] – deviceMemory 35449856
[TRT] – bindings 3
[TRT] binding 0
– index 0
– name β€˜Input’
– type FP32
– in/out INPUT
– # dims 3
– dim #0 3 (SPATIAL)
– dim #1 300 (SPATIAL)
– dim #2 300 (SPATIAL)
[TRT] binding 1
– index 1
– name β€˜NMS’
– type FP32
– in/out OUTPUT
– # dims 3
– dim #0 1 (SPATIAL)
– dim #1 100 (SPATIAL)
– dim #2 7 (SPATIAL)
[TRT] binding 2
– index 2
– name β€˜NMS_1’
– type FP32
– in/out OUTPUT
– # dims 3
– dim #0 1 (SPATIAL)
– dim #1 1 (SPATIAL)
– dim #2 1 (SPATIAL)
[TRT]
[TRT] binding to input 0 Input binding index: 0
[TRT] binding to input 0 Input dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 NMS binding index: 1
[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
[TRT]
[TRT] device GPU, networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT] W = 7 H = 100 C = 1
[TRT] detectNet – maximum bounding boxes: 100
[TRT] detectNet – loaded 91 class info entries
[TRT] detectNet – number of object classes: 91
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstCamera – onPreroll
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=(fraction)60/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
[gstreamer] gstCamera – device /dev/video0 does not have a compatible decoded format
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=(fraction)60/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
[gstreamer] gstCamera – device /dev/video0 does not have a compatible decoded format
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=(fraction)60/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
[gstreamer] gstCamera – device /dev/video0 does not have a compatible decoded format
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=(fraction)60/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
[gstreamer] gstCamera – device /dev/video0 does not have a compatible decoded format
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=(fraction)60/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
[gstreamer] gstCamera – device /dev/video0 does not have a compatible decoded format
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=(fraction)60/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
[gstreamer] gstCamera – device /dev/video0 does not have a compatible decoded format
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=(fraction)60/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
[gstreamer] gstCamera – device /dev/video0 does not have a compatible decoded format
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=(fraction)60/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
[gstreamer] gstCamera – device /dev/video0 does not have a compatible decoded format
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=(fraction)60/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
[gstreamer] gstCamera – device /dev/video0 does not have a compatible decoded format


Thanks 

Paul Griffoul

OK, you are using a V4L2 camera, so you would need to add it to gstCamera.cpp instead:

https://github.com/dusty-nv/jetson-utils/blob/ebab1914877a51d4d33fa9b1f01b168adb712a32/camera/gstCamera.cpp#L144

I haven't used a grayscale camera before, so I'm not sure what changes are required. The object detection expects a color image.
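If it helps as a starting point, here is a rough, standalone sketch of the kind of pipeline change involved (the real edit would go where gstCamera.cpp assembles its launch string, alongside the codec branches quoted above; the hard-coded device, resolution, and the choice of I420 as the converted format are assumptions, not a tested patch):

// Illustrative only: accept the sensor's native GRAY8 caps, then let videoconvert
// expand the frames into a raw color format the library already handles, since
// the detection network expects a 3-channel image.
#include <sstream>
#include <string>

std::string buildGray8LaunchStr()
{
	std::ostringstream ss;

	ss << "v4l2src device=/dev/video0 ! ";
	ss << "video/x-raw, format=(string)GRAY8, width=(int)320, height=(int)240, framerate=60/1 ! ";
	ss << "videoconvert ! video/x-raw, format=(string)I420 ! ";   // convert grayscale to a supported raw format (assumed)
	ss << "appsink name=mysink";                                  // gstCamera pulls its frames from this appsink

	return ss.str();
}

If you also want a live preview through glimagesink, one option is to branch the converted stream with a GStreamer tee (for example "... ! tee name=t t. ! queue ! appsink name=mysink t. ! queue ! glimagesink"), but that display path would then bypass the library's own glDisplay output.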

Hi @dusty_nv

I managed to get GRAY8 working in the Hello AI World project! It was partially implemented already, but not all the way, so I had to modify a few files.

Now my goal is to replace the "SSD-MobileNet-v2" model used in the Hello AI World project with the model I trained myself.

@AastaLLL told me that the first step is to convert my .pb model into ONNX format and then directly modify the Python detectNet_camera.py code.

I am currently looking for a command to do this conversion.

I have my .pb and the labels.txt file, but I don't have a file containing the engines. When I optimized with TensorFlow-TensorRT, I used dynamic mode, which means my TensorRT engines are created at inference time rather than beforehand, so I will never have an .engine file.

Will this be a problem?

Thanks

Paul Griffoul

Update:

When I convert my .pb model to an .onnx file with the command:

python3 -m tf2onnx.convert --saved-model /path_to_pbmodel --output model.onnx

I get this error message:

2021-04-07 09:39:16.584977: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/lib/python3.6/runpy.py:125: RuntimeWarning: 'tf2onnx.convert' found in sys.modules after import of package 'tf2onnx', but prior to execution of 'tf2onnx.convert'; this may result in unpredictable behaviour
  warn(RuntimeWarning(msg))
WARNING:tensorflow:From /home/pi21/.local/lib/python3.6/site-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2021-04-07 09:39:21,257 - WARNING - From /home/pi21/.local/lib/python3.6/site-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2021-04-07 09:39:21.303585: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-04-07 09:39:21.317950: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-07 09:39:21.318090: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2021-04-07 09:39:21.318161: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
2021-04-07 09:39:21.321952: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10
2021-04-07 09:39:21.325099: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-04-07 09:39:21.326041: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-04-07 09:39:21.330785: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-04-07 09:39:21.334901: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.10
2021-04-07 09:39:21.335478: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-04-07 09:39:21.335764: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-07 09:39:21.336019: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-07 09:39:21.336143: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0
2021-04-07 09:39:21.358093: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2021-04-07 09:39:21.358646: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3c5fead0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-04-07 09:39:21.358717: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2021-04-07 09:39:21.433903: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-07 09:39:21.434178: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3baf20a0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-04-07 09:39:21.434229: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2021-04-07 09:39:21.434609: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-07 09:39:21.434713: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2021-04-07 09:39:21.434788: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
2021-04-07 09:39:21.434869: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10
2021-04-07 09:39:21.434926: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-04-07 09:39:21.434977: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-04-07 09:39:21.435027: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-04-07 09:39:21.435078: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.10
2021-04-07 09:39:21.435130: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-04-07 09:39:21.435288: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-07 09:39:21.435473: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-07 09:39:21.435535: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0
2021-04-07 09:39:21.435619: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
2021-04-07 09:39:23.952042: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-04-07 09:39:23.952126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212]      0
2021-04-07 09:39:23.952188: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0:   N
2021-04-07 09:39:23.952567: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-07 09:39:23.952837: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-07 09:39:23.952985: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 468 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2021-04-07 09:39:23,956 - WARNING - '--tag' not specified for saved_model. Using --tag serve
2021-04-07 09:39:23,956 - WARNING - '--signature_def' not provided. Using all signatures.
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1348, in _run_fn
    self._extend_graph()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1388, in _extend_graph
    tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation Preprocessor/TRTEngineOp_4: Could not satisfy explicit device specification '/device:CPU:0' because no supported kernel for CPU devices is available.
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:CPU:0' assigned_device_name_='' resource_device_name_='' supported_device_types_=[GPU] possible_devices_=[]
TRTEngineOp: GPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  Preprocessor/TRTEngineOp_4 (TRTEngineOp) /device:CPU:0

Op: TRTEngineOp
Node attrs: output_shapes=[[?,?,?,3]], workspace_size_bytes=2995095, max_cached_engines_count=1, segment_func=Preprocessor/TRTEngineOp_4_native_segment[], segment_funcdef_name="", use_calibration=false, fixed_input_size=true, input_shapes=[[?,?,?,3]], OutT=[DT_FLOAT], precision_mode="FP32", static_engine=false, serialized_segment="", cached_engine_batches=[], InT=[DT_FLOAT], calibration_data=""
Registered kernels:
  device='GPU'

         [[{{node Preprocessor/TRTEngineOp_4}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/pi21/.local/lib/python3.6/site-packages/tf2onnx/convert.py", line 488, in <module>
    main()
  File "/home/pi21/.local/lib/python3.6/site-packages/tf2onnx/convert.py", line 213, in main
    args.large_model, return_initialized_tables=True, return_tensors_to_rename=True)
  File "/home/pi21/.local/lib/python3.6/site-packages/tf2onnx/tf_loader.py", line 523, in from_saved_model
    _from_saved_model_v1(sess, model_path, input_names, output_names, tag, signatures)
  File "/home/pi21/.local/lib/python3.6/site-packages/tf2onnx/tf_loader.py", line 320, in _from_saved_model_v1
    tf.tables_initializer().run()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 2439, in run
    _run_using_default_session(self, feed_dict, self.graph, session)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 5442, in _run_using_default_session
    session.run(operation, feed_dict)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation Preprocessor/TRTEngineOp_4: Could not satisfy explicit device specification '/device:CPU:0' because no supported kernel for CPU devices is available.
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:CPU:0' assigned_device_name_='' resource_device_name_='' supported_device_types_=[GPU] possible_devices_=[]
TRTEngineOp: GPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  Preprocessor/TRTEngineOp_4 (TRTEngineOp) /device:CPU:0

Op: TRTEngineOp
Node attrs: output_shapes=[[?,?,?,3]], workspace_size_bytes=2995095, max_cached_engines_count=1, segment_func=Preprocessor/TRTEngineOp_4_native_segment[], segment_funcdef_name="", use_calibration=false, fixed_input_size=true, input_shapes=[[?,?,?,3]], OutT=[DT_FLOAT], precision_mode="FP32", static_engine=false, serialized_segment="", cached_engine_batches=[], InT=[DT_FLOAT], calibration_data=""
Registered kernels:
  device='GPU'

         [[node Preprocessor/TRTEngineOp_4 (defined at usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]

I optimized my model using TF-TRT in dynamic mode at the time of conversion.
It seems the reason is that the TensorRT engines have not been created yet; they are only built at inference time.
What do you think?

Thanks

Paul Griffoul

Hi,

There are some compatibility problems between tf2onnx and the opset version.
Could you try to convert the model into an opset=11 ONNX file?

For example:

python3 -m tf2onnx.convert --saved-model /path_to_pbmodel --output model.onnx --opset 11

If this does not work, could you try converting the model before applying the TF-TRT optimization?
Some TF-TRT operations may not be supported by the tf2onnx parser.
You can apply the TensorRT optimization directly after the conversion.
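For that last step, one option is to simply point the detectnet example at the resulting .onnx file and let jetson-inference build and cache the TensorRT engine on first load (that is where the .engine cache file visible in the log above comes from). The other option is to build the engine yourself; below is a rough sketch using the TensorRT C++ API, with names as in TensorRT 7.x, where the file paths, workspace size and FP16 flag are placeholders rather than values from this thread:

// Hedged sketch of serializing a TensorRT engine from an ONNX file (TensorRT 7.x API).
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>

class Logger : public nvinfer1::ILogger
{
	void log( Severity severity, const char* msg ) override
	{
		if( severity <= Severity::kWARNING )
			std::cout << msg << std::endl;
	}
} gLogger;

int main()
{
	// create the builder and an explicit-batch network definition
	auto builder = nvinfer1::createInferBuilder(gLogger);
	const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
	auto network = builder->createNetworkV2(explicitBatch);

	// parse the ONNX model into the network definition
	auto parser = nvonnxparser::createParser(*network, gLogger);
	if( !parser->parseFromFile("model.onnx", static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)) )
		return 1;

	// build an FP16 engine (the Nano's fastest native precision, per the log above)
	auto config = builder->createBuilderConfig();
	config->setMaxWorkspaceSize(32 << 20);           // 32 MB, adjust to what the Nano can spare
	config->setFlag(nvinfer1::BuilderFlag::kFP16);

	auto engine = builder->buildEngineWithConfig(*network, *config);
	if( !engine )
		return 1;

	// serialize the engine to disk so it can be reloaded without rebuilding
	auto serialized = engine->serialize();
	std::ofstream out("model.engine", std::ios::binary);
	out.write(static_cast<const char*>(serialized->data()), serialized->size());

	// (cleanup of the TensorRT objects with ->destroy() omitted for brevity)
	return 0;
}

In practice, letting the detectnet example perform this step is simpler, since it already writes the serialized engine next to the model.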

Thanks.
