Error when converting .onnx file to TRT

Hello,

I’ve tried to convert an ONNX file to a TRT engine.

  • PB model: SSD-MobileNet v2 trained with TF2
  • Jetson TX2
  • JetPack 4.6.2
  • TensorRT 8.2.1.8
  • CUDA 10.2.300
  • cuDNN 8.2.1.32
  • TensorFlow 2.6.2
  • Python 3.6.9

==========================================
.
.
.
[11/09/2022-15:31:42] [I] [TRT] Detected 1 inputs and 4 output network tensors.
[11/09/2022-15:31:42] [V] [TRT] Deleting timing cache: 274 entries, 432 hits
[11/09/2022-15:31:42] [E] Error[4]: [codeGenerator.cpp::addNodeToMyelinGraph::186] Error Code 4: Myelin (RESIZE operation not supported within this graph.)
[11/09/2022-15:31:42] [E] Error[2]: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )
[11/09/2022-15:31:42] [E] Engine could not be created from network
[11/09/2022-15:31:42] [E] Building engine failed
[11/09/2022-15:31:42] [E] Failed to create engine from model.
[11/09/2022-15:31:42] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8201] # ./trtexec --onnx=/home/kookmin-uav/tftrt/onnx-modify/model1.onnx --saveEngine=/home/kookmin-uav/tftrt/trtengine/engine.trt --verbose

============================================

I would like to know how to solve this error.
I also want to know whether I can use a TF2 pre-trained model (SSD-MobileNet v2) with TRT.
My purpose is to get higher FPS for real-time UAV flight.
I was using the attached file to convert to TRT:
onnx-mo.py (4.5 KB)

Thank you!

Hi,

RESIZE operation not supported within this graph

Based on the error, there are some unsupported layers in your model.
If you are using TF2, please check whether the tutorial below can help with your use case:
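In the meantime, you can check which Resize nodes are in the graph with a small script like this (just a sketch, assuming the onnx and onnx-graphsurgeon Python packages are installed; model1.onnx is the file from your trtexec command):

import onnx
import onnx_graphsurgeon as gs

# Load the exported model and list the Resize nodes that Myelin complains about.
graph = gs.import_onnx(onnx.load("model1.onnx"))

for node in graph.nodes:
    if node.op == "Resize":
        # Print the node name and its attributes (mode, coordinate_transformation_mode, ...)
        print(node.name, dict(node.attrs))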

Thanks.

Hello, again!

I solved the first error.
Then I tried again, but there is still a problem.

======================================================

[11/09/2022-16:44:57] [E] Error[2]: [pluginV2Runner.cpp::execute::265] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed. )
[11/09/2022-16:44:57] [E] Error occurred during inference
&&&& FAILED TensorRT.trtexec [TensorRT v8201] # ./trtexec --onnx=/home/kookmin-uav/tftrt/onnx-modify/model2.onnx --saveEngine=/home/kookmin-uav/tftrt/trtengine/engine.trt --verbose

=============================================

Do you know what I have to do?
Is it still the same problem?

Thank you a lot!

Hi,

Would you mind sharing more of the error log with us?
TensorRT is expected to print some information before returning the status failure (Assertion status == kSTATUS_SUCCESS failed.).

Thanks.

try2make_trt_engine_error.txt (904.5 KB)

I attached the whole log file!
I’m looking forward to hearing from you :)

Thank you!

onnx succ.txt (921.9 KB)

I think that attempt failed, so I tried to create the ONNX file again and got this log.
The log is different from before, but I’m still not sure whether it succeeded or not.
Please check whether I succeeded in converting the ONNX file.
After I got this log, I tried to build the TRT engine again, but I got the error message below:


I also wonder how to solve this error.

Thanks!

Hi,

Based on the txt output, it seems the engine is serialized but not able to run inference.
Do you get the serialized /home/kookmin-uav/tftrt/trtengine/engine.trt file?
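You can also quickly check whether the engine file can be deserialized and what bindings it has with a few lines of Python (a minimal sketch, assuming the TensorRT Python bindings that ship with JetPack):

import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
runtime = trt.Runtime(logger)

# Deserialize the engine written by trtexec --saveEngine=...
with open("/home/kookmin-uav/tftrt/trtengine/engine.trt", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

print("engine loaded:", engine is not None)
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))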

Thanks.

Hello,
Thank you for your answer!

I’ve checked it.
I think we got the serialized engine file:
[screenshot attached]

Could you please check again the files I attached?
I tried to run ‘detectnet-camera.py’ with our ONNX model.
It opened the camera but couldn’t detect the hmark.
My model is supposed to detect the hmark.

run_detecnet-camera.txt (405.1 KB)
detectnet-camera.py (3.8 KB)

Thanks a lot!!!

Hi,

Have you verified the ONNX model before?
Do you get the expected accuracy when running the model with other frameworks?
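For example, you can run the ONNX model directly with ONNX Runtime and check that the outputs look reasonable before involving TensorRT (a rough sketch; it assumes a single float32 input and just fills any dynamic dimensions with 1, so adjust the dtype/shape if your exported model expects something else):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model2.onnx")

# Build a random dummy input that matches the model's declared input shape.
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)

outputs = sess.run(None, {inp.name: dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)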

Thanks.

Hello.
I couldn’t find a way forward with TensorRT for a while, so I tried another approach.
The new approach I found was to train on the Jetson Nano.
After training with the helipad images that I need to detect, I succeeded in converting the ONNX model and ran object detection. (train_ssd.py)

Now I’m trying ‘autonomous landing’ with a drone, so I have to use this ONNX model with the drone’s flight script.
So, next, I have to use a combination of the detectnet-camera.py script and the flight script written with MAVSDK, but the following error occurred.

What’s the problem? I can’t find it even when I google it.
The camera should open and object detection should run, just like when running detectnet-camera.py, but it doesn’t work.

Or is there a problem with running the detectnet and MAVSDK scripts in parallel?
I think I need to grab the image that comes through the camera in the detectnet loop while the MAVSDK script keeps running, so that I can detect objects in flight. How can I get the image?
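Roughly, what I have in mind is something like the sketch below (just a sketch; the model path, label file, camera device, and MAVSDK connection address are placeholders from my setup, and the real flight logic is in offboard_web_onxx.py):

import asyncio
import jetson.inference
import jetson.utils
from mavsdk import System

# Custom SSD-MobileNet model exported by train_ssd.py / onnx_export.py (paths are placeholders).
net = jetson.inference.detectNet(argv=[
    "--model=ssd-mobilenet.onnx", "--labels=labels.txt",
    "--input-blob=input_0", "--output-cvg=scores", "--output-bbox=boxes"])
camera = jetson.utils.videoSource("/dev/video0")

async def detect_loop():
    while True:
        img = camera.Capture()            # grab the next camera frame
        detections = net.Detect(img)      # run the helipad detector on it
        print("detected", len(detections), "objects")
        await asyncio.sleep(0)            # yield so the MAVSDK task can run

async def fly():
    drone = System()
    await drone.connect(system_address="udp://:14540")  # placeholder address
    # ... offboard / landing logic would go here ...

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(detect_loop(), fly()))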

Also, we got the error below, too.

Please help me.
Thanks!


offboard_web_onxx.py (3.9 KB)

Hi,

Could you check if you can open the display with the following script first?
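A minimal display test looks roughly like this (just an illustrative sketch, assuming jetson.utils and a USB camera on /dev/video0; use "csi://0" for a CSI camera):

import jetson.utils

camera = jetson.utils.videoSource("/dev/video0")
display = jetson.utils.videoOutput("display://0")  # render to the attached display

while display.IsStreaming():
    img = camera.Capture()
    display.Render(img)
    display.SetStatus("Camera {:d}x{:d}".format(img.width, img.height))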

Thanks.

Hello,
Yes, I have already tried that and solved the problem.
Now I can display the USB camera with the detection model,
but I have not been able to use ‘jetson_utils’.
I’ve been using ‘jetson.utils’.
Is there a difference?

Thank you.

Hi @jslim0326, there isn’t a difference, both should still work - I recently renamed import jetson.utils to import jetson_utils to be more consistent with Python packages. However, import jetson.utils should still work fine.
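If you want a script to run with either version, a small compatibility shim like this works (just a sketch):

# Either import works; jetson_utils is only the newer package name.
try:
    import jetson_utils
except ImportError:
    import jetson.utils as jetson_utils

camera = jetson_utils.videoSource("/dev/video0")  # example usage, same API either way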

Thank you for your answer, Dusty!
I have more questions about the jetson-inference training part.
I trained the model using jetson-inference’s SSD-MobileNet v1.
But it detects everything with 100% confidence, and I guess it is overfitting.
That means I have to train again.

I’d love to know how to avoid the overfitting problem:

  • The loss was about 2.xx ~ 3.xx; I want to bring it down to 1.xx.
  • The original dataset has approximately 16,000 images, but I only used half because of a memory problem. I want to use all of it.
  • I also think there is a memory leak, because the training always goes down around epoch 400. I also want to know how many epochs are reasonable for this case.

Please help me.
Thank you!!!

@dusty_nv Hello,
I have a FileNotFoundError; can you tell me how to solve this problem?
The loss was approximately 2.3xx, and there is still an overfitting problem when I test the model trained so far.
I have to work this out by this week. Please help me out!

Sorry for the late response. Have you managed to get the issue resolved, or do you still need support? Thanks

Hello,
Yes, we still have a problem.
The problem is that the trained model detects all objects as helipads. I think it may be overfitting, so I’m trying to bring the loss down, but I’m not sure what to do.
I thought the validation loss should be low, so I intentionally made part of the validation set overlap with the training set, and it dropped a little.
But the model still detects everything as a helipad.

I want to know how to solve this problem.
I also wonder why we should label ‘background’ during training.
Actually, we need to finish this within this year.
So, please help me ASAP.

Thank you!!

Hi,

For the memory issue, you can train the model on a desktop GPU and copy the model to the Jetson afterward.
Then you should be able to use the whole dataset, which might also help accuracy.

In addition, it’s recommended to try our TAO Toolkit as well.

Thanks.

I had no memory problems, and I already did the training on my desktop GPU.
Well, let’s try the TAO Toolkit.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.