[11/09/2022-15:31:42] [I] [TRT] Detected 1 inputs and 4 output network tensors.
[11/09/2022-15:31:42] [V] [TRT] Deleting timing cache: 274 entries, 432 hits
[11/09/2022-15:31:42] [E] Error: [codeGenerator.cpp::addNodeToMyelinGraph::186] Error Code 4: Myelin (RESIZE operation not supported within this graph.)
[11/09/2022-15:31:42] [E] Error: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )
[11/09/2022-15:31:42] [E] Engine could not be created from network
[11/09/2022-15:31:42] [E] Building engine failed
[11/09/2022-15:31:42] [E] Failed to create engine from model.
[11/09/2022-15:31:42] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8201] # ./trtexec --onnx=/home/kookmin-uav/tftrt/onnx-modify/model1.onnx --saveEngine=/home/kookmin-uav/tftrt/trtengine/engine.trt --verbose
I would like to know how to solve this error.
I would also like to know whether I can use a TF2 pre-trained model (SSD-MobileNet V2) with TensorRT.
My goal is a higher FPS for real-time UAV flight.
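Since the goal is higher FPS, it helps to measure throughput before and after any TensorRT conversion so you can tell whether a change actually helped. A minimal timing sketch; `run_inference` and `dummy_inference` are placeholders standing in for the real model call, not part of TensorRT or jetson-inference:

```python
import time

def measure_fps(run_inference, num_frames=100):
    """Time num_frames calls to run_inference and return frames per second."""
    start = time.perf_counter()
    for _ in range(num_frames):
        run_inference()
    elapsed = time.perf_counter() - start
    return num_frames / elapsed

# Dummy workload standing in for the real model:
def dummy_inference():
    time.sleep(0.001)  # pretend each frame takes ~1 ms

fps = measure_fps(dummy_inference, num_frames=50)
print(f"approx. {fps:.1f} FPS")
```

Running the same measurement on the ONNX model and on the TRT engine gives a like-for-like comparison.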
I was using this script to convert to TRT: onnx-mo.py (4.5 KB)
I think that attempt failed, so I tried to generate the ONNX file again and got the log below.
The log is different this time, but I'm still not sure whether it succeeded.
Could you please check whether my ONNX conversion succeeded?
After I got this log, I tried to build the TRT engine again, but I got the error message below.
I couldn't find a way forward with TensorRT for a while, so I tried another approach.
The new approach I found was to train directly on the Jetson Nano.
After training on the helipad images I need to detect, I succeeded in converting the model to ONNX and ran object detection with it. (train_ssd.py)
Now I'm working toward autonomous landing with a drone, so I have to use this ONNX model together with the drone's flight script.
So, next, I have to combine the detectnet-camera.py script with the flight script written with MAVSDK, but the following error occurred.
What's the problem? I can't find an answer even when I google it.
The camera should open and run object detection, just like when running detectnet-camera.py on its own, but it doesn't work.
Or is there a problem with running the detectnet and mavsdk scripts in parallel?
I think I need to grab the image coming through the camera inside the detectnet loop so that the mavsdk script can detect objects in flight. How can I get that image?
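On the parallelism question: since MAVSDK-Python is asyncio-based, one common pattern is to run the detection loop and the flight logic as two tasks in one event loop, sharing the latest detection through a variable or queue. This is only a sketch with dummy stand-ins; `detection_loop` and `flight_loop` here are hypothetical placeholders, not jetson-inference or MAVSDK APIs:

```python
import asyncio

latest_detection = None  # shared state: most recent detection result

async def detection_loop(num_frames):
    """Stand-in for a detectnet capture/detect loop."""
    global latest_detection
    for frame_id in range(num_frames):
        await asyncio.sleep(0.01)  # pretend to capture a frame and run the detector
        latest_detection = {"frame": frame_id, "label": "helipad"}

async def flight_loop(num_steps):
    """Stand-in for MAVSDK flight logic reading the latest detection."""
    seen = []
    for _ in range(num_steps):
        await asyncio.sleep(0.02)  # pretend to send a flight command
        if latest_detection is not None:
            seen.append(latest_detection["frame"])
    return seen

async def main():
    # Run both loops concurrently in the same event loop.
    det = asyncio.create_task(detection_loop(20))
    seen = await flight_loop(5)
    await det
    return seen

frames_seen = asyncio.run(main())
print(frames_seen)
```

The key point is that neither loop blocks the other: the flight logic always reads whatever detection arrived most recently, instead of waiting for the camera.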
Yes, I have already tried that and solved this problem.
Now I can display the USB camera feed with the detection model,
but I couldn't use 'jetson_utils'.
I've been using 'jetson.utils'.
Is there a difference?
Hi @jslim0326, there isn't a difference; both should still work. I recently renamed import jetson.utils to import jetson_utils to be more consistent with Python package naming, but import jetson.utils should still work fine.
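If you want code that works under either module name, a small fallback helper covers both. This is a generic sketch (the `import_first` helper is mine, not part of jetson-inference), demonstrated with stdlib modules so it can run anywhere:

```python
import importlib

def import_first(*names):
    """Return the first module from names that imports successfully."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {names} could be imported")

# On a Jetson one would call, e.g.:
#   utils = import_first("jetson_utils", "jetson.utils")
# Portable demonstration with stdlib module names:
mod = import_first("definitely_not_a_real_module", "json")
print(mod.__name__)  # json
```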
Thank you for your answer, Dusty!
I have more questions about the jetson-inference training part.
I trained a model using jetson-inference's SSD-MobileNet V1.
But it detects everything with 100% confidence, and I suspect it is overfitting.
That means I have to train again.
I also have a FileNotFoundError; can you tell me how to solve it?
The loss was approximately 2.3xx, and the model trained up to this point still shows the overfitting problem.
I have to work this out by this week. Please help me out!
Yes, we still have a problem.
The problem is that the trained model detects every object as a helipad. I suspect it's overfitting and want to bring the loss down, but I'm not sure what to do.
I thought the validation loss should be low, so I intentionally made some of the validation data overlap the training set, and the loss fell a little.
But the model still detects everything as a helipad.
I want to know how to solve this problem.
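One note on the validation set: overlapping it with the training set makes the validation loss look lower without the model actually improving, so a disjoint split gives a more honest signal about overfitting. A minimal sketch of a disjoint random split (the file names are hypothetical stand-ins for the helipad dataset):

```python
import random

def train_val_split(items, val_fraction=0.2, seed=0):
    """Shuffle items and split into disjoint train/validation lists."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]

# Hypothetical image list standing in for the dataset:
images = [f"img_{i:03d}.jpg" for i in range(10)]
train, val = train_val_split(images, val_fraction=0.2)
print(len(train), len(val))  # 8 2
assert not set(train) & set(val)  # no overlap between the splits
```

With a disjoint split, a validation loss that stays high while the training loss drops is a clearer indication of overfitting.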
Also, I wonder why 'background' has to be labeled during training.
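On the 'background' question: SSD-style detectors reserve class index 0 for an implicit background (no-object) class, so the real classes start at index 1; this is why jetson-inference's labels file begins with BACKGROUND. A tiny sketch of that mapping (the two-entry label list is a hypothetical example):

```python
# SSD-style class mapping: index 0 is the implicit "background" (no object),
# so every real class starts at index 1, matching a labels.txt like:
#   BACKGROUND
#   helipad
labels = ["BACKGROUND", "helipad"]

def class_name(class_id):
    """Map a predicted class index back to its label."""
    return labels[class_id]

print(class_name(1))  # helipad
```

The background class gives the detector an explicit "nothing here" output for anchor boxes that contain no object, which is also why plenty of negative (non-helipad) examples in the training data matter.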
Actually, we need to finish this within the year, so please help as soon as possible.