I want to combine road following and object detection. I want to detect a lot of signs, but I can't solve it myself. Could anyone help me? Thank you very much.
Maybe you can check the sample below to get some ideas:
I have looked at this for a long time, but I can't combine road following and object detection. Could you help me do this? I want to combine them with a TRT model, but I couldn't code it.
In those samples, object following depends on an engine model. I want to use a trt.pth file, like in the road following sample.
Could you explain more about your use case?
Do you want to run the two DNN models with the same input concurrently?
I want to combine road following and object detection, so that during road following I can also detect objects. I used jetson-inference to retrain the object detection model, but I don't know how to combine them.
YES. I have finished the road following tutorial, but I can't find the tutorial about retraining object detection. Where can I get it?
You can follow the tutorial below to retrain a model via transfer learning:
To run two models with the same input, you can create a TensorRT engine for each,
and feed the input data to each engine separately.
```
...
# load the object detection networks
net1 = jetson.inference.detectNet(opt.network1, sys.argv, opt.threshold)
net2 = jetson.inference.detectNet(opt.network2, sys.argv, opt.threshold)

# create video sources
input = jetson.utils.videoSource(opt.input_URI, argv=sys.argv)

# process frames until the user exits
while True:
    # capture the next image
    img = input.Capture()

    # detect objects in the image (with overlay)
    detections1 = net1.Detect(img, overlay=opt.overlay)
    detections2 = net2.Detect(img, overlay=opt.overlay)
...
```
I have a question. Have you tried loading two models on a Jetson Nano B01? Does it support loading a ResNet-18 model and an SSD-MobileNet-v1 model in the same .py file?
net1 = jetson.inference.detectNet(opt.network1, sys.argv, opt.threshold)
net2 = jetson.inference.detectNet(opt.network2, sys.argv, opt.threshold)
Do you mean it is possible to load two models on a Jetson Nano B01?
Did you encounter any error or issue with it?
It’s possible to run two different models in the same process.
But please note that you will need to create a separate TensorRT engine, context, and runtime for each.
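As a rough sketch of what "separate engine, context, runtime for each" means in the TensorRT Python API (the .engine file names here are placeholders, and this only runs on a machine with TensorRT installed; one runtime can in fact be shared, but each model needs its own deserialized engine and execution context):

```
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# each model gets its own engine, deserialized from its own plan file
with open("road_following.engine", "rb") as f:
    engine1 = runtime.deserialize_cuda_engine(f.read())
with open("object_detection.engine", "rb") as f:
    engine2 = runtime.deserialize_cuda_engine(f.read())

# ... and its own execution context for inference
context1 = engine1.create_execution_context()
context2 = engine2.create_execution_context()
```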
What does this mean: "you will need to create a separate TensorRT engine, context, and runtime for each"?
Could you give more help?
Although it isn't jetson_inference, you can find an example with TensorRT in the comment below:
The example serializes a TensorRT engine and deploys it to the given hardware (GPU or DLA).
When more than one hardware target is given, multiple engines are loaded and run in separate threads.
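The one-thread-per-engine pattern can be sketched in pure Python with stand-in objects (the `DummyEngine` class and its `infer` method are hypothetical placeholders for a deserialized TensorRT engine; a real implementation would launch inference on the GPU or DLA inside `infer`):

```python
import threading

# Hypothetical stand-in for a deserialized engine: each "engine" owns its
# own resources, mirroring the rule that every model needs its own TensorRT
# engine and execution context.
class DummyEngine:
    def __init__(self, name, scale):
        self.name = name
        self.scale = scale  # stands in for the model's weights

    def infer(self, frame):
        # a real engine would run inference on the GPU/DLA here
        return [x * self.scale for x in frame]

def worker(engine, frame, results):
    results[engine.name] = engine.infer(frame)

# two independent "engines" fed the same input frame, one thread each
engines = [DummyEngine("gpu", 2), DummyEngine("dla", 3)]
frame = [1, 2, 3]
results = {}
threads = [threading.Thread(target=worker, args=(e, frame, results))
           for e in engines]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

Both threads read the same input frame but write to separate result slots, so no locking is needed for this simple case.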
If you look through the DeepStream examples, you can find an example that uses 3 models in one process:
model 1: car, person, bicycle, road signs
model 2: car color
model 3: car vendor
The example can, for instance, identify a white Chevy truck.
This example definitely runs on the Jetson Nano developer kit, including the 2GB model.
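In deepstream-app terms, such a cascade is one primary detector plus two secondary classifiers in the application config; a sketch following the deepstream-app config-file conventions (the `config-file` paths are placeholders for the per-model inference configs):

```
[primary-gie]
enable=1
# primary detector: car, person, bicycle, road sign
config-file=config_infer_primary.txt

[secondary-gie0]
enable=1
# secondary classifier: car color, run on the primary detector's outputs
operate-on-gie-id=1
config-file=config_infer_secondary_carcolor.txt

[secondary-gie1]
enable=1
# secondary classifier: car make/vendor
operate-on-gie-id=1
config-file=config_infer_secondary_carmake.txt
```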
I want to take a look at it.
Could you give me the link?
Is it this one? "GitHub - deepstreamIO/deepstream.io: deepstream.io server"
You can use the DeepStream SDK through SDK Manager,
or install it via the deb/tar packages.
A Docker container is available as well.
Follow 'Get started' at this link:
Fine. Thank you very much!
I will go to learn it.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.