Hi Dusty. Builds/tests perfect. Thanks so much.
I tried to run the Python image recognition DIY example in a Jupyter Notebook, but got errors about --network being a conflicting option string. What's the guideline for running it successfully in Jupyter Notebook? Thanks.
Hi @amirtousa, I haven't personally tried running it in Jupyter Notebook - are you trying to run the code itself in a cell, or to call the script? What are the contents of the cell you are running, and what error does it throw? Thanks!
Hi Dusty, I think I need to alter the code a little bit to set the video input, output, network, etc. Will do that and let you know. Good exercise for me.
Have another question. There is a motion-trace=true flag in the GStreamer object properties which causes a visual motion trace to be drawn in the output SSD video. It seems to be set to false by default. How do I set it to true? And how do I access/set the detailed properties of the objects in the Detections list from detectnet.py? A Python sample would be extremely helpful if available. Thanks so much, Dusty.
Which GStreamer element are you looking at with the motion-trace property? I don't see it for the omxh264dec, omxh264enc, or nvarguscamerasrc elements. Maybe it belongs to another element?
For the detection results, see the list of jetson.inference.detectNet.Detection members here:
https://rawgit.com/dusty-nv/jetson-inference/dev/docs/html/python/jetson.inference.html#detectNet
You can access all of those, and the ones you can set are: Left, Right, Top, Bottom, ClassID, Confidence, Instance
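To make that concrete, here is a small sketch of reading the members and adjusting the settable box fields. Note this uses a plain stand-in object so it runs anywhere; in a real script the detection list comes from `net.Detect(img)`, and the helper names (`describe`, `clamp_box`) are my own, not part of jetson-inference.

```python
# Sketch: reading and adjusting jetson.inference.detectNet.Detection fields.
# SimpleNamespace stands in for a real Detection so the logic is
# self-contained; in a real script the list comes from net.Detect(img).
from types import SimpleNamespace

def describe(det):
    """Format the readable members of a detection."""
    return (f"class {det.ClassID} ({det.Confidence:.2f}): "
            f"L={det.Left} T={det.Top} R={det.Right} B={det.Bottom}")

def clamp_box(det, width, height):
    """Clamp the settable box members to the image bounds."""
    det.Left   = max(0, det.Left)
    det.Top    = max(0, det.Top)
    det.Right  = min(width,  det.Right)
    det.Bottom = min(height, det.Bottom)
    return det

det = SimpleNamespace(ClassID=1, Confidence=0.87,
                      Left=-5, Top=10, Right=1300, Bottom=700, Instance=0)
clamp_box(det, 1280, 720)
print(describe(det))
```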
The motion-detection element I mean is based on this link: GStreamer Background Subtraction Camera-Based Motion Detection Plugin - RidgeRun Developer Wiki
I am trying to buffer the coordinates of the detected objects from frame to frame so I can draw a movement trajectory. If I understand it right, the link above says that setting motion-trace=true makes this happen in the output display while it is running and detecting objects: it connects and draws a path for obj-1, for instance, throughout all frames until the stream ends.
That is for a custom plugin element that RidgeRun authored called gst-motiondetect - it isn't a standard property of all GStreamer elements. You could try installing/testing it with the example GStreamer pipelines they provided, though.
Thanks so much, Dusty. Got it, I thought there was a gap. Any suggestion on how to achieve the same "trace" functionality with standard GStreamer elements? Connect obj-1 in frame 1 to the same obj-1 in frame 2, then frame 3, and so on; the same particular obj-1, repeated for the other objects. Appreciate your help.
Lately I have received a number of emails mentioning new versions of various kinds, so I would just like to ask: what is the recommended way to upgrade from a previous version of "Hello AI World"?
I have two Nanos running fine 24/7 that have been working for a long time. I'm using the "ssd-inception-v2" model. When I installed it, I built the project from source.
- is it just a matter of following the same build procedure as before, and will that give me the latest?
- should I remove or rename the old cloned repo first?
- would upgrading give me any benefits (better and more models, improved performance)?
- do you now have YOLO also implemented?
Thanks in advance,
Best regards, Walter
Hi @amirtousa, DeepStream SDK provides an object tracker which may provide the functionality you are looking for. DeepStream is also based on GStreamer, so may be easier to integrate other elements like that gst-motiondetect one.
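If DeepStream feels like overkill, a lightweight alternative is to associate the detectNet boxes frame-to-frame yourself by nearest centroid and accumulate each object's path. The sketch below is pure Python and all names in it (`CentroidTracker`, `update`, the 75-pixel threshold) are my own illustration, not a jetson-inference or DeepStream API; boxes would come from the Left/Top/Right/Bottom members of the detections.

```python
# Sketch: a minimal nearest-centroid tracker for drawing trajectories.
# Boxes are (left, top, right, bottom) tuples, e.g. taken from detectNet
# detections; a real tracker would also handle occlusion and track expiry.
import math

class CentroidTracker:
    def __init__(self, max_dist=75.0):
        self.max_dist = max_dist   # max centroid jump between frames
        self.next_id = 0
        self.tracks = {}           # id -> list of centroids (the trajectory)

    @staticmethod
    def _centroid(box):
        l, t, r, b = box
        return ((l + r) / 2.0, (t + b) / 2.0)

    def update(self, boxes):
        """Match each box to the nearest existing track, else start a new one."""
        assigned = set()
        for box in boxes:
            cx, cy = self._centroid(box)
            best_id, best_d = None, self.max_dist
            for tid, path in self.tracks.items():
                if tid in assigned:
                    continue
                px, py = path[-1]
                d = math.hypot(cx - px, cy - py)
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:
                best_id = self.next_id
                self.next_id += 1
                self.tracks[best_id] = []
            self.tracks[best_id].append((cx, cy))
            assigned.add(best_id)
        return self.tracks

tracker = CentroidTracker()
tracker.update([(0, 0, 10, 10)])    # obj-1 appears
tracker.update([(5, 5, 15, 15)])    # same obj-1, moved slightly
print(tracker.tracks)               # {0: [(5.0, 5.0), (10.0, 10.0)]}
```

Each trajectory in `tracks` could then be drawn onto the output frames, e.g. with OpenCV's polyline drawing, to get a trace similar to what gst-motiondetect produces.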
Hi @walter.krambring, yes, the build procedure is the same. You would just need to re-clone to download the latest source, or do a git pull origin master on your existing install.
It might be advisable to rename your existing install to keep a backup, especially if you have made any modifications or changes to the project. Then you can clone the repo again to get the latest sources.
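The backup-then-reclone procedure can be sketched as the commands below. The ~/jetson-inference path is an assumption - adjust it to wherever you originally cloned the repo; the build steps follow the project's standard build-from-source instructions.

```shell
# keep a backup of the existing install (preserves local modifications)
mv ~/jetson-inference ~/jetson-inference.bak

# clone the latest sources and rebuild
git clone --recursive https://github.com/dusty-nv/jetson-inference ~/jetson-inference
cd ~/jetson-inference
mkdir build && cd build
cmake ../
make -j$(nproc)
sudo make install
sudo ldconfig
```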
See the new changes and updates implemented here: https://github.com/dusty-nv/jetson-inference/blob/master/CHANGELOG.md#july-15-2020
I have not implemented YOLO. I prefer to stick with the SSD-based detection architecture since the workflow and performance are smooth (i.e. train in PyTorch onboard the Jetson → export to ONNX → run inference with TensorRT).
Thank you very much for the detailed guidelines!
A bit off topic maybe, but in the meantime I have installed and tested the YOLOv4 setup described by JK Jung here: GitHub - jkjung-avt/tensorrt_demos: TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet
It works pretty well and utilizes the GPU. The accuracy also seems improved compared with the previous YOLOv3. I have some tricky images that other models have problems with, but detection is now good even with yolov4-tiny-416, whereas the full model was previously required to get the same accuracy.
I guess many people, or at least some, use the Jetson in home video surveillance systems, like I do. It would be great to have a model trained on images typical of such camera devices, where the viewing angle is a bit from above rather than horizontal; security cameras are normally installed a bit higher up from the ground.
Anyway, thanks for the great support and great products!
Best regards, Walter
Hello. How do I know what precision my model was trained to in PyTorch?
And how can I make the graph?
IIRC I made that chart manually by extracting the accuracies from the terminal. These get printed out at the end of each training epoch (acc@1).
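That manual extraction is easy to script. The sketch below pulls the per-epoch Acc@1 values out of a saved training log; the log-line format here is an assumption based on the output style of the PyTorch imagenet training example (e.g. " * Acc@1 74.320 Acc@5 91.870"), so adjust the regex if your log looks different.

```python
# Sketch: pull per-epoch Acc@1 values out of a saved training log.
# The " * Acc@1 ..." line format is an assumption; adapt the regex
# to whatever your training script actually prints.
import re

ACC_RE = re.compile(r"\*\s*Acc@1\s+([0-9.]+)")

def extract_acc1(lines):
    """Return the Acc@1 value from each epoch-summary line, in order."""
    results = []
    for line in lines:
        m = ACC_RE.search(line)
        if m:
            results.append(float(m.group(1)))
    return results

log = [
    "Epoch: [0][490/500]  Loss 1.8432",
    " * Acc@1 52.310 Acc@5 80.120",
    "Epoch: [1][490/500]  Loss 1.2210",
    " * Acc@1 63.740 Acc@5 87.450",
]
print(extract_acc1(log))   # [52.31, 63.74]

# To chart it (matplotlib assumed installed):
# import matplotlib.pyplot as plt
# plt.plot(extract_acc1(open("training.log")))
# plt.xlabel("epoch"); plt.ylabel("Acc@1"); plt.show()
```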
Hi @rodrigoavalos, imageNet does its own command-line parsing, so don't use the argparse module for that. imageNet doesn't receive the command line and default options in that case.
Alternatively you can code them like so:
net = jetson.inference.imageNet(opt.network, argv=['--model=models/chest-radiography/Resnet18-70/resnet18.onnx', '--labels=data/chest-x-ray/labels.txt', '--input-blob=input_0', '--output-blob=output_0'])
Unfortunately there isn't an API for that, sorry... I will add it to my todo list.
There currently is not, as there isn't an API for getting the other results. That text is printed out in the low-level code inside c/imageNet.cpp. You are welcome to modify it to return the top-K classification results - I just haven't had time to do it, sorry about that.
In the tutorial Coding Your Own Image Recognition Program (Python), how can I load my custom model?
# parse the command line
parser = argparse.ArgumentParser()
parser.add_argument("filename", type=str, help="filename of the image to process")
parser.add_argument("--network", type=str, default="googlenet", help="model to use, can be: googlenet, resnet-18, etc.")
args = parser.parse_args()
# load an image (into shared CPU/GPU memory)
img = jetson.utils.loadImage(args.filename)
# load the recognition network
net = jetson.inference.imageNet(args.network)
net = jetson.inference.imageNet(opt.network, argv=['--model=models/chest-radiography/Resnet18-70/resnet18.onnx', '--labels=data/chest-x-ray/labels.txt', '--input-blob=input_0', '--output-blob=output_0'])
I did it myself and got an error. Please help @dusty_nv
What error did you get? You might want to try --input_blob and --output_blob instead (note the underscores), but I think it looks fine. Maybe it can't find the files at that path?