I am trying to create a pedestrian detection app on a Jetson Nano 4GB with jetson-inference, using the ped-100 network.
Here is one of my screenshots. As you can see, it can't detect that pedestrian. My threshold is 0.5. How can I make my app detect views like this?
Do you have a dataset for your use case?
If so, you can retrain a model with the guidance below.
As I see from "What is the Dataset in jetson-inference Ped-100 - #3 by dusty_nv", there is no distributed dataset. Even if I create my own dataset, how can I retrain ped-100?
Hi @hakanulusoy32, that model was based on an older, outdated detection DNN architecture. These days I would recommend looking into the production-quality pretrained models from the TAO Toolkit, like the PeopleNet model:
You can run these models with DeepStream or Triton Inference Server.
Alternatively, if you wish to use jetson-inference, you can use the SSD-Mobilenet-v2 model that comes with it (it has a person class from the MS COCO dataset).
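For what it's worth, the person-class filtering that approach implies can be sketched in plain Python. This is a minimal, hypothetical sketch: the `Detection` dataclass below is a stand-in for jetson.inference's detection objects (only the `ClassID` and `Confidence` fields are assumed), and the real detectNet call is shown only in the comments.

```python
# Minimal sketch of filtering detectNet-style results to the "person" class.
# On the Nano, the real calls would look roughly like (assuming jetson-inference
# is installed):
#   net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
#   detections = net.Detect(img)
# Below, a plain-Python stand-in shows the post-processing logic only.

from dataclasses import dataclass

PERSON_CLASS_ID = 1  # "person" in the MS COCO label map (assumption; check your labels file)

@dataclass
class Detection:
    ClassID: int       # mirrors the jetson.inference detection field names
    Confidence: float

def filter_people(detections, threshold=0.5):
    """Keep only person-class detections at or above the confidence threshold."""
    return [d for d in detections
            if d.ClassID == PERSON_CLASS_ID and d.Confidence >= threshold]

# Example: one person above threshold, one non-person, one person below threshold.
detections = [Detection(1, 0.91), Detection(3, 0.88), Detection(1, 0.42)]
people = filter_people(detections, threshold=0.5)
print(len(people))  # 1
```

Lowering `threshold` here (or via detectNet's threshold argument) is the usual first thing to try when pedestrians are being missed, at the cost of more false positives.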
Thank you for the advice.
I have tried SSD-Mobilenet-v2 for pedestrians and the results were not good; ped-100 gives better results.
How many FPS can I get with a 480p RTSP stream while running PeopleNet on a Jetson Nano 4GB?
I found this table (Overview — TAO Toolkit 3.0 documentation), which lists 11 FPS, but it shows INT8 precision for the Jetson Nano. Is it possible to use PeopleNet at INT8 precision on a Jetson Nano?
I also haven't found information on whether reducing the resolution to 360p or lower would increase the FPS.
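One hedged note on the resolution question: DetectNet_v2-based models like PeopleNet resize every frame to a fixed network input (commonly 960×544, though you should check the model card), so lowering the stream resolution mainly reduces decode/preprocess cost rather than inference cost. A quick back-of-the-envelope pixel-count comparison, assuming 16:9 frames:

```python
# Rough pixel-count comparison for the decode/resize stage only.
# Assumes 480p = 854x480 and 360p = 640x360 (16:9 aspect ratio).
# The network input resolution is fixed by the model, so per-frame
# inference time should stay roughly constant regardless of source size.

res_480p = 854 * 480   # 409,920 pixels
res_360p = 640 * 360   # 230,400 pixels

ratio = res_360p / res_480p
print(f"360p has {ratio:.0%} of the pixels of 480p")
```

So 360p roughly halves the pixels the decoder and scaler must touch, which may help a little on a Nano, but it shouldn't change the 11 FPS inference figure much.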
Please give me a tutorial link like jetson-inference if exists
Jetson Nano doesn’t support INT8, so I’m guessing that table means FP16 for Nano.
There are instructions for running PeopleNet model with DeepStream on this page - then you can check the performance:
Here are Python samples for DeepStream as well: https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
Since you are using RTSP streams, DeepStream may be a good fit for you since it is frequently used with RTSP inputs/outputs.
I am getting the following warning on my Jetson Nano 4GB while testing deepstream_test1.py:
WARNING: INT8 not supported by platform. Trying FP16 mode.
How can I make it supported? Which one is correct?
Another problem: I can't get any output when I run deepstream_python_apps/apps/deepstream-test3.py with an RTSP source.
Hi @hakanulusoy32, the Jetson Nano doesn't support INT8 (FP16/FP32 only).
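That warning is harmless, but you can avoid the fallback by requesting FP16 directly in the nvinfer element's config file. A hedged sketch, assuming the standard DeepStream `nvinfer` keys (`network-mode`: 0 = FP32, 1 = INT8, 2 = FP16):

```ini
# Fragment of a DeepStream nvinfer config (e.g. the pgie config file that
# deepstream_test1.py loads). network-mode=2 requests FP16 up front,
# which is what the Nano falls back to anyway.
[property]
network-mode=2
```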
Please post a new topic about your DeepStream-related questions or post to the DeepStream SDK forum:
Sorry for being a noob, but I am wondering: is there a possibility that I can use PeopleNet like jetson-inference? jetson-inference is easy to use; I could easily implement it in my Python code. Is that possible with PeopleNet as well?
Oh sorry! That was a typo :-/ (fixed that above)
Jetson Nano / TX1 / TX2 do not support INT8. INT8 inferencing isn’t supported until Xavier.
Unfortunately, jetson-inference doesn't yet support TAO models like PeopleNet, although I will look into it. You can run those models through DeepStream or with tao-toolkit-triton-apps.