Yes, I followed the instructions indicated.
The output in the attached file above is the entire output of the application; after it, the application does nothing, as if it were hanging.
For convenience, I moved the command lines into scripts; the files are attached.
Perhaps there is a problem with the model, or am I doing something wrong?
p.s. I have not edited the configs in ‘deepstream_tao_apps/configs’.
buildGazeApp.sh (456 Bytes)
runGazeApp.sh (377 Bytes)
sample_gazenet_model_config.txt (228 Bytes)
I can run it well on my Xavier.
Part of the log is shown below. You can check whether gazenet_facegrid.etlt_b8_gpu0_fp16.engine is generated on your side.
...
In cb_newpad
###Decodebin pick nvidia decoder plugin.
In cb_newpad
Deserializing engine from: ./gazeinfer_impl/../../../../models/gazenet/gazenet_facegrid.etlt_b8_gpu0_fp16.engine
The logger passed into createInferRuntime differs from one already assigned, 0x55c6961eb0, logger not updated.
Gaze: -6.359375 -26.625000 16.859375 0.023422 0.079285
Gaze: -6.828125 -27.656250 16.859375 0.023438 0.079712
Frame Number = 0 Face Count = 1
...
Unfortunately, that file is missing on my side:
root@x-mansion:~/ssd/DeepStream# find ./ -name "*gazenet*"
./deepstream_tao_apps/configs/gaze_tao/sample_gazenet_model_config.txt
./deepstream_tao_apps/models/gazenet
./deepstream_tao_apps/models/gazenet/gazenet_facegrid.etlt
root@x-mansion:~/ssd/DeepStream# find ./ -name "*.engine"
./deepstream_tao_apps/models/gesture/gesture.etlt_b8_gpu0_int8.engine
./deepstream_tao_apps/models/faciallandmark/faciallandmarks.etlt_b32_gpu0_int8.engine
./deepstream_tao_apps/models/faciallandmark/facenet.etlt_b1_gpu0_int8.engine
I ran the model download script after first specifying the full path to the tao-converter, following step #2 of the Download section in deepstream_tao_apps/apps/tao_others at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub:
export TAO_CONVERTER=/home/professorx/ssd/DeepStream/tao-converter-jp46-trt8.0.1.6/tao-converter
export MODEL_PRECISION=fp16
cd /home/professorx/ssd/DeepStream/deepstream_tao_apps
./download_models.sh
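A quick way to double-check that the converter path is really visible to the script (plain shell, nothing repo-specific) would be:
# Make sure the converter path is exported and points to an executable file
echo "TAO_CONVERTER=$TAO_CONVERTER"
test -x "$TAO_CONVERTER" && echo "tao-converter found" || echo "tao-converter missing or not executable"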
Could you try again without any script?
I re-ran the script after adding the export of the tao-converter path to it, and this time it worked.
Perhaps, when the application is started for the first time, it matters that the path to the tao-converter has been exported into the environment?
export CUDA_VER=10.2
export TAO_CONVERTER=/home/professorx/ssd/DeepStream/tao-converter-jp46-trt8.0.1.6/tao-converter
export MODEL_PRECISION=fp16
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs
cd /home/professorx/ssd/DeepStream/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app
./deepstream-gaze-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt file:///home/professorx/gaze_video.mp4 ./gaze
#./deepstream-gaze-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt file:///dev/video0 ./gaze
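To confirm that the first run really generated the engine, a simple check before launching the app (engine name taken from the log above, path relative to the app directory) is:
# Confirm the FP16 gaze engine now exists
ls -lh ../../../models/gazenet/gazenet_facegrid.etlt_b8_gpu0_fp16.engine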
Yes, I suggest doing that. I can also see it is mentioned on GitHub:
export TAO_CONVERTER=the file path of tao-converter
Ok, thanks.
I have two additional questions:
- Is it possible in these examples (gaze, emotion, body pose and gesture detection) to capture video from a webcam? As far as I understood from the example source code, an H.264 video file is expected as input, while a webcam provides YUV and MJPG formats.
When I tried it head-on, i.e. by specifying ‘/dev/video0’ as the source, I received an error saying the device could not be opened (a rough gst-launch sketch of the webcam capture I have in mind is below).
- When combining several detectors into a single application (I already tried similar manipulations in the “TAO CV Pipeline” application, whose source code was attached in the first message of this topic), do the detectors have to run in parallel, or can the captured video instead be passed sequentially to each of the detectors used?
For example, I feed the video to the gaze detector, take the detected face image from it, and then pass that to the emotion detector, in order to avoid the situation where both detectors do the same work and needlessly load the Jetson’s CPU and GPU (similarly for the body pose and gesture detectors).
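A rough gst-launch-1.0 sketch of the webcam capture I have in mind (an assumption, not tested: the camera at /dev/video0 delivers raw YUY2 frames, and the resolution/framerate are placeholders) would be:
# Webcam -> system-memory YUY2 -> NVMM NV12 -> nvstreammux, ending in a fakesink
# just to verify that capture and conversion work; in the real app this branch
# would have to replace the file/uridecodebin source.
gst-launch-1.0 nvstreammux name=mux batch-size=1 width=640 height=480 ! fakesink \
  v4l2src device=/dev/video0 ! 'video/x-raw,format=YUY2,width=640,height=480,framerate=30/1' ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0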
For 1), refer to using RTSP as mentioned in The LPD and LPR models in the TAO tool do not work well - #22 by Morganh (a placeholder sketch of the changed run command follows below).
For 2), do you mean the pipeline runs the gaze detector first and then runs the emotion detector?
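If the same approach applies to this app as well (it already takes a URI source, so an rtsp:// address should be accepted the same way a file:// one is; the address below is only a placeholder), the run command for 1) would change only in its source argument:
# Placeholder RTSP stream instead of the local file
./deepstream-gaze-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt rtsp://<camera-ip>:8554/stream ./gaze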
- Sorry, I didn’t have time to test your proposed code for streaming via the RTSP protocol.
- I wanted to merge the pipelines as shown in the diagram in the attached image (the gaze and emotion detectors, for example).
p.s. I still need to test the detectors on videos recorded with a webcam.
p.p.s. Perhaps it would be better if I created a separate topic for the discussion and questions about deepstream_tao_apps?
Yes, please create a new topic about the discussion.