Please provide complete information as applicable to your setup.
• **Hardware Platform (Jetson / GPU)**: Jetson
• **DeepStream Version**: 6.1.1
• **JetPack Version (valid for Jetson only)**: L4T 35.1
• **TensorRT Version**: 8.4.1.5
• **NVIDIA GPU Driver Version (valid for GPU only)**:
• **Issue Type (questions, new requirements, bugs)**: questions
My development environment is a Jetson NX (DeepStream 6.1.1), and I want to see how image classification works as a PGIE.
I used deepstream-test1-app for testing, with an image classification network as the primary model: a ResNet18 trained with TAO, which had no issues on the test set. I put the model in the directory and configured the file as follows.
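The original configuration file did not come through with the post. For comparison, a typical gst-nvinfer config for a TAO classification model running as a primary GIE might look like the sketch below; the key, blob names, input dims, and preprocessing values are assumptions that must match your own training setup, not your actual file:

```ini
[property]
gpu-id=0
# TAO classification (Caffe-style BGR preprocessing) -- values are assumptions
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
tlt-model-key=YOUR_KEY
tlt-encoded-model=final_model.etlt
model-engine-file=final_model.etlt_b16_gpu0_fp16.engine
labelfile-path=labels.txt
infer-dims=3;224;224
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax
batch-size=16
network-mode=2
# network-type=1 tells nvinfer this is a classifier, process-mode=1 runs it on full frames
network-type=1
process-mode=1
classifier-threshold=0.2
```

In particular, `network-type=1` and `process-mode=1` are what distinguish a classifier PGIE from the detector config used by the stock deepstream-test1-app.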
The program runs, but frame_meta->obj_meta_list is empty.
I want to know:
(1) How to obtain classification results from frame_meta.
(2) Is there an error in the configuration file?
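For question (1), classification results are normally read in a pad probe by walking the metadata hierarchy batch → frame → object → classifier → label. A minimal sketch (standard NvDsMeta API; the probe placement on an OSD sink pad is an assumption from the reference apps):

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Pad probe attached downstream of nvinfer (e.g. on nvdsosd's sink pad).
 * Prints every classification label attached to every object in the batch. */
static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* Classifier output hangs off the object as NvDsClassifierMeta. */
      for (NvDsMetaList *l_cls = obj_meta->classifier_meta_list; l_cls;
           l_cls = l_cls->next) {
        NvDsClassifierMeta *cls_meta = (NvDsClassifierMeta *) l_cls->data;

        for (NvDsMetaList *l_label = cls_meta->label_info_list; l_label;
             l_label = l_label->next) {
          NvDsLabelInfo *label = (NvDsLabelInfo *) l_label->data;
          g_print ("frame %d: class %u (%s), prob %.3f\n",
              frame_meta->frame_num, label->result_class_id,
              label->result_label, label->result_prob);
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```

Note this traversal assumes objects exist in obj_meta_list; if a full-frame classifier PGIE leaves that list empty in your DeepStream version, the results may be attached elsewhere (or not attached at all), which is worth checking before debugging the config.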
My model was trained with TAO, producing final_model.etlt, which tests fine in the TAO environment. With DeepStream, final_model.etlt_b16_gpu0_fp16.engine was generated, but at this point the results in DeepStream are not correct.
I want to know:
(1) Does TAO's environment have to match the Jetson's? Currently TAO runs on Ubuntu 18.04 and the Jetson on Ubuntu 20.04.
(2) Are there other steps in the process that I might have missed?
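On question (1): TensorRT engines are not portable, so the engine must be built on the Jetson itself (which DeepStream does automatically from the .etlt, or which can be done explicitly with tao-converter). A hedged sketch of an explicit conversion; the key, dims, and output blob name are assumptions that must match your training run:

```shell
# Run on the Jetson so the engine matches its TensorRT 8.4.1.5.
# -k : encryption key used during TAO training (assumption: $KEY is set)
# -d : input dims, must equal the training resolution (assumption: 3,224,224)
# -o : output node of a TAO classification model (assumption)
./tao-converter final_model.etlt \
    -k "$KEY" \
    -d 3,224,224 \
    -o predictions/Softmax \
    -t fp16 \
    -m 16 \
    -e final_model.etlt_b16_gpu0_fp16.engine
```

The Ubuntu 18 vs. 20 mismatch between the training host and the Jetson is not itself a problem, as long as the engine is generated on the target device.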
The TAO version I used is 3.22, following the classification.ipynb notebook; all steps complete normally, but I can't get results in DeepStream 6.1.1 or DeepStream 6.0. Do you know the specific reason? Is it necessary to refer to multi_task_tao?
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.