This post covers four sub-topics. Each is listed as a "Text" excerpt from the DeepStream User Guide (ID: DU-08633-001_v03.1) followed by a question.
Text: calls the “execute” function of each module serially when executing DeviceWorker->start()
DeviceWorker will be in a suspended state until the user pushes video packets into it.
Question: Should Module::execute() be called only after both DeviceWorker->start() and DeviceWorker->pushPacket() have been called? Since modules within one Flexible Pipeline have upstream/downstream relationships, Module::execute() should be called serially, one module at a time, in the same order the modules were added to the DeviceWorker. After the last module's execute function has been called, will the pipeline loop back and call the first module again?
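To make the question concrete, here is a standalone mock of the scheduling I am assuming. This is NOT the real DeepStream API: MockModule, MockDeviceWorker, and runPackets are illustrative names, and the loop encodes my guess that each pushed packet drives one serial pass over the modules in add order.

```cpp
#include <string>
#include <vector>

// Standalone mock of my understanding -- NOT the real DeepStream API; the
// class names and the scheduling loop here are assumptions for discussion.
struct MockModule {
    std::string name;
    void execute(std::vector<std::string>& log) { log.push_back(name); }
};

struct MockDeviceWorker {
    std::vector<MockModule> modules;   // kept in the order they were added

    // Assumption under test: start() leaves the worker suspended, and each
    // pushed packet drives one serial pass of execute() over the modules in
    // add order, restarting from the first module for every new packet.
    std::vector<std::string> runPackets(int numPackets) {
        std::vector<std::string> log;
        for (int p = 0; p < numPackets; ++p)
            for (auto& m : modules) m.execute(log);
        return log;
    }
};
```

With modules {decode, infer, parser}, two packets would produce the call order decode, infer, parser, decode, infer, parser under this assumption. Is that the actual behavior?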
Text: IStreamTensor should be created with createStreamTensor.
Question: After reading sample/nvDecInfer_detection/parserModule_resnet18.h, my guess is: an IStreamTensor object should be created and destroyed by the upstream (producing) module, and a downstream (consuming) module should only be allowed to read data through the "const std::vector<IStreamTensor *>& in" parameter. Is that correct?
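The ownership model I am guessing at, sketched with mock types (MockStreamTensor, upstreamProduce, downstreamConsume are my own stand-ins, not the IStreamTensor interface):

```cpp
#include <memory>
#include <utility>
#include <vector>

// Mock sketch of the guessed ownership model -- not the real IStreamTensor
// API; the type and method names below are illustrative only.
class MockStreamTensor {
public:
    explicit MockStreamTensor(std::vector<float> data) : data_(std::move(data)) {}
    const std::vector<float>& getConstCpuData() const { return data_; }  // read-only view
private:
    std::vector<float> data_;
};

// Upstream module: creates and owns the tensor it produces; destruction
// happens automatically when the unique_ptr goes out of scope upstream.
std::unique_ptr<MockStreamTensor> upstreamProduce() {
    return std::make_unique<MockStreamTensor>(std::vector<float>{1.f, 2.f});
}

// Downstream module: receives tensors by const pointer and may only read
// them, mirroring the "const std::vector<IStreamTensor *>& in" parameter.
float downstreamConsume(const std::vector<const MockStreamTensor*>& in) {
    float sum = 0.f;
    for (const auto* t : in)
        for (float v : t->getConstCpuData()) sum += v;
    return sum;
}
```

If this is right, a downstream module never calls createStreamTensor for its inputs and never frees them; it only reads.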
Text: Be sure to pass or update the Trace information, which includes information about the frames index, video index, etc.
Question: In the DeepStream sample code, every customer module is downstream of the inference module. If a customer module were instead downstream of the ColorSpaceConvertor module, would TRACE_INFO::boxInfo be meaningless? Also, in sample/nvDecInfer_detection/parserModule_resnet18.h, ParserModule::execute() appears to maintain the BBOXS_PER_FRAME structure held in the output tensor's CPU data. Where is the code that passes or updates the Trace information, and which of TRACE_INFO and BBOXS_PER_FRAME should be used to carry the frame and video indices?
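For clarity, this is what I imagine "pass or update the Trace information" means: each module forwards the frame/video indices alongside its payload. TraceInfo, Box, FrameBoxes, and parseFrame here are stand-ins of my own, not the real TRACE_INFO or BBOXS_PER_FRAME layouts.

```cpp
#include <vector>

// Illustrative stand-ins -- not the real TRACE_INFO / BBOXS_PER_FRAME layouts.
struct TraceInfo { int videoIndex; int frameIndex; };
struct Box { float x, y, w, h; };

// Output record pairing the detection payload with the forwarded trace data,
// so a consumer can tell which video/frame each box list belongs to.
struct FrameBoxes {
    TraceInfo trace;
    std::vector<Box> boxes;
};

// A parser-like step: the trace record is forwarded unmodified while the
// module attaches its own payload; a transforming module would update it.
FrameBoxes parseFrame(const TraceInfo& inTrace, const std::vector<Box>& detections) {
    return {inTrace, detections};
}
```

Is this forwarding the responsibility of each module's execute(), or does the framework propagate trace records automatically?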
Text: For the input of module, you can use functions with the “get” prefix to acquire the information and data.
Question: Suppose we develop a customer module whose input module comes from a third party (like the modules embedded in DeepStream). When we access the input tensor, should we first call getMemoryType() to determine whether the data resides in CPU or GPU memory, and then call getCpuData() or getGpuData() accordingly? Or should there be a document, published with the third-party module, describing the full and detailed layout of its output tensors?
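The defensive access pattern I am asking about, as a standalone mock. The enum values and method names are my guesses modeled on the "get"-prefix convention, not the documented IStreamTensor interface, and the GPU branch stubs out what would be a device-to-host copy in a real module.

```cpp
#include <utility>
#include <vector>

// Mock of the guessed access pattern -- enum values and method names are
// assumptions, not the documented IStreamTensor interface.
enum class MemType { CPU, GPU };

class MockTensor {
public:
    MockTensor(MemType t, std::vector<float> d) : type_(t), data_(std::move(d)) {}
    MemType getMemoryType() const { return type_; }
    const float* getCpuData() const {
        return type_ == MemType::CPU ? data_.data() : nullptr;
    }
    // In a real pipeline this would return a device pointer; here it only
    // models "data that must be copied to host before use".
    const float* getGpuData() const {
        return type_ == MemType::GPU ? data_.data() : nullptr;
    }
private:
    MemType type_;
    std::vector<float> data_;
};

// Check the memory type first, then pick the matching accessor; a real
// module would cudaMemcpy(DeviceToHost) before reading GPU data.
float firstElement(const MockTensor& t) {
    if (t.getMemoryType() == MemType::CPU)
        return t.getCpuData()[0];
    return t.getGpuData()[0];  // stub for the device-to-host copy path
}
```

Even if this runtime check is the intended pattern, it only tells us where the bytes live, not what they mean, hence the request for per-module output-tensor documentation.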