Executing a DriveWorks sample without CUDA and TensorRT

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.10.0
[x] DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
[x] Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
[x] DRIVE AGX Orin Developer Kit (not sure of its part number)
other

SDK Manager Version
2.1.0
[x] other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Issue Description
Hi team, the object detector tracker sample uses CUDA and TensorRT. Is there any way to execute the code without CUDA and TensorRT? I want my application to run on the CPU only.

Dear @akshay.tupkar,
Are you asking whether there is a switch or parameter in the code that makes the sample run inference on the CPU? If so, that is not possible: inference always runs on the GPU/DLA through the DW DNN framework.

If you want to add custom code to the sample, you can copy the preprocessed input data back to the CPU as shown below and feed it into your own CPU implementation of the network:

CHECK_DW_ERROR(dwDataConditioner_prepareDataRaw(m_dnnInputDevice, &rgbaImage, 1, &m_detectionRegion,
                                                cudaAddressModeClamp, m_dataConditioner));

// Copy the preprocessed input back to the CPU
CHECK_CUDA_ERROR(cudaMemcpy(m_dnnInputHost, m_dnnInputDevice,
                            sizeof(float32_t) * m_totalSizeInput, cudaMemcpyDeviceToHost));

// Add CPU inference code here