Blank display when object detection is used with multi-threading

new_multithread.py (5.5 KB)

Hello,

I am currently working on my master’s thesis, which involves a Jetson Nano and a Universal Robot. My goal is to communicate with the Universal Robot over Modbus TCP, for which I have created two threads.

Initially, I used the detection script without any threading and was able to see the display with bounding boxes using the following code:

from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput, cudaToNumpy
from time import time

width = 1280
height = 720
net = detectNet(model="/home/bsh/jetson-inference/python/training/detection/ssd/models/SSD-100epochs/ssd-mobilenet.onnx",
                labels="/home/bsh/jetson-inference/python/training/detection/ssd/models/SSD-100epochs/labels.txt",
                input_blob="input_0", output_cvg="scores", output_bbox="boxes", threshold=0.5)

# the output must be created before the input, otherwise the video is not shown
display = videoOutput("display://0", argv=["--width="+str(width), "--height="+str(height)])
camera = videoSource("/home/bsh/jetson-inference/python/training/detection/ssd/data/New_Annotation/output2.mp4",
                     argv=["--input-width="+str(width), "--input-height="+str(height)])

# initialize the FPS filter state once, before the loop;
# resetting it inside the loop would prevent the running average from accumulating
timeMark = time()
fpsFilter = 0

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)
    for detection in detections:
        classID = detection.ClassID
        item = net.GetClassDesc(classID)
        print(item, classID)
    dt = time() - timeMark
    timeMark = time()
    fps = 1/dt
    fpsFilter = .95*fpsFilter + .05*fps
    display.Render(img)
    display.SetStatus("Object Detection | Network {:.0f} FPS | FpsFilter {} | FPS_Time {}".format(
        net.GetNetworkFPS(), str(round(fpsFilter, 1)), fps))

However, when I moved the same detection and rendering logic into a multi-threaded setup, the display stays blank. I have attached the multi-threaded code (new_multithread.py) and would appreciate help resolving this issue.

Hi,

Do you see the expected output from print(img)?

...
def object_detection_process(net, camera, display, detections_queue, detection_flag, command_event, max_frames=5):
    ...
    print(img)
    display.Render(img)
    display.SetStatus("Object Detection | Network {:.0f} FPS | FpsFilter {} | FPS_Time {} ".format(net.GetNetworkFPS(),
                                                                                                  str(round(fpsFilter,1)), fps))

Thanks.

The output is as below:

nvbuf_utils: dmabuf_fd 1280 mapped entry NOT found
nvbuf_utils: NvReleaseFd Failed... Exiting...

-- ptr:          0x100fa9000
-- size:         2764800
-- width:        1280
-- height:       720
-- channels:     3
-- format:       rgb8
-- timestamp:    23.141000
-- mapped:       true
-- freeOnDelete: false

nvbuf_utils: dmabuf_fd 1284 mapped entry NOT found
nvbuf_utils: NvReleaseFd Failed... Exiting...

-- ptr:          0x10124c000
-- size:         2764800
-- width:        1280
-- height:       720
-- channels:     3
-- format:       rgb8
-- timestamp:    23.207333
-- mapped:       true
-- freeOnDelete: false

Hi @pranjali.bidwai, I might recommend trying to instantiate your detectNet/videoSource/videoOutput objects inside your detection thread, before it starts its capture/processing loop.

Given those nvbuf_utils messages, you could also try building/installing jetson-inference with the cmake -DENABLE_NVMM=off ../ build option to see if that helps.

Hello @dusty_nv. Thank you for your solution. The problem is solved and it is working perfectly.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.