new_multithread.py (5.5 KB)
Hello,
I am currently working on my master's thesis, which involves a Jetson Nano and a Universal Robots arm. My goal is to communicate with the robot over Modbus TCP, for which I have created two threads.
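For context on the Modbus TCP side: a request is just an MBAP header (transaction ID, protocol ID 0, remaining byte count, unit ID) followed by a function PDU. Below is a minimal sketch of building a "Read Holding Registers" (function 0x03) frame with only the standard library; the function name, the register address 258, and the unit ID 0 are illustrative assumptions, not values taken from my setup:

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus TCP 'Read Holding Registers' (0x03) request frame.

    Frame = MBAP header (7 bytes) + PDU:
      MBAP: transaction id (2B), protocol id = 0 (2B), length (2B), unit id (1B)
      PDU:  function code (1B), starting address (2B), register count (2B)
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# example: read 1 register starting at address 258 (hypothetical address)
frame = modbus_read_holding_registers(1, 0, 258, 1)
```

In practice a library such as pymodbus would normally wrap this framing, but seeing the raw bytes helps when debugging traffic to the robot.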
Initially, I ran the detection script without any threading and could see the display with bounding boxes using the following code:
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput
from time import time

width = 1280
height = 720

net = detectNet(model="/home/bsh/jetson-inference/python/training/detection/ssd/models/SSD-100epochs/ssd-mobilenet.onnx",
                labels="/home/bsh/jetson-inference/python/training/detection/ssd/models/SSD-100epochs/labels.txt",
                input_blob="input_0", output_cvg="scores", output_bbox="boxes", threshold=0.5)

# the output must be created before the input, otherwise the video is not displayed
display = videoOutput("display://0", argv=["--width="+str(width), "--height="+str(height)])
camera = videoSource("/home/bsh/jetson-inference/python/training/detection/ssd/data/New_Annotation/output2.mp4",
                     argv=["--input-width="+str(width), "--input-height="+str(height)])

# initialize the FPS timer/filter once, before the loop (not inside it)
timeMark = time()
fpsFilter = 0

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)

    for detection in detections:
        classID = detection.ClassID
        item = net.GetClassDesc(classID)
        print(item, classID)

    # exponentially smoothed FPS measurement
    dt = time() - timeMark
    timeMark = time()
    fps = 1 / dt
    fpsFilter = .95 * fpsFilter + .05 * fps

    display.Render(img)
    display.SetStatus("Object Detection | Network {:.0f} FPS | FpsFilter {} | FPS_Time {}".format(
        net.GetNetworkFPS(), str(round(fpsFilter, 1)), fps))
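The fpsFilter update above is an exponential moving average: each new sample contributes 5% and the running value keeps 95%, which smooths out per-frame jitter. A standalone sketch of the same update rule (alpha = 0.05 matches the coefficients in the loop):

```python
def ema(values, alpha=0.05):
    """Exponentially smooth a sequence: out = alpha*new + (1-alpha)*previous."""
    smoothed = 0.0
    out = []
    for v in values:
        smoothed = (1 - alpha) * smoothed + alpha * v
        out.append(smoothed)
    return out

# feeding a constant value: the filter gradually converges toward it
trace = ema([10.0] * 500)
```

Starting the filter at 0 means the displayed value ramps up toward the true FPS over the first few dozen frames.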
However, when I moved the same detection and rendering logic into a multi-threaded setup, it stopped working. I have attached the multi-threaded code (new_multithread.py) and would appreciate help resolving this issue.
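One pattern that often helps in this situation is to keep capture/detect/render entirely on one thread and let the Modbus thread only consume results through a queue, since GStreamer/OpenGL contexts generally do not like being touched from multiple threads. A minimal sketch of that producer/consumer split using only the standard library; the placeholder detection_loop stands in for the camera.Capture()/net.Detect() loop, and modbus_loop stands in for the robot I/O:

```python
import threading
import queue

results = queue.Queue(maxsize=1)   # holds only the latest result for the Modbus thread
stop = threading.Event()

def detection_loop(frames):
    # stand-in for the capture/detect/render loop; all video work stays on this thread
    for frame in frames:
        detections = [frame]       # stand-in for net.Detect(img)
        try:
            results.put_nowait(detections)
        except queue.Full:
            pass                   # drop stale results so the video loop never blocks
    stop.set()

def modbus_loop(collected):
    # stand-in for sending detections to the robot over Modbus TCP
    while not (stop.is_set() and results.empty()):
        try:
            collected.append(results.get(timeout=0.1))
        except queue.Empty:
            continue

collected = []
t_detect = threading.Thread(target=detection_loop, args=(range(5),))
t_modbus = threading.Thread(target=modbus_loop, args=(collected,))
t_modbus.start()
t_detect.start()
t_detect.join()
t_modbus.join()
```

The maxsize=1 queue is deliberate: the detection thread never waits on the robot, and the Modbus thread always sees the freshest detection rather than a growing backlog.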