Possible to use deepstream-app model for predictions on own dataset?

Description

I would like to make predictions on my own dataset of images (resolution: 1656x1034) using the model used as the Primary Detector in DeepStream (resnet10.caffemodel_b1_gpu0_int8.engine, resnet10.caffemodel, resnet10.prototxt). Is it possible to do so? Thank you!

• Hardware (V100)

Hi,

This looks like a Deepstream related issue. We will move this post to the Deepstream forum.

Thanks!


@gaia17 Yes, you can use the model directly on your dataset, but we cannot guarantee its accuracy there. It was trained only for the demo; you can also add more data and retrain it. Thanks

I am loading the Caffe model with OpenCV (code below) to run inference on an image. I am resizing the image to 640x368 as specified in the prototxt file. After running inference, the output is an array of dimensions 1x4x23x40. Is this the coverage map? Where can I find the bboxes? Thank you!

The code:
import cv2
import numpy as np

labels = ["Car", "Bicycle", "Person", "Roadsign"]

MODEL_FILE = 'resnet10.prototxt'
PRETRAINED = 'resnet10.caffemodel'

# load DNN model
print("[Status] loading model")
nn = cv2.dnn.readNetFromCaffe(MODEL_FILE, PRETRAINED)

# read image from disk
image_name = '208_212_1631260113.jpg'
image = cv2.imread(image_name)
image_height, image_width, _ = image.shape
print("[Status] loading image")
print(f"Image resolution: {image.shape}")

# create blob from image
print("[Status] creating blob from image")
blob = cv2.dnn.blobFromImage(
    image,                 # image
    1,                     # scalefactor
    size=(640, 368),       # size required by model
    mean=(104, 117, 123),  # mean subtraction
    swapRB=True)

# run inference
nn.setInput(blob)
output = nn.forward()
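For what it's worth, the 1x4x23x40 array looks like the per-class coverage (confidence) map of a DetectNet_v2-style head: 4 classes over a 23x40 grid (the 368x640 input divided by a stride of 16). The boxes come from a second output layer with 16 channels (4 classes x 4 offsets); fetching both with something like `cov, bbox = nn.forward(["conv2d_cov/Sigmoid", "conv2d_bbox"])` should work, though the exact layer names must be checked against your prototxt. Below is a minimal pure-NumPy decoding sketch; the stride of 16 and the bbox normalization constant of 35.0 are assumptions based on DeepStream's sample resnet10 parser, and the threshold is arbitrary:

```python
import numpy as np

STRIDE = 16        # grid-cell size in input pixels (assumption: 640/40 = 368/23 = 16)
BBOX_NORM = 35.0   # offset scale used by DeepStream's sample parser (assumption)

def decode_detections(cov, bbox, threshold=0.2):
    """Decode DetectNet_v2-style outputs.

    cov:  (num_classes, H, W) coverage/confidence map
    bbox: (num_classes * 4, H, W) per-class box offsets from each grid-cell centre
    Returns a list of (class_id, confidence, x1, y1, x2, y2) in network-input pixels.
    """
    num_classes, grid_h, grid_w = cov.shape
    # centre of each grid cell, in input-image pixel coordinates
    cx = np.arange(grid_w) * STRIDE + 0.5 * STRIDE
    cy = np.arange(grid_h) * STRIDE + 0.5 * STRIDE
    detections = []
    for c in range(num_classes):
        ys, xs = np.where(cov[c] > threshold)
        for y, x in zip(ys, xs):
            # four offsets (left, top, right, bottom) from the cell centre
            o = bbox[c * 4:(c + 1) * 4, y, x] * BBOX_NORM
            detections.append((c, float(cov[c, y, x]),
                               cx[x] - o[0], cy[y] - o[1],
                               cx[x] + o[2], cy[y] + o[3]))
    return detections

# Synthetic example: one confident "Car" (class 0) cell at grid position (10, 20).
cov = np.zeros((4, 23, 40), dtype=np.float32)
bbox = np.zeros((16, 23, 40), dtype=np.float32)
cov[0, 10, 20] = 0.9
bbox[0:4, 10, 20] = 1.0   # 35 px in each direction after scaling
dets = decode_detections(cov, bbox)
print(dets)
```

Remember the coordinates are in the 640x368 network input space, so they still need rescaling by `image_width / 640` and `image_height / 368` to map back onto the original image; real deployments also apply clustering/NMS, which this sketch omits.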

Hi @gaia17, could you please open a new topic for this new question? We will set the first question to solved status and analyze the new question in the new topic. Thanks