• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) 5.0.2
• TensorRT Version 8.4.1
• Issue Type (questions, new requirements, bugs) question
I’m implementing a detector and classifier in DeepStream and have run into a problem. My classifier, built with TAO on an EfficientNet-B0 backbone, performs well when run standalone outside of DeepStream, but its accuracy drops noticeably once I integrate it into the pipeline. I suspect the issue is related to the net-scale-factor and offsets preprocessing parameters, so I plan to tune them in my “config_infer_secondary.txt” configuration file. Could you provide guidance on how to set these parameters correctly in that file?
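For reference, these are the keys I mean in the `[property]` group of the nvinfer config file (the numeric values below are placeholders from my experiments, not known-good settings):

```ini
[property]
# nvinfer preprocesses each pixel as: y = net-scale-factor * (x - offset)
net-scale-factor=0.017352
# per-channel means in the 0-255 range; order must match model-color-format
offsets=123.675;116.28;103.53
# 0 = RGB, 1 = BGR
model-color-format=0
```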
I used the following technique to compute them, but saw no improvement:
```python
import torch

# Accumulate per-channel mean and std over the training set
# (data_loader is assumed to yield batches of size 1)
channel_mean = torch.zeros(3)
channel_std_dev = torch.zeros(3)

for data in data_loader:
    image, _ = data
    for c in range(3):
        channel_mean[c] += image[0, c, :, :].mean()
        channel_std_dev[c] += image[0, c, :, :].std()

channel_mean /= len(train_dataset)
channel_std_dev /= len(train_dataset)

# Calculate unscaled standard deviation and its mean
unscaled_std = channel_std_dev * 255
unscaled_std_mean = unscaled_std.mean()

# Calculate net scale factor and offsets
net_scale_factor = 1 / unscaled_std_mean
offsets = (channel_mean * 255).tolist()
```
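As a sanity check on this mapping, here is a small sketch (the per-channel statistics are assumed example values in the [0, 1] range, as a torchvision `ToTensor` pipeline would produce, not my actual dataset's statistics). It mirrors the computation above and verifies that, under nvinfer's preprocessing formula `y = net-scale-factor * (x - offset)`, a raw 0-255 pixel equal to the channel mean maps to approximately zero:

```python
import numpy as np

# Assumed per-channel statistics in [0, 1] (ImageNet-style placeholders)
channel_mean = np.array([0.485, 0.456, 0.406])
channel_std = np.array([0.229, 0.224, 0.225])

# Mirror the computation from the snippet above
unscaled_std_mean = (channel_std * 255).mean()
net_scale_factor = 1.0 / unscaled_std_mean
offsets = channel_mean * 255

# A raw 0-255 pixel sitting exactly at the channel mean...
pixel = channel_mean * 255

# ...should normalize to ~0 under nvinfer's per-channel formula
y = net_scale_factor * (pixel - offsets)
print(np.allclose(y, 0.0))  # True
```

Note that because `net-scale-factor` is a single scalar, the per-channel standard deviations are collapsed into one mean value; the result only matches the training-time normalization exactly when the three channel stds are equal.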