How to plot the preprocessed tensor?

Hello, I use the nvdspreprocess plugin to run inference on video. When I use DEBUG_TENSOR to save the preprocessed tensor to a .bin file, how can I plot the file as PNG or JPG? My network-input-shape is 1:3:704:704 and I infer with FP16.

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and the function description.)
• The pipeline being used

The tensor data is not image data (it is float type, not in the 0~255 range…). If you want to convert the data to image data, you need to define your own mapping algorithm and then use a PNG or JPG encoder to encode the converted data to image files.
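A minimal sketch of such a mapping, assuming the dump is planar NCHW fp16 with the network-input-shape 1:3:704:704 stated above (the function names and the NaN-safe min-max scaling are illustrative choices, not part of DeepStream):

```python
import numpy as np

def load_dump(path, shape=(3, 704, 704)):
    """Read an fp16 .bin dump into a float32 CHW array."""
    return np.fromfile(path, dtype=np.float16).astype(np.float32).reshape(shape)

def tensor_to_u8(t):
    """Map float data to [0, 255] uint8; NaNs become 0 instead of poisoning the scaling."""
    lo, hi = np.nanmin(t), np.nanmax(t)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return np.nan_to_num((t - lo) * scale).astype(np.uint8)
```

With Pillow installed, the result can then be encoded, e.g. `Image.fromarray(tensor_to_u8(load_dump("tensorout_batch_597.bin")).transpose(1, 2, 0)).save("preprocessed.png")` — note the transpose from CHW to HWC, which PIL expects. The image will only be a rough visualization, since any mean subtraction or scale factor applied during preprocessing is not undone.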

This is not DeepStream related. You can search for this yourself.

If it cannot be converted to image files, how can I verify that the tensor data is correct after preprocessing?

After preprocessing, the data is no longer image data, even without DeepStream. For the DeepStream case, you can dump the input tensor data directly. See: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

My pipeline is nvdspreprocess -> nvinfer. I understand what you mean. I followed https://forums.developer.nvidia.com/uploads/short-url/byXzsrCQMrPDx5qTGtfLiZhCUBO.txt to dump the input before TensorRT, and I found the input is wrong, so I want to check whether the input inside the nvdspreprocess plugin is correct. But the debug lib and DEBUG_TENSOR save a .bin file, so I cannot confirm the input is right. As you said, the range of the .bin file is not [0,255]. How should I map [min,max] to [0,255]? I tried min-max normalization but it failed! Can you give me some advice?


When I parse the .bin file, the data at some locations is NaN.
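This would also explain why plain min-max normalization failed: a single NaN propagates through `min`/`max` and turns the entire scaled output into NaN. A quick sketch for counting the NaNs and finding where they sit in the 3x704x704 tensor (the function name is illustrative; the fp16 layout is from this thread's setup):

```python
import numpy as np

def nan_report(path, shape=(3, 704, 704)):
    """Return the total element count and the (channel, row, col) of each NaN."""
    raw = np.fromfile(path, dtype=np.float16)
    flat = np.flatnonzero(np.isnan(raw))        # flat indices of NaN elements
    coords = np.unravel_index(flat, shape)      # convert to C, H, W coordinates
    return raw.size, list(zip(*coords))
```

Whether the NaNs cluster in one channel or one image region is a useful clue — e.g. uninitialized padding, a bad normalization parameter, or reading the buffer before the preprocessing kernel has finished could each leave a different pattern.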

When you map the float data to [0,255], you have already changed the data itself; how could you know whether it is correct from the changed data? I don't understand what you want to do.

Are you sure that your model’s input layer data type is FP16?

What is the size of “tensorout_batch_597.bin”? 2973696 bytes?

I am sure the data type is FP16, and the .bin size is 2973696 bytes.
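That size is consistent with the stated setup; a quick arithmetic check:

```python
# 1 batch x 3 channels x 704 x 704 elements, 2 bytes per element for fp16
n, c, h, w = 1, 3, 704, 704
expected_bytes = n * c * h * w * 2
print(expected_bytes)  # 2973696, matching the dumped file size
```

Had the dump been fp32, it would be twice that (5947392 bytes), so the file size alone supports the FP16 claim.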

So you have already got the tensor data; you need to debug the nvinfer and nvdspreprocess source code to investigate why some of the data is NaN.

OK, I got it! Thank you for your answer~

When I enable “debug_lib” in deepstream-6.0/sources/gst-plugins/gst-nvdspreprocess/nvdspreprocess_lib/nvdspreprocess_impl.cpp, the saved .bin file is 0 KB and the data shape is 0. What caused this?


There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.