I’m currently working on encoding result images from DeepStream and sending them to other applications. I’m trying to use the GPU (in my case, a T4) to accelerate the encoding and reduce CPU load, and after some searching I found nvJPEG.
But after some experimenting with nvJPEG, I found that it seems to use CUDA to do the encoding. Is there a way to use the NVENC chip inside the T4 to do the encoding instead?
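For reference, here is a minimal sketch of the nvJPEG encode path I’m using (assuming interleaved RGB already resident in device memory; error handling is abbreviated, and of course this needs a CUDA-capable GPU and linking against libnvjpeg to build and run):

```cpp
// Minimal nvJPEG encode sketch: interleaved RGB in device memory -> JPEG bytes on the host.
// Assumes a CUDA GPU and libnvjpeg; error checking is omitted for brevity.
#include <cuda_runtime.h>
#include <nvjpeg.h>
#include <vector>

std::vector<unsigned char> encode_rgb(const unsigned char* d_rgb,  // device pointer, 3*width bytes per row
                                      int width, int height, cudaStream_t stream) {
    nvjpegHandle_t handle;
    nvjpegEncoderState_t state;
    nvjpegEncoderParams_t params;
    nvjpegCreateSimple(&handle);
    nvjpegEncoderStateCreate(handle, &state, stream);
    nvjpegEncoderParamsCreate(handle, &params, stream);
    nvjpegEncoderParamsSetQuality(params, 90, stream);
    nvjpegEncoderParamsSetSamplingFactors(params, NVJPEG_CSS_420, stream);

    nvjpegImage_t src{};  // interleaved RGB uses channel[0] only
    src.channel[0] = const_cast<unsigned char*>(d_rgb);
    src.pitch[0]   = static_cast<size_t>(width) * 3;

    nvjpegEncodeImage(handle, state, params, &src, NVJPEG_INPUT_RGBI,
                      width, height, stream);

    size_t length = 0;  // first call with a null buffer queries the bitstream size
    nvjpegEncodeRetrieveBitstream(handle, state, nullptr, &length, stream);
    std::vector<unsigned char> jpeg(length);
    nvjpegEncodeRetrieveBitstream(handle, state, jpeg.data(), &length, stream);
    cudaStreamSynchronize(stream);

    nvjpegEncoderParamsDestroy(params);
    nvjpegEncoderStateDestroy(state);
    nvjpegDestroy(handle);
    return jpeg;
}
```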
NVENC/NVDEC hardware is for encoding/decoding video, not images. I’m not aware of any way to use the motion-video hardware to encode still images, and NVIDIA certainly doesn’t provide any libraries or APIs to do so. If there were a way, and it were useful, there would have been no reason to introduce dedicated NVJPEG hardware as a separate engine (in Ampere).
Thanks for your quick reply! It’s a pity that we can’t do JPEG encoding with the NVENC hardware. Are there any plans to implement this?
I have one more question about nvJPEG. As I mentioned earlier, I’m trying to use nvJPEG for JPEG encoding after I receive results from DeepStream inference, but the encoding time increases significantly.
When I use nvJPEG standalone, encoding takes around 10 ms per image, but when I cascade it after DeepStream inference, it increases to 200 ms (GPU utilization is around 90%).
Thanks for your reply! We did some further testing and my colleague posted the results above. As you can see, there’s a huge impact on nvJPEG while DeepStream is running.
We plan to do further testing with a profiler. In the meantime, is there any professional encoding device/chip we could use?
You might be able to achieve what you want using multiple GPUs. I’m not aware of any professional standalone encoding device/chip, but I imagine they exist.