I am using the OpenCV "detailEnhance" filter to enhance details in a real-time video stream. I need to process 30 frames/s, but my current implementation, based on the VisionWorks-1.6 sample "nvx_demo_video_stabilizer", only manages about 6 frames/s. The Jetson TX2 is in high-speed mode. What would be the best approach to speed this up?
It sounds to me as if something’s not right.
If you run a profiler, where is the time spent?
My guess would be that your processing pipeline somehow ends up synchronizing, stalling, or transferring data between GPU and CPU somewhere it shouldn't.
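Before reaching for a full profiler, even a crude per-stage timer can tell you which call eats the frame budget. A minimal sketch (the `time_stage` helper and the dummy workload are mine, standing in for whatever stage you want to measure, e.g. the detailEnhance call):

```python
# Minimal per-stage timing harness: average the cost of one pipeline
# stage over several calls and convert it to an fps estimate.
import time

def time_stage(stage_fn, frame, iters=10):
    """Return the average milliseconds per call of stage_fn(frame)."""
    # Warm-up call so one-time costs (allocations, lazy init, GPU
    # upload) don't skew the average.
    stage_fn(frame)
    t0 = time.perf_counter()
    for _ in range(iters):
        stage_fn(frame)
    return (time.perf_counter() - t0) * 1000.0 / iters

if __name__ == "__main__":
    frame = list(range(100_000))               # stand-in for an image
    ms = time_stage(lambda f: [x * 2 for x in f], frame)
    print(f"stage: {ms:.2f} ms/frame -> {1000.0 / ms:.1f} fps")
```

Timing each stage like this separates "the filter itself is slow" from "the surrounding pipeline is stalling", which points you at very different fixes.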
Thank you, snarky! I am a new user of the Jetson TX and have not used a profiler yet :(, but I should definitely do so as soon as possible.
I did notice that when I use the Canny edge detector everything looks good and processing is fast (30 frames/s), but when I replace Canny with any of the so-called Non-Photorealistic Rendering filters (edgePreservingFilter, detailEnhance, pencilSketch, stylization), everything is 5-6 times slower.