Engine build time from custom dataset

In reference to this video:

At timestamp 19:45, after training the model and exporting it to ONNX, the detectnet command is issued and the live camera feed for the new dataset appears almost immediately. When I run detectnet on a newly trained ONNX file on a stock Jetson Nano, an engine file is built first, and the video feed only appears after that file has been created. The build does not necessarily take a long time, but in the video it seems not to occur at all / to happen instantaneously. Is there any way to speed up the engine file creation?

Hi @user10447, the first time you run a new model, it’s expected for TensorRT to take a few minutes to optimize it. There isn’t really a way to significantly speed this up; however, it only happens the first time each model is run, because the optimized engine is cached to disk afterwards (so subsequent runs should start much faster).
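To make the caching behavior concrete, here is a minimal sketch of the check that this kind of workflow relies on. The helper names and the `.engine` suffix are illustrative assumptions (the actual cached filename produced alongside the ONNX model varies with the TensorRT version and precision), not the real jetson-inference implementation:

```python
import os

def engine_cache_path(onnx_path):
    # Hypothetical naming: the serialized TensorRT engine is stored next to
    # the ONNX model. The real suffix differs by TensorRT version/precision;
    # ".engine" is used here purely for illustration.
    return onnx_path + ".engine"

def needs_engine_build(onnx_path):
    # The slow optimization step is only needed when no cached engine
    # exists yet; later runs just deserialize the cached file.
    return not os.path.isfile(engine_cache_path(onnx_path))
```

So on the first run `needs_engine_build()` would be true and you pay the multi-minute optimization cost once; afterwards the cached engine is found on disk and startup is fast.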

In the video, I probably just fast-forwarded this part so the viewers weren’t staring at a blank screen for several minutes :)

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.