I am testing a TensorRT engine (trt.engine) using the tlt-infer command.
When I produce the engine using -m 8:

tlt-converter -k NHRvZzAwbHFncTk0MXJ0YmwwbXB1bGxhbnU6MjYzNzc2MDctYzQ5MC00NjkxLThkODAtODM0NDc3ZTRhNTNh \
  -d 3,384,1248 \
  -o dense_class_td/Softmax,dense_regress_td/BiasAdd,proposal \
  -e /workspace/tlt-experiments/FasterRCNN/resnet34/prune/0.5pruning/trt.fp16.engine \
  -m 8 \
  -t fp16 \
  -i nchw \
  /workspace/tlt-experiments/FasterRCNN/resnet34/prune/0.5pruning/resnet34_fp16.etlt
tlt-infer works. But no detection.
If I set -m 1 and run tlt-infer, I get a "cuMemcpyHtoDAsync failed: invalid argument" error.
How can I solve this?
Sorry, I do not understand why you are “testing trt.engine using tlt-infer command”.
tlt-infer is a different tool from tlt-converter.
My question: after you got the pruned tlt model, what was its tlt-infer result?
What do you mean by “tlt-infer works. But no detection.”?
tlt-infer runs, but there are no detections on the images. After pruning and retraining, inference worked; it stopped working only after converting to trt.engine.
In the notebook sample, the TRT engine can be run using tlt-infer.
When -m is set to 8, tlt-infer runs the TRT engine, but there are no detections on the images.
When -m is set to 1, tlt-infer gives the above-mentioned error.
Hi batu_man,
Firstly, may I know your tlt-infer result from section 8?
8. Visualize inferences
In this section, we run the tlt-infer tool to generate inferences on the trained models.
Since there are no detections in the images, the label files contain no bounding box data and no detections are shown on the images.
So, firstly, please make sure tlt-infer in section 8 works well.
The tlt-infer tool produces two outputs:
- Overlain images in
$USER_EXPERIMENT_DIR/data/faster_rcnn/inference_results_imgs_retrain
- Frame by frame bbox labels in kitti format located in
$USER_EXPERIMENT_DIR/data/faster_rcnn/inference_dump_labels_retrain
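Those dumped KITTI label files give a quick way to confirm whether the engine detected anything at all: in KITTI format, each non-empty line of a label file is one object, so an engine with zero detections produces only empty files. A minimal sketch (the helper name and directory path are my own illustration, not part of the TLT tooling):

```python
import glob
import os

def count_kitti_boxes(label_dir):
    """Count object entries across all KITTI-format label files in a directory.

    Each non-empty line in a KITTI label file describes one object
    (class name, truncation, occlusion, alpha, bbox coordinates, ...),
    so counting non-empty lines counts the detected bounding boxes.
    """
    total = 0
    for path in sorted(glob.glob(os.path.join(label_dir, "*.txt"))):
        with open(path) as f:
            total += sum(1 for line in f if line.strip())
    return total

# Hypothetical usage against the retrain inference dump directory:
# n = count_kitti_boxes("inference_dump_labels_retrain")
# If n == 0, tlt-infer ran but the model/engine produced no detections.
```

If the count is zero for the retrained tlt model as well, the problem is upstream of the trt.engine conversion.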
Also, how about your “tlt-evaluate” result from section 7?