Problem With TLT 3.0 Container Stopping

I followed all the steps to install TLT v3 and the launcher into a virtual environment with Python 3.6.13 and Docker 19.03.6. I configured the launcher with mount and docker options and created a spec file for a pretrained ResNet18 classification model downloaded from NGC. When I try to run a TLT classification training (command shown below), the launcher appears to create a Docker container instance, which then immediately stops and closes (output shown below). I have tried different spec files and pretrained models, but the container always stops right after being launched, regardless of the arguments given.

I also ran a ‘tlt classification run ls’ to make sure my paths are mounted correctly and everything lines up between my spec file and my tlt command. I also tried passing a path to the --log_file argument, but I get the same container-stopping output on stdout and no file is written. Hope you can help, let me know if you need any other info from me!
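For context, the launcher reads its mount configuration from ~/.tlt_mounts.json; the snippet below writes a minimal sketch of that file (the host path and the docker options are placeholders rather than my exact values):

# Write a minimal launcher mounts file; "source" should point at the real host folder.
cat > ~/.tlt_mounts.json <<'EOF'
{
    "Mounts": [
        {
            "source": "/home/<user>/tlt-experiments",
            "destination": "/workspace/experiments"
        }
    ],
    "DockerOptions": {
        "shm_size": "16G",
        "ulimits": {
            "memlock": -1,
            "stack": 67108864
        }
    }
}
EOF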

Command Run:
tlt classification train -e /workspace/experiments/configs/resnet18.txt -k tlt_encode -r /workspace/experiments/output/3152021 --gpus 1

Output:
2021-03-16 23:23:24,444 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

To narrow down, can you log in to the docker container directly:

docker run --runtime=nvidia -it -v <yourfolder>:/workspace/tlt-experiments nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3 /bin/bash

Then run

classification train -e /workspace/experiments/configs/resnet18.txt -k tlt_encode -r /workspace/experiments/output/3152021 --gpus 1

After logging into the docker container and running the classification train command, the output was:
Illegal instruction (core dumped)
Then I tried running a ‘classification -h’ command and got the same result:
Illegal instruction (core dumped)

In case you need it, here’s my driver information from the nvidia-smi command:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  Off  | 00000000:0A:00.0  On |                  N/A |
| 30%   30C    P8    29W / 250W |    275MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

See the search results for 'Illegal instruction' in the #intelligent-video-analytics:transfer-learning-toolkit category on the NVIDIA Developer Forums.

It is related to your CPU.
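If you want to confirm that on your side, one quick check (assuming the usual cause, i.e. the prebuilt TensorFlow inside the TLT container being compiled for AVX-capable CPUs) is to list the AVX-family flags the CPU advertises:

# Print any AVX-family flags from the CPU's feature list; no output at all
# means the CPU has no AVX support, which commonly produces
# "Illegal instruction (core dumped)" from prebuilt binaries.
grep -o 'avx[a-z0-9_]*' /proc/cpuinfo | sort -u

On an AVX-capable machine this prints at least avx, and often avx2 or avx512 variants as well.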