I built a Jetson-based demo that classifies chest X-ray images into NORMAL vs PNEUMONIA.
- Device: Jetson Nano (JetPack 4.6 / L4T r32.7.1)
- Data: Kaggle chest radiograph images (pneumonia & normal), from "Chest Radiograph Images (Pneumonia & Normal)" on Kaggle
- Model: ResNet-18 (ImageNet pretrained → fine-tuned to 2 classes)
- Acceleration: PyTorch → torch2trt → TensorRT (FP16)
- UI: Flask web app for uploading an X-ray and getting back the predicted class, class probabilities, and inference time.
Repo: jetson-xray-pneumonia
Run:
sudo docker run --runtime nvidia -it --rm --network host \
--shm-size=1g \
-v /home/jongbum/nvdli-data:/nvdli-data \
-v /home/jongbum/jetson-xray-pneumonia:/workspace \
jetson-xray:pneu
cd /workspace
# 1) split dataset (train / val / test)
python3 scripts/split_dataset.py --config configs/default.yaml
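
The split script partitions the two class folders into train / val / test directories. Below is a rough sketch of the idea, assuming raw Kaggle folders data/raw/NORMAL and data/raw/PNEUMONIA, an output directory data/splits, and a 70/15/15 split; the actual paths and ratios are read from configs/default.yaml, so treat these values as placeholders.

# Hypothetical sketch of the train/val/test split; paths and ratios are
# assumptions, the real script reads them from configs/default.yaml.
import random
import shutil
from pathlib import Path

RAW = Path("data/raw")            # assumed layout: data/raw/NORMAL, data/raw/PNEUMONIA
OUT = Path("data/splits")
RATIOS = {"train": 0.7, "val": 0.15, "test": 0.15}

random.seed(42)
for cls_dir in RAW.iterdir():
    if not cls_dir.is_dir():
        continue
    files = sorted(p for p in cls_dir.iterdir() if p.is_file())
    random.shuffle(files)
    n_train = int(len(files) * RATIOS["train"])
    n_val = int(len(files) * RATIOS["val"])
    splits = {
        "train": files[:n_train],
        "val": files[n_train:n_train + n_val],
        "test": files[n_train + n_val:],
    }
    for split, split_files in splits.items():
        dest = OUT / split / cls_dir.name
        dest.mkdir(parents=True, exist_ok=True)
        for f in split_files:
            shutil.copy2(f, dest / f.name)
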
# 2) train model on Jetson
python3 scripts/train.py --config configs/default.yaml
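
Training boils down to swapping ResNet-18's ImageNet head for a 2-class layer and fine-tuning. A minimal sketch follows; the data path, batch size, learning rate, epoch count, and checkpoint name are assumptions here (the real values come from configs/default.yaml).

# Sketch of the fine-tuning setup; hyperparameters and paths are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/splits/train", transform=tf)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True, num_workers=2)

model = models.resnet18(pretrained=True)        # ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 2)   # NORMAL vs PNEUMONIA
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):                          # epoch count is an assumption
    model.train()
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")

torch.save(model.state_dict(), "resnet18_pneumonia.pth")  # output name is an assumption
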
# 3) evaluate
python3 scripts/evaluate.py configs/default.yaml
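
Conceptually, evaluation just runs the held-out test split through the trained checkpoint and reports accuracy. A rough sketch, with the split directory and checkpoint name as assumptions:

# Sketch of test-set evaluation; checkpoint and data paths are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
test_ds = datasets.ImageFolder("data/splits/test", transform=tf)
test_dl = torch.utils.data.DataLoader(test_ds, batch_size=16)

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("resnet18_pneumonia.pth", map_location=device))
model = model.to(device).eval()

correct = total = 0
with torch.no_grad():
    for images, labels in test_dl:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f"test accuracy: {correct / total:.3f}")
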
# 4) convert trained model to TensorRT (FP16)
python3 scripts/convert_trt.py configs/default.yaml
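
The conversion step uses torch2trt, which traces the PyTorch model with a sample input and builds a TensorRT engine. A minimal sketch of an FP16 conversion; the checkpoint and output file names here are assumptions, not the repo's actual paths.

# Sketch of the PyTorch -> TensorRT (FP16) conversion with torch2trt;
# file names are assumptions.
import torch
import torch.nn as nn
from torchvision import models
from torch2trt import torch2trt

device = torch.device("cuda")

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("resnet18_pneumonia.pth", map_location=device))
model = model.to(device).eval()

# torch2trt traces the model with a sample input and emits a TensorRT engine.
x = torch.ones((1, 3, 224, 224), device=device)
model_trt = torch2trt(model, [x], fp16_mode=True)

# The TRTModule state dict can be reloaded later without rebuilding the engine.
torch.save(model_trt.state_dict(), "resnet18_pneumonia_trt.pth")
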
# 5) start web app
PYTHONPATH=/workspace/scripts python3 app/app.py
# open in browser: http://<jetson-ip>:5000
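
For context, the web app is essentially one Flask route that preprocesses the uploaded image, runs the TensorRT engine, and returns the predicted label, softmax probabilities, and inference time. A minimal sketch, where the route name, form field, and engine file name are assumptions rather than the repo's actual API:

# Sketch of a Flask inference endpoint over the TensorRT engine;
# route, form field, and file names are assumptions.
import time
import torch
import torch.nn.functional as F
from flask import Flask, request, jsonify
from PIL import Image
from torchvision import transforms
from torch2trt import TRTModule

app = Flask(__name__)
CLASSES = ["NORMAL", "PNEUMONIA"]

model_trt = TRTModule()
model_trt.load_state_dict(torch.load("resnet18_pneumonia_trt.pth"))

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@app.route("/predict", methods=["POST"])
def predict():
    img = Image.open(request.files["image"].stream).convert("RGB")
    x = tf(img).unsqueeze(0).cuda()
    start = time.time()
    with torch.no_grad():
        logits = model_trt(x)
    elapsed_ms = (time.time() - start) * 1000.0
    probs = F.softmax(logits, dim=1)[0].cpu()
    return jsonify({
        "prediction": CLASSES[int(probs.argmax())],
        "probabilities": {c: float(p) for c, p in zip(CLASSES, probs)},
        "inference_time_ms": elapsed_ms,
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
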