Discrepancy when profiling GPU memory utilization

System Config:
NVIDIA TITAN Xp; CUDA compilation tools, release 9.0, V9.0.176

I ran two experiments:
[1] I ran a CNN and an RNN model (on MNIST) using the PyTorch deep-learning framework.
[2] I ran the same CNN/RNN models (on MNIST) implemented directly in CUDA.
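For context, experiment [1] is essentially a standard MNIST training loop along the lines of the sketch below (heavily trimmed; the layer sizes, batch size, and learning rate here are placeholders rather than my exact settings, and the RNN variant is structured the same way):

```python
# Trimmed-down sketch of experiment [1]; architecture and hyperparameters
# here are illustrative placeholders, not the exact values I used.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = torch.device("cuda")

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 14x14 -> 7x7
        return self.fc(x.flatten(1))

train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=64, shuffle=True)

model = SmallCNN().to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        opt.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        opt.step()
```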

In both cases I used nvidia-smi to query utilization while the experiment was running: nvidia-smi --query-gpu=utilization.gpu,utilization.memory.
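I sample the counters in a loop while the workload runs, roughly like the sketch below (simplified; the full --format=csv form of the query, the one-second interval, and the 60-sample window are arbitrary illustrative choices, not a fixed part of my setup):

```python
# Simplified sketch of how I poll nvidia-smi while an experiment is running.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,utilization.memory",
         "--format=csv,noheader,nounits"]

for _ in range(60):                      # sample for roughly 60 seconds
    out = subprocess.check_output(QUERY).decode().strip()
    gpu_util, mem_util = (int(v) for v in out.split(","))
    print(f"gpu: {gpu_util}%  memory: {mem_util}%")
    time.sleep(1.0)
```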

Observations:
[1] shows 3% to 4% memory utilization.
[2] shows 0% memory utilization.

Can someone help me understand this discrepancy? Do I need to enable any flags when running [2] to get an accurate memory utilization metric?