Encoding Video to File and Decoding

I’ll preface this by saying that I’m not very experienced in video encoding/decoding. I’m trying to use HEVC to compress camera data, frame by frame, on the GPU. I would like to compress the frames losslessly with HEVC to a file, which can then be decompressed on the GPU to reconstruct the original frames.

First off, some background info:

  • Ubuntu 18.04
  • NVIDIA driver 430.50
  • CUDA 10.0
  • Video Codec SDK 9.0.20
  • NVIDIA RTX 2080 Ti

I read through the samples, which seem to have very limited documentation. AppEncCuda looked like it encodes video to a bitstream that is then written to a file. I modified it slightly to accept input from my camera, but otherwise it is very similar to the original. Everything seems to work (at least no errors are thrown), and the bitstream appears to be written to the file successfully.

However, when I try to decode it by running the AppDecMem sample, I get errors. It seems to me like this should load my encoded bitstream into memory and decode it, but the errors appear to come from the FFmpeg demuxing step. I’m constantly getting errors that say:

[ERROR][11:49:54] General error -1094995529 at line 166 in file …/…/NvCodec/…/Utils/FFmpegDemuxer.h
[ERROR][11:49:54] No AVFormatContext provided.

It seems to me that I am not writing this file in a manner that FFmpeg can understand.

So I have a few questions:
1.) Does AppEncCuda create a multiplexed bitstream that needs to be demuxed? Is this bitstream in a format that FFmpeg can automatically demux/parse, or am I missing a step?
1.1) If FFmpeg cannot demux/parse this without additional information, what is the point of having the AppEncCuda sample write a bitstream to a file? All of the decode samples seem to rely on FFmpeg to demux, so it seems that nothing would be able to decode it.
2.) If I want to write this to a file that can be decoded, do I need to add information so the bitstream can be parsed?
2.1) I added the VUI parameters as shown in the AppEncDec sample - do I also need to supply the AVFormatContext somehow?
3.) Do I need to write this to a file differently? Should I be looking to modify FFmpegStreamer to write to a file instead of a network address? Or does something like this already exist?
3.1) I noticed that FFmpegStreamer creates AVCodecParameters as part of the video stream, and it looks like it writes the header with this info via avformat_write_header() - do these values need to be added?
4.) Am I just going about this wrong? Is this type of functionality built into FFmpeg already? Should I just be using an FFmpeg build with NVENC support to encode a series of images to a file?
4.1) If so, what are the NVENC samples for? Are they demonstrating more complex use cases that I’m just not grasping?

Thanks in advance for any help!