Both are viable options, but #1 leverages more of the existing example code. I have personally seen this done in a production environment where daily logs are pulled from Elasticsearch and saved to files similar to our Azure and Duo example data. You could then feed them into our example DFP pipeline using the MultiFileSource stage, as you mentioned. Some things to keep in mind:
- Make sure you have source/preprocess schemas that match your data. Here's an example (see also the schema sketch after this list).
- File names include a timestamp that is used to batch the source data by time period (the default is one day). For example: AZUREAD_2022-08-01T00_03_56.207Z.json. A sketch of extracting that timestamp follows below.
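
A minimal sketch of what the source/preprocess schemas might look like, loosely modeled on the Azure AD example in the DFP production pipeline. The column names here ("time", "properties.userPrincipalName", "category") are placeholders for whatever your own logs contain, and the import path can differ between Morpheus versions (older example code keeps these helpers under `dfp.utils.column_info`):

```python
from datetime import datetime

from morpheus.utils.column_info import ColumnInfo
from morpheus.utils.column_info import DataFrameInputSchema
from morpheus.utils.column_info import DateTimeColumn
from morpheus.utils.column_info import RenameColumn

# Source schema: applied as the raw files are read, before batching by time period.
source_column_info = [
    # Parse the raw event time string into a proper datetime column named "timestamp".
    DateTimeColumn(name="timestamp", dtype=datetime, input_name="time"),
    # Rename a nested field to the generic "username" column the pipeline keys on.
    RenameColumn(name="username", dtype=str, input_name="properties.userPrincipalName"),
    # Pass a column through unchanged.
    ColumnInfo(name="category", dtype=str),
]
source_schema = DataFrameInputSchema(json_columns=["properties"],
                                     column_info=source_column_info)

# Preprocess schema: applied after batching, keeping only the columns the model needs.
preprocess_column_info = [
    ColumnInfo(name="timestamp", dtype=datetime),
    ColumnInfo(name="username", dtype=str),
    ColumnInfo(name="category", dtype=str),
]
preprocess_schema = DataFrameInputSchema(column_info=preprocess_column_info)
```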
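
To illustrate how a timestamp embedded in the file name can drive that batching, here is a plain-Python sketch (not the DFP example's own helper; the function name and regex are just illustrative) that pulls the timestamp out of a name like the one above:

```python
import re
from datetime import datetime, timezone

# Matches the timestamp embedded in names like "AZUREAD_2022-08-01T00_03_56.207Z.json"
# (ISO-8601 with the colons replaced by underscores so it is filesystem-safe).
FILENAME_TS_RE = re.compile(
    r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})"
    r"T(?P<hour>\d{2})_(?P<minute>\d{2})_(?P<second>\d{2})(?:\.(?P<ms>\d{1,6}))?Z"
)


def filename_to_timestamp(filename: str) -> datetime:
    """Extract the event timestamp from a log file name so files can be grouped by period (e.g. by day)."""
    match = FILENAME_TS_RE.search(filename)
    if match is None:
        raise ValueError(f"No timestamp found in file name: {filename}")

    parts = {k: int(v) for k, v in match.groupdict().items() if v is not None and k != "ms"}
    microsecond = int((match.group("ms") or "0").ljust(6, "0"))
    return datetime(**parts, microsecond=microsecond, tzinfo=timezone.utc)


print(filename_to_timestamp("AZUREAD_2022-08-01T00_03_56.207Z.json"))
# 2022-08-01 00:03:56.207000+00:00
```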