Running the code above, I get AIC, AICc, BIC, etc. for each lagged-predictor model by .id. I am wondering if it's possible to pull these metrics out per model only, rather than per model per .id, without using group_by() and summarize().
When using cross-validation, you are estimating many models.
As a result, you will receive a set of summary statistics (AIC, AICc, BIC, etc.) per model per slice. If you were to combine them using group_by() and summarise(), you would be pooling summary information from models fitted to different response data - this doesn't make much sense.
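For concreteness, here is a minimal sketch of that setup, assuming a monthly tsibble `my_data` with response `y`, predictor `x`, and index `Month` (all placeholder names, not taken from your code):

```r
library(fpp3)

# Stretching the tsibble creates the cross-validation folds: each
# .id is a training set one observation longer than the previous one.
cv_data <- my_data |>
  stretch_tsibble(.init = 24, .step = 1)

# Fit the candidate lag specifications to every fold.
cv_fits <- cv_data |>
  model(
    lag0 = ARIMA(y ~ x),
    lag1 = ARIMA(y ~ x + lag(x)),
    lag2 = ARIMA(y ~ x + lag(x) + lag(x, 2))
  )

# One row per model per fold (.id), each fitted to different data -
# which is why pooling these AICs with group_by()/summarise() is not
# meaningful.
glance(cv_fits)
```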
If you want to compare the performance of the models using cross-validation, you can compute out-of-sample accuracy measures with accuracy(). Examples of cross-validated accuracy evaluation can be found at https://otexts.com/fpp3/tscv.html
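Continuing the sketch above, accuracy() pools the folds and returns one row per model:

```r
# One-step-ahead forecasts. Because x is an exogenous regressor,
# forecast() needs its future values, taken here from the full data.
future_x <- new_data(cv_data, 1) |>
  left_join(as_tibble(my_data) |> select(Month, x), by = "Month")

cv_fc <- cv_fits |>
  forecast(new_data = future_x)

# One row of RMSE, MAE, etc. per model; the lowest-error model is
# performing best out of sample.
cv_fc |>
  accuracy(my_data) |>
  arrange(RMSE)
```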
OK I see. Is there a way for me to get AICs for the 3 models I fitted? I am just trying to determine the optimal lag for the predictor.
In Prof. Hyndman's fpp3 book, under "Lagged Predictors", he uses the same idea but on the full training set (not cross-validated), so he's able to get a single AIC, BIC, etc. per model, whereas I get these metrics per .id/slice.
The AIC values for a model are obtained using glance().
When you cross-validate your data, you are creating several slices of it. Modelling these slices gives you a model for every slice, and glancing that gives you an AIC for every model on every slice.
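To reproduce the book's approach, fit the candidate models once on the full training set rather than on the stretched folds; glance() then returns exactly one row of AIC/AICc/BIC per model. A sketch using the same placeholder names as above:

```r
library(fpp3)

fit <- my_data |>
  # For the AICs to be comparable, all models must be estimated on
  # the same rows, so blank out the observations that the longest
  # lag consumes (the fpp3 "Lagged predictors" section does the same).
  mutate(y = replace(y, 1:2, NA)) |>
  model(
    # Fixing d = 0 keeps the response undifferenced across models,
    # as in the book, so the information criteria stay comparable.
    lag0 = ARIMA(y ~ pdq(d = 0) + x),
    lag1 = ARIMA(y ~ pdq(d = 0) + x + lag(x)),
    lag2 = ARIMA(y ~ pdq(d = 0) + x + lag(x, 2))
  )

# One row per model: sigma2, log_lik, AIC, AICc, BIC, ...
glance(fit) |>
  select(.model, AIC, AICc, BIC) |>
  arrange(AICc)
```

This mirrors the book's approach: one fit per candidate lag, one information-criterion row per fit.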
You can identify an appropriate lag by evaluating out-of-sample accuracy() performance as described above; the model with the lowest errors is performing best.