I am working in a Kaggle notebook and I was wondering whether GPU acceleration can be used with any of the tidymodels engines, since it can cut training times dramatically compared to CPU. I know it is possible with the 'xgboost' package itself (link), but I can't get it to work through the tidymodels interface. For instance, I tried:
xg_model <-
  boost_tree(
    # trees = 1000,
    # tree_depth = tune(), min_n = tune(),
    # loss_reduction = tune(),             ## first three: model complexity
    # sample_size = tune(), mtry = tune(), ## randomness
    # learn_rate = tune()                  ## step size
  ) %>%
  set_engine("xgboost", tree_method = "gpu_hist") %>%
  set_mode("classification")
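For reference, here is a minimal end-to-end sketch of what I am attempting. It assumes xgboost was installed with GPU (CUDA) support; extra arguments given to set_engine() are forwarded to the engine's fit call, so tree_method should reach xgboost unchanged. (Note: in xgboost >= 2.0 the "gpu_hist" method is deprecated in favor of tree_method = "hist" together with device = "cuda".)

    library(tidymodels)

    # Minimal sketch, assuming an xgboost build with GPU support.
    # set_engine() passes tree_method through to xgb.train() as an
    # engine-specific argument.
    xg_model <-
      boost_tree(trees = 100) %>%
      set_engine("xgboost", tree_method = "gpu_hist") %>%
      set_mode("classification")

    # Fit on a small built-in dataset to check whether the GPU path runs
    data(two_class_dat, package = "modeldata")
    xg_fit <- xg_model %>% fit(Class ~ ., data = two_class_dat)

If the GPU is not being picked up, the fit either errors (when xgboost was built without GPU support) or silently falls back to CPU, which is the behavior I am trying to diagnose.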