Last week I could use dplyr::db_drop_table on sparklyr tables on our Spark cluster. Now when I attempt it, I get the following error.
library(sparklyr)
sc <- spark_connect(master = "local")
copy_to(sc, mtcars)
#> # Source: spark<mtcars> [?? x 11]
#>      mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
#>    <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#>  1  21       6  160    110  3.9   2.62  16.5     0     1     4     4
#>  2  21       6  160    110  3.9   2.88  17.0     0     1     4     4
#>  3  22.8     4  108     93  3.85  2.32  18.6     1     1     4     1
#>  4  21.4     6  258    110  3.08  3.22  19.4     1     0     3     1
#>  5  18.7     8  360    175  3.15  3.44  17.0     0     0     3     2
#>  6  18.1     6  225    105  2.76  3.46  20.2     1     0     3     1
#>  7  14.3     8  360    245  3.21  3.57  15.8     0     0     3     4
#>  8  24.4     4  147.    62  3.69  3.19  20       1     0     4     2
#>  9  22.8     4  141.    95  3.92  3.15  22.9     1     0     4     2
#> 10  19.2     6  168.   123  3.92  3.44  18.3     1     0     4     4
#> # … with more rows
dplyr::db_drop_table(con = sc, table = "mtcars")
#> Error in UseMethod("db_drop_table"): no applicable method for 'db_drop_table' applied to an object of class "c('spark_connection', 'spark_shell_connection', 'DBIConnection')"
spark_disconnect(sc)
Created on 2021-03-02 by the reprex package (v0.3.0)
The only thing I've done between it working last week and not working now is updating to dbplyr 2.1.0. Reverting to dbplyr 2.0.0 doesn't fix the issue, so I'm now wondering if one of the numerous packages that were updated in the process of updating dbplyr is causing this. That seems improbable, but it's also the only explanation I can think of.
I've gone back and forth between versions of dbplyr, dplyr and sparklyr (except that installing the dev version of sparklyr fails for me), but it's not doing the trick.
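For what it's worth, the workaround I'm leaning on in the meantime is going through DBI directly, on the assumption that sparklyr's DBI methods still cover table removal (I'm not sure this is the recommended replacement, so corrections welcome):

```r
library(sparklyr)
library(DBI)

sc <- spark_connect(master = "local")
copy_to(sc, mtcars)

# sparklyr connections implement the DBI interface, so this should
# drop the temp table without touching the dplyr db_* generics.
dbRemoveTable(sc, "mtcars")

# Check that the table is gone
dbListTables(sc)

spark_disconnect(sc)
```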
Anyway, I just wanted to check whether anyone else is having this issue, or whether anyone has any thoughts?
Many thanks,
Hlynur