Hi, are there any data transformations in R or Spark that need to happen before the data is shipped to Hive? If not, then you shouldn't need Spark to add the data to Hive at all. As long as all of the files share the same layout, once you point the table's location to the HDFS folder where the files live, Hive should pick them up automatically. That is more of a Hive thing than an R/Spark/sparklyr thing; see the sketch below.
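Here is a minimal sketch of that "point Hive at the folder" approach, run from R over a sparklyr connection. Everything here is an assumption for illustration: the connection settings, the table and column names, and the `hdfs:///data/mytable` path are all hypothetical, and you'd swap in your own schema and location.

```r
library(sparklyr)
library(DBI)

# Hypothetical connection; adjust master/config to your cluster
sc <- spark_connect(master = "yarn-client")

# Create an external table that simply maps onto the existing HDFS folder.
# No data is moved or transformed; Hive just reads the files in place.
# Column names/types and the LOCATION path are placeholders.
dbGetQuery(sc, "
  CREATE EXTERNAL TABLE IF NOT EXISTS mytable (
    id    INT,
    value STRING
  )
  STORED AS PARQUET
  LOCATION 'hdfs:///data/mytable'
")
```

Because the table is `EXTERNAL`, dropping it later only removes the metastore entry, not the underlying files, which is usually what you want when the files are managed outside of Hive.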
When you "upload" data to Hive from an external source, such as R or Spark, under the hood, the data is being written to parquet files, that then is presented in the data store as a "table". Hive tables are not "physical" tables, they are all mapped logically to files in Hadoop.