I feel like this is going to be super simple for many of you, but I am trying to import 12 .csv data files in a folder (i.e., data1.csv, data2.csv, data3.csv, ... , data12.csv) and then merge all of them into one big data file by column names. Note that all data files have the same column names.
Try this: the idea is to apply the function in lapply to each file in your directory, which results in a list of dataframes. bind_rows then collapses all the elements in that list into one big dataframe.
dir_name <- "your/directory/here"  # replace with the folder that holds data1.csv ... data12.csv
dfs <- lapply(
  list.files(dir_name, pattern = "\\.csv$", full.names = TRUE),
  function(file) {
    read.csv(file)
  }
)
dplyr::bind_rows(dfs)
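If you also want to keep track of which file each row came from, a minimal variation (just a sketch, assuming the same dir_name and that dplyr is installed) is to name the list elements and use bind_rows' .id argument:

files <- list.files(dir_name, pattern = "\\.csv$", full.names = TRUE)
dfs <- lapply(files, read.csv)
names(dfs) <- basename(files)                            # name each element after its file
combined <- dplyr::bind_rows(dfs, .id = "source_file")   # adds a column recording the origin file

combined then has all rows from the twelve files plus the extra source_file column.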
Below is what I use to merge various files together. In my example the join is on the column "Barcode", but as always, the column name can be changed to fit your data.
library(dplyr)
CSVcombined <- full_join(CSV1, CSV2, by = "Barcode")
This assumes you have already imported the files. I usually do this manually since I run this quite often, and the file directories are never in consistent places.
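If you do import all twelve files up front, one way to extend that two-file join without typing each full_join by hand is to fold it over a list. This is only a sketch; the "your/directory/here" path and the csv_list name are placeholders, not anything from the answer above:

library(dplyr)

csv_files <- list.files("your/directory/here", pattern = "\\.csv$", full.names = TRUE)
csv_list  <- lapply(csv_files, read.csv)

# repeatedly full_join pairs of data frames on the shared "Barcode" column
CSVcombined <- Reduce(function(x, y) full_join(x, y, by = "Barcode"), csv_list)

Joining on "Barcode" fits the case where each file contributes different columns for the same barcodes; when the files all share exactly the same columns, as in the question, the bind_rows approach above is the simpler fit.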