I have to develop an R script that reads several images, transforms each one into a numeric vector, and stores the vectors in a data frame with one row per image (1000) and one column per variable (4096). The data frame will then be exported to a CSV file.
I have tried a loop that reads and transforms the images one by one and then joins them into a matrix or data frame. I'm new to this and a bit lost, so I'm sharing what I have tried. Any ideas or improvements?
library(OpenImageR) # readImage, rgb_2gray and resizeImage come from this package

set.seed(1234)
mypath <- "C:/dataset/dataset/effusion/" # folder where the images are
files <- list.files(path = mypath, pattern = "\\.png$")
length(files)

out <- matrix(NA_real_, nrow = length(files), ncol = 64 * 64) # one row per image
for (i in seq_along(files)) {
  im <- readImage(file.path(mypath, files[i])) # readImage takes a full path; it has no header argument
  grises <- rgb_2gray(im) # collapse the 3 channels to grayscale
  # Resize the image to a size of 64 x 64 pixels:
  resiz <- resizeImage(grises, width = 64, height = 64, method = 'nearest')
  # Convert to vector and store it as one row:
  out[i, ] <- as.vector(resiz)
}
out <- as.data.frame(out) # create a data frame
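Since the end goal is a CSV export, here is a minimal, self-contained sketch of the row-binding and write.csv step; the fake data and the file name are assumptions standing in for the real images:

```r
# Three fake flattened 64 x 64 "images" stand in for the real ones:
n_images <- 3
vecs <- lapply(seq_len(n_images),
               function(i) as.vector(matrix(runif(64 * 64), 64, 64)))

# Bind the vectors row-wise, then convert to a data frame:
out <- as.data.frame(do.call(rbind, vecs))
dim(out) # one row per image, 4096 columns

# Export; the path and file name are just examples:
write.csv(out, file.path(tempdir(), "images.csv"), row.names = FALSE)
```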
Here, to test my own answer, I'm flying blind: there is no representative data, and no mention of which libraries provide functions like rgb_2gray. The friction of reverse-engineering the problem deters many who might answer.
It doesn't have to be *all* the data, just something representative, say 4x40 or whatever is big enough to yield a 64 x 64 image (whether or not it's a whole image). I'm also thinking a purrr::map will be quicker and easier than a loop.
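A sketch of that purrr::map idea, assuming readImage, rgb_2gray and resizeImage come from the OpenImageR package (img_to_vec is a hypothetical helper name, not an existing function):

```r
library(purrr)
library(OpenImageR)

mypath <- "C:/dataset/dataset/effusion/"
files <- list.files(path = mypath, pattern = "\\.png$")

# Hypothetical helper: one image file -> one numeric vector of 4096 values
img_to_vec <- function(f) {
  im <- readImage(file.path(mypath, f))
  g  <- rgb_2gray(im)
  r  <- resizeImage(g, width = 64, height = 64, method = 'nearest')
  as.vector(r)
}

# map() returns a list of vectors; bind them into one row-per-image data frame:
out <- as.data.frame(do.call(rbind, map(files, img_to_vec)))
```

The loop body moves into a small function, so the read/transform step can be tested on a single file before mapping it over the whole folder.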
Thank you! I will try it with the advice you have given me.
On the other hand, for more information: the data I have is a folder with 500 images of 512 x 512 pixels and 3 channels. I must resize each image to 64 x 64 and convert the matrix into a numeric vector of 4096 values, then join the vectors of all the images into a CSV with one row per image and 4096 columns.
I will try to make it with purrr.
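As a sanity check on those dimensions, here is a base-R sketch with no image packages: a weighted channel average and every-8th-pixel subsetting stand in for rgb_2gray and nearest-neighbour resizing, showing how a 512 x 512 x 3 image becomes a vector of 4096 values:

```r
# Synthetic 512 x 512 x 3 array standing in for one RGB image:
im <- array(runif(512 * 512 * 3), dim = c(512, 512, 3))

# Grayscale: weighted average of the 3 channels:
g <- 0.299 * im[, , 1] + 0.587 * im[, , 2] + 0.114 * im[, , 3]

# Nearest-neighbour style downsample to 64 x 64: keep every 8th pixel (512 / 64 = 8)
idx <- seq(1, 512, by = 8)
r <- g[idx, idx]

v <- as.vector(r)
length(v) # 4096 = 64 * 64
```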