Hi there.
I'm pretty new to deep learning (about a month in, currently on ch. 5 of Deep Learning with R), and I'm trying to prepare an image file for use in a convnet. The image first has to be sliced up into multiple smaller images. That part is fine; I've got that. But now I have a cimg
object with dimensions (height, width, samples, channels), and I can't figure out how to get it into the keras/tensor layout, which wants (samples, height, width, channels).
Here's a reprex.
library(imager)
set.seed(8675309)
test.img <- array(runif(2 * 3 * 10 * 1), c(2, 3, 10, 1))
test.img <- as.cimg(test.img)
dim(test.img)
#> [1] 2 3 10 1
So I have 10 grayscale images, each 2 x 3 pixels.
Calling keras::image_to_array(test.img)
results in the following error:
Error in py_call_impl(callable, dots$args, dots$keywords) : ValueError: Unsupported image shape: (2, 3, 10, 1)
Most examples read the image in from an external image file. I'm looking at tens to hundreds of thousands of images, so I'd really prefer to skip the step where I write each cimg to an image file on disk and then read it back in.
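My best guess so far is that I just need to permute the dimensions with aperm(). Here's a rough sketch of what I mean (assuming the third dimension of my cimg really is the sample index, which is how I built it):

# drop the cimg class so it's a plain array, then move the 3rd dimension
# (the stack of 10 images) to the front: (samples, height, width, channels)
keras.array <- aperm(unclass(test.img), c(3, 1, 2, 4))
dim(keras.array)
#> [1] 10  2  3  1

That at least gives me the (samples, height, width, channels) shape, but I don't know whether a plain permuted array is acceptable to keras here or whether I still need image_to_array() or some other conversion step.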
Any ideas/thoughts?