After clean re-installs of WSL2/Ubuntu, RStudio, Python, and an embarrassing number of non-productive hours, I have a perfectly operational TensorFlow GPU environment that works in Python. Despite my best efforts, I have not been able to get TensorFlow to use the GPU in RStudio Server. In all likelihood there is some simple configuration issue, but I cannot find it. Any advice on how to proceed would be greatly appreciated. As a long-time R user, using TF in R with RStudio seemed like the perfect development path. There is an earlier post that is broadly on this topic, but it does not seem to address this specific problem. Hopefully someone can point me in the right direction to resolve this, lest I become, dare I say(?), a Pythonista.
DETAILS:
Windows 11
WSL2
Ubuntu 24.04
R 4.4.2
Python 3.12.3
TensorFlow 2.18
Keras 3
Status:
Followed various tutorials on installing RStudio Server and Keras/TensorFlow with GPU support.
RStudio Server is working.
Keras/TensorFlow works in the R console via reticulate::source_python, but only on the CPU.
It logs this error, then prints the correct answer:
2025-01-20 13:41:19.102632: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
[[22. 28.]
[49. 64.]]
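To rule out reticulate simply picking up a different interpreter, I can source a small diagnostic the same way (a minimal sketch of my own, not from any tutorial); it reports which Python is running and which devices TensorFlow can see:

import sys
import tensorflow as tf

# Which interpreter/virtualenv is actually running this script?
print("executable:", sys.executable)
print("prefix:", sys.prefix)

# Which devices can TensorFlow see from this process?
print("GPUs:", tf.config.list_physical_devices('GPU'))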
From the RStudio Server terminal, with the "r-reticulate" virtualenv active, running the same test script produces the same error:
2025-01-20 13:46:20.135127: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
[[22. 28.]
[49. 64.]]
(r-reticulate) myron@Hessite-ML-Dev:~$
Running the script yet again from a plain WSL2/Ubuntu terminal, it runs fine and hits the GPU:
I0000 00:00:1737401238.233482 105209 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 6691 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1070 Ti, pci bus id: 0000:c1:00.0, compute capability: 6.1
[[22. 28.]
[49. 64.]]
(r-reticulate) myron@Hessite-ML-Dev:~$
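Since the identical script behaves differently depending only on how the process is launched, the difference presumably lies in the process environment. A minimal sketch like the one below, run in each context, would show whether the CUDA-related variables differ; my guess (an assumption, not verified) is LD_LIBRARY_PATH, since under WSL2 the GPU driver libraries live in /usr/lib/wsl/lib:

import os

# Dump the environment variables that typically control CUDA library discovery.
# Under WSL2 the GPU driver libraries (libcuda.so etc.) live in /usr/lib/wsl/lib,
# which must be reachable by the dynamic loader for cuInit to find a device.
for var in ("LD_LIBRARY_PATH", "PATH", "CUDA_VISIBLE_DEVICES"):
    print(f"{var} = {os.environ.get(var, '<unset>')}")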
This is the simple GPU test script:
import tensorflow as tf

# Request placement on the first GPU; under the default soft placement this
# silently falls back to the CPU (after logging the cuInit error) if no GPU
# is visible to the process.
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
print(c.numpy())
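As an aside, TensorFlow can also log where each op actually lands, which makes the silent CPU fallback explicit. A variant of the test with that enabled (a sketch):

import tensorflow as tf

# Log the device each op is placed on, so a silent CPU fallback is visible
# in the output rather than inferred from timings.
tf.debugging.set_log_device_placement(True)

with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3])
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2])
    print(tf.matmul(a, b).numpy())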
Outside RStudio, in the virtual environments created while setting up R Keras (and plain Keras) under WSL2/Ubuntu, Python runs TensorFlow code on the GPU without issue. I have run timing comparisons between the CPU (i7-9800X) and GPU (GTX 1070 Ti); the GPU is roughly 4-5x faster on matrix operations (10^6 to 10^8 elements), so I am confident these environments have a working TensorFlow GPU installation.
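For reference, the timing comparison was along these lines (a sketch; the exact matrix sizes and repetition counts varied):

import time
import tensorflow as tf

def time_matmul(device, n, reps=10):
    # Build two n x n matrices on the given device and time repeated matmuls.
    # Note: with soft placement, '/gpu:0' quietly runs on the CPU if no GPU
    # is visible, so check the device log or timings, not just the result.
    with tf.device(device):
        a = tf.random.uniform((n, n))
        b = tf.random.uniform((n, n))
        _ = tf.matmul(a, b)            # warm-up (kernel setup / data transfer)
        start = time.perf_counter()
        for _ in range(reps):
            c = tf.matmul(a, b)
        _ = c.numpy()                  # force execution to complete
        return (time.perf_counter() - start) / reps

for dev in ("/cpu:0", "/gpu:0"):
    print(dev, time_matmul(dev, 4000))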