Remote SSH to host works but the terminal does not seem to communicate with interpreter selection

Hi all! I recently started trying to connect Positron through SSH to a Red Hat-based HPC cluster (managed through SLURM), and so far I have had success: the terminal opens on the login node of the HPC.

From the Positron terminal connected to the HPC I can then request an interactive compute node, and this also works. There I can open an R session, and `Sys.info()` returns the compute node's nodename.

The problem lies with the R interpreter. For context, I have been able to start these interpreter sessions from an R installation in a conda environment, but for some reason Positron distinguishes between the regular interpreter and the conda interpreter (experimental for R) at the same location: the conda interpreter fails, while the regular ("System") interpreter works [see image below]. The System interpreter is even able to load the packages I installed in the conda environment without issues, and without my having to activate the environment. I don't really understand how this works.

However, the real problem appears when, after requesting the compute node, I try to start the System interpreter. For some reason, Positron launches the interpreter on the login node (according to `Sys.info()`), not on the compute node set up in the terminal. If anyone has advice on how to tell Positron to start interpreters on the compute node requested in the Positron bash terminal, I'd really appreciate it.

Here is the current Positron info:

Positron Version: 2026.02.1 (system setup) build 5
Code - OSS Version: 1.106.0
Commit: 693b6d13ba5d61566bec7f5a4a46126eff7bbbe1
Date: 2026-02-10T20:21:21.336Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Windows_NT x64 10.0.19045


Can you say more about how you are selecting a compute node in the terminal and starting an R session there? e.g. what commands do you use, exactly?

Anything you set or do in a Positron terminal is, in most cases, only going to affect that specific terminal. If you want the kernels to run in compute nodes, you will probably need to customize startup somewhat.
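One pattern for customizing startup (a sketch only — I'm not claiming Positron's interpreter discovery accepts a wrapper like this, and the R path and SLURM flags below are illustrative assumptions) is a small wrapper script that launches R under `srun`, so the kernel process itself lands on a compute node:

```shell
#!/bin/bash
# Hypothetical ~/bin/R-compute wrapper. All SLURM flags and the R path
# are illustrative assumptions, not verified Positron behavior.
# Launches R inside a SLURM job so the kernel runs on a compute node.
exec srun -t 1:00:00 -p interact -n 1 --cpus-per-task=1 --mem=4g \
    --pty /usr/bin/R "$@"
```

Whether Positron will treat such a wrapper as a valid interpreter is something you'd have to test.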

Here are some related features (does either sound like what you need?):

Thanks for your quick reply Jonathan.

Our cluster uses SLURM, so after logging in I run:

```shell
srun -t 1:00:00 -p interact -n 1 --cpus-per-task=1 --mem=1g --pty /bin/bash
```

This changes the terminal prompt from

```
[user@cluster-loginnode]$
```

to

```
[user@computenode]$
```

So the terminal is running on the compute node, but the Positron R session keeps running on cluster-loginnode.

Overall, the SSH connection to the HPC works fine in Positron, but as you suggest, my issue seems to be that the kernel runs on the login node rather than on the compute node. Do you have any suggestions for changing this behavior?

Of the two links you provided, I think the second one relates most closely to my issue.

Thanks, it's much clearer now! Yes, the second issue captures what you need here, which is the ability to run kernels/sessions on a different machine than the main Positron system. Positron currently does not do this, though most of the machinery is in place.

Do you want to run each individual R or Python session with srun, or run all sessions in the same srun job?

Is there any chance you can run Positron itself on the compute node?

This is our solution to this problem: GitHub - rnabioco/remote-ssh-positron: Set up Alpine to use Positron
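For later readers: the general shape of this kind of setup (the hostnames and username below are placeholders for illustration — see the repo for the actual setup script) is an SSH configuration that jumps through the login node, so that Positron's Remote SSH connects directly to a compute node:

```
# ~/.ssh/config -- hostnames and user are placeholders, not real cluster names
Host cluster-login
    HostName login.hpc.example.edu
    User myuser

Host computenode*
    ProxyJump cluster-login
    User myuser
```

With a config like this, connecting Positron to `computenode01` tunnels through the login node, and the Positron server (and its kernels) run on the compute node itself.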


Thank you both!

  1. To answer Jonathan: it is great to know that the pieces are there, and I hope development moves forward and makes this easier in the future; we all really appreciate how active the development team is. Ideally I would like to run all sessions in the same srun job, so that I could parallelize if needed while developing scripts interactively. I do not have admin powers on the cluster, but I will ask if they can install Positron, though this might take a while. The cluster has RStudio and RStudio Server configured and also available through OnDemand.

  2. To answer Jay: thank you! I will try this, hopefully by the end of today, and see if it works and whether I hit any permission roadblocks. I think the Positron server was already installed in my user home directory (not the cluster home) when I SSH'd to the cluster before, so I will try to set it up there first.
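For the "all sessions in one srun job" idea, the SLURM pattern I have in mind (the partition name and resources below are just examples for our cluster) is to hold one allocation open with `salloc` and place individual tasks inside it with `srun`:

```shell
# Hold a single allocation open (example resources; adjust for your cluster).
salloc -t 4:00:00 -p interact -n 4 --cpus-per-task=1 --mem=8g

# Inside the shell that salloc opens, each srun call runs within
# the same allocation, so tasks can be parallelized across its CPUs:
srun -n 1 hostname
```

Getting Positron's kernels to start inside that allocation is the part that still needs a solution.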

I'll report back after that.

This worked like a charm, Jay! I just applied some minimal changes to the .sh script to account for the cluster names/info we have here.

I will post if I encounter any issues later, but for now this seemed to resolve the problem.
