Setting Up Remote Development Using Custom Template On Runpod

Introduction

Runpod is a popular cloud computing platform that provides on-demand GPU instances, ranging from the older Ampere GPUs to the latest Hopper and Blackwell GPUs. It is not only useful for deploying services at large scale, but also a great platform for remote development using GPUs.

In this blog post, I would like to share how to set up a remote development environment using a custom template (custom Docker image) on Runpod, along with some of its caveats.

Configure SSH Access Using Custom Template

SSH access has to be configured in the custom template in order for IDEs to access the Runpod pod instance. The procedure is formally documented in the Runpod documentation “Connect to a Pod with SSH”.

Create New Template

New custom templates can be created in the user’s Runpod console under the “Manage/My Templates” tab.

The final custom template that allows SSH and IDE access is shown in the following figure.

Custom Template

Docker Image

The Docker images I commonly use are prepared by NVIDIA and hosted on NVIDIA NGC. In this example, specifically, I am using the CUDA Ubuntu 24.04 development image nvcr.io/nvidia/cuda:12.6.3-devel-ubuntu24.04. Users are free to choose any Debian/Ubuntu based Docker image that fits their needs. But please note that the Runpod platform might not always have the latest GPU drivers to support the latest NVIDIA Docker images that run the latest version of the CUDA runtime.

TCP Port

The TCP port for SSH access is 22 and it must be exposed in the custom template.

SSH Daemon

Most custom templates do not have an SSH daemon installed and configured. We can configure all of this in the “Container Start Command” section of the custom template.

bash -c 'apt update;DEBIAN_FRONTEND=noninteractive apt-get install openssh-server -y;mkdir -p ~/.ssh;cd $_;chmod 700 ~/.ssh;echo "$PUBLIC_KEY" >> authorized_keys;chmod 700 authorized_keys;service ssh start;sleep infinity'

If we already have a custom container start command, replace the sleep infinity at the end of the command above with our own command.
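For example, if the container should launch a hypothetical startup script /workspace/my_startup.sh (the path is only an illustration) instead of idling, the container start command could look roughly like this:

```
bash -c 'apt update;DEBIAN_FRONTEND=noninteractive apt-get install openssh-server -y;mkdir -p ~/.ssh;cd $_;chmod 700 ~/.ssh;echo "$PUBLIC_KEY" >> authorized_keys;chmod 700 authorized_keys;service ssh start;/workspace/my_startup.sh'
```

Note that the final command should block (as sleep infinity does); if it exits, the container stops.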

If the SSH daemon is not configured properly, we would not be able to access the RunPod instance via SSH.

Note that because this runs a couple of apt commands after the container starts, the pod instance will take a bit longer to become SSH accessible after it is launched.

Access RunPod Instance Via IDE

Once the custom template is created, we can create a new pod instance using the template. The pod instance should be accessible via SSH and IDEs such as VSCode and Cursor. The procedures have been formally documented in the Runpod documentation “Connect to a Pod with VSCode or Cursor”.

Configure SSH Keys On Local Machine

An SSH key pair has to be generated on our local machine and its public key added to our Runpod account. This step is straightforward for users who have used SSH and Git before. A more detailed description can be found in the “Generate an SSH key” section of the Runpod documentation.
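As a quick sketch, generating an Ed25519 key pair on the local machine could look like the following. The key file name and comment are just examples, and the empty passphrase (-N "") is only for brevity; a passphrase is generally recommended.

```shell
# Make sure the SSH directory exists with the right permissions.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Generate an Ed25519 key pair (example file name and comment).
ssh-keygen -t ed25519 -f ~/.ssh/runpod_ed25519 -N "" -C "runpod"
# Print the public key so that it can be pasted into the
# SSH public key field in the Runpod account settings.
cat ~/.ssh/runpod_ed25519.pub
```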

Configure Pod Deployment

We can select a GPU instance and launch the pod using the custom template we created. One key difference from the Runpod official templates is that there is no “SSH Terminal Access” option in the deployment configurations. This, however, will not prevent us from accessing the pod instance via SSH and IDEs.

Custom Template Deployment Configuration

Runpod Official Template Deployment Configuration

SSH Access To Pod Instance

Once the pod instance has been successfully launched, we can easily find the pod access information.

Pod Access Information

In our case, the SSH over exposed TCP information is what we need for SSH and IDE access.

We can use the SSH command provided in the SSH over exposed TCP section to access the pod instance terminal via SSH.

$ ssh root@213.173.108.216 -p 15570 -i ~/.ssh/id_ed25519
The authenticity of host '[213.173.108.216]:15570 ([213.173.108.216]:15570)' can't be established.
ED25519 key fingerprint is SHA256:UgkHmpjsFMZ1RYx3wdDrxr74tlFyGDnVrX5z6KSOxRs.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[213.173.108.216]:15570' (ED25519) to the list of known hosts.
Welcome to Ubuntu 24.04.3 LTS (GNU/Linux 6.8.0-60-generic x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@aef7eede715c:~# nvidia-smi
Tue Oct 7 23:35:11 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.57.08 Driver Version: 575.57.08 CUDA Version: 12.9 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA RTX A4500 On | 00000000:C1:00.0 Off | Off |
| 30% 24C P8 14W / 200W | 1MiB / 20470MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
root@aef7eede715c:~#

IDE Access To Pod Instance

It should be noted that running Remote-SSH: Connect to Host via user@host in VSCode or Cursor will not work. We have to configure the SSH access for the IDE first by strictly following Steps 4 and 5 in the “Connect to a Pod with VSCode or Cursor” Runpod documentation.

In the IDE, we just have to open the Command Palette (Ctrl+Shift+P or Cmd+Shift+P), choose Remote-SSH: Connect to Host, and then select Add New SSH Host. Enter the copied SSH command ssh root@213.173.108.216 -p 15570 -i ~/.ssh/id_ed25519 and press Enter. The IDE will parse the SSH command and add a new entry to the SSH config file.
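For reference, the parsed entry that ends up in ~/.ssh/config looks roughly like the following. The exact formatting depends on the IDE, and the IP address and port here are the ones from the example above:

```
Host 213.173.108.216
    HostName 213.173.108.216
    Port 15570
    User root
    IdentityFile ~/.ssh/id_ed25519
```

We can also rename the Host alias to something more memorable; the IDE will then list the pod under that name.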

To verify the IDE access and CUDA functionality, we created a new CUDA program file test.cu via the IDE, compiled it using nvcc, and ran the program on the Runpod instance.

test.cu
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <iostream>

#include <cuda_runtime.h>

#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)
void check(cudaError_t err, char const* const func, char const* const file,
           int line)
{
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << " " << func << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)
void checkLast(char const* const file, int line)
{
    cudaError_t const err{cudaGetLastError()};
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

__global__ void kernel()
{
    printf("Hello from CUDA kernel!\n");
}

int main()
{
    // Launch kernel
    kernel<<<1, 1>>>();
    CHECK_LAST_CUDA_ERROR();
    CHECK_CUDA_ERROR(cudaDeviceSynchronize());

    return 0;
}
root@aef7eede715c:~# /usr/local/cuda-12.6/bin/nvcc test.cu -o test
root@aef7eede715c:~# ./test
Hello from CUDA kernel!

Sometimes, if the CUDA version supported by the platform, i.e., the driver, is lower than the CUDA version of the container, i.e., the runtime, we might encounter compile-time or run-time errors. Please make sure to check this if weird errors occur.
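On the pod, the driver’s supported CUDA version is shown in the header of nvidia-smi (12.9 in the session above), while the container toolkit’s version is shown by nvcc --version (12.6 here). A minimal sketch of the comparison in shell, assuming the usual requirement that the driver’s CUDA version be at least the toolkit’s (the helper name is ours, and sort -V assumes GNU coreutils):

```shell
# Returns success (0) if the driver's supported CUDA version is at least
# as new as the container's CUDA toolkit version.
cuda_versions_compatible() {
    driver_cuda="$1"   # e.g., "12.9" from the nvidia-smi header
    toolkit_cuda="$2"  # e.g., "12.6" from nvcc --version
    # The newer of the two versions must be the driver's version.
    highest="$(printf '%s\n%s\n' "$driver_cuda" "$toolkit_cuda" | sort -V | tail -n 1)"
    [ "$highest" = "$driver_cuda" ]
}

# Example with the versions from the session above.
if cuda_versions_compatible "12.9" "12.6"; then
    echo "driver supports the container toolkit"
fi
```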

Runpod Referral

There is no free tier on Runpod and running everything costs credits. However, if first-time users sign up using my referral link, they will get some credits to try out the platform for free.

References

Setting Up Remote Development Using Custom Template On Runpod

https://leimao.github.io/blog/Setting-Up-Remote-Development-Custom-Template-Runpod/

Author

Lei Mao

Posted on

10-08-2025

Updated on

10-08-2025
