Prototype National Research Platform
San Diego Supercomputer Center
Resource Type: Compute
User Guide: https://nrp.ai/
Recommended Use: The National Research Platform is designed for research, education, classes, and workshops, offering modern GPUs, FPGAs, and specialized hardware for advanced domain science and AI projects. Educators can use NRP in their classrooms, with GPUs, CPUs, and storage accessible through convenient interfaces such as JupyterHub and Coder. Networking researchers can take advantage of NRP nodes distributed around the world and innovative hardware such as FPGAs, P4 switches, and DPUs for unique networking experiments. High-speed networking also allows users to run experiments with data distributed across the country and internationally. Researchers and educators can also deploy services on the Kubernetes cluster and take advantage of NRP staff-managed services such as JupyterHub and hosted LLMs with chat interfaces and API access. The NRP hosts a number of Open Science Data Federation (OSDF) origins in the cluster that can be used to distribute read-only data, e.g., software packages or datasets. Users can place data on origins with support from NRP and OSDF staff and make their datasets available to others.
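As an illustration of the API access mentioned above, the sketch below queries an NRP-hosted LLM through an OpenAI-compatible chat endpoint. The base URL, model name, and API token are placeholders, not confirmed NRP values; consult the user guide at https://nrp.ai/ for the actual endpoint, available models, and how to obtain credentials.

    # Minimal sketch: querying an NRP-hosted LLM via an OpenAI-compatible API.
    # The base_url, model name, and token below are placeholders (assumptions),
    # not confirmed NRP values; see https://nrp.ai/ for the actual endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://llm.example.nrp.ai/v1",  # placeholder endpoint
        api_key="YOUR_NRP_API_TOKEN",              # placeholder credential
    )

    response = client.chat.completions.create(
        model="example-hosted-model",  # placeholder model name
        messages=[
            {"role": "user", "content": "Summarize what the NRP offers to educators."},
        ],
    )
    print(response.choices[0].message.content)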
Latitude:
Longitude:
Production Dates: 04/01/2026 -
Public URL: https://cider.access-ci.org/public/resources/RDR_003540
Description: The Prototype National Research Platform (PNRP) is a Category II NSF-funded system integrated into the Nautilus cluster, operated jointly by the San Diego Supercomputer Center at UC San Diego, the Massachusetts Green High Performance Computing Center (MGHPCC), and the University of Nebraska–Lincoln (UNL). The system features a novel, extremely low-latency fabric from GigaIO that allows dynamic composition of hardware, including FPGAs, GPUs, and NVMe storage. Each of the three sites (SDSC, UNL, and MGHPCC) includes ~1 PB of usable disk space. The three storage systems function as data origins of the content delivery network (CDN), providing data access anywhere in the country within a round-trip delay of ~10 ms via network caches at the three sites and five Internet2 network colocation facilities. The Nautilus cluster serves the broader National Research Platform (NRP), a community-owned research and education platform that connects researchers and educators to foster collaboration, accelerate innovation, and share resources. The PNRP contribution to the cluster comprises 1) an HPC subsystem at SDSC with 8 HGX A100 servers, each with 8 80 GB A100 GPUs, 512 GB of memory, and 1 TB of NVMe storage; 32 Alveo U55C FPGAs available to composed nodes via the GigaIO fabric; and 122 TB of FabreX-connected NVMe; 2) two FP32 subsystems, one each at UNL and MGHPCC, each with 18 GPU nodes with 8 A10 GPUs, 512 GB of memory, and 8 TB of NVMe per node; and 3) 8 distributed data caches of 50 TB each. In addition, the distributed Kubernetes cluster architecture enables other institutions to incorporate their own resources into the cluster.
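Because PNRP resources are exposed through the Nautilus Kubernetes cluster, a typical way to use the hardware described above is to submit a pod that requests GPUs. The sketch below uses the official Kubernetes Python client to request a single GPU; the namespace, container image, and resource amounts are placeholders and assumptions, and actual namespace setup and scheduling policies are documented in the NRP user guide.

    # Minimal sketch: requesting one GPU on the Nautilus Kubernetes cluster with
    # the official Kubernetes Python client. The namespace, image, and resource
    # values are placeholders (assumptions); see https://nrp.ai/ for real values.
    from kubernetes import client, config

    config.load_kube_config()  # uses the kubeconfig issued for the cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-test", namespace="my-namespace"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="cuda",
                    image="nvidia/cuda:12.4.1-base-ubuntu22.04",  # example image
                    command=["nvidia-smi"],
                    resources=client.V1ResourceRequirements(
                        requests={"nvidia.com/gpu": "1", "cpu": "2", "memory": "8Gi"},
                        limits={"nvidia.com/gpu": "1", "cpu": "2", "memory": "8Gi"},
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="my-namespace", body=pod)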