Quest Specifications
This page contains technical information about the Quest cluster.
Northwestern regularly invests in Quest to refresh and expand computing resources to meet the needs of the research community. This includes hundreds of nodes that are available free of charge through the General Access proposal process (see below). For more information on purchasing Buy-In nodes, please see Purchasing Resources on Quest.
Quest Architecture
Quest has an IBM GPFS parallel filesystem with ESS storage totaling approximately 8.0 petabytes. Users have access to a small (80 GB) home directory, as well as a project directory optimized for high-performance computing operations.
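For orientation, here is a minimal shell sketch of checking usage in each storage location; the /projects/<allocationID> layout and the example ID p12345 are assumptions for illustration, not confirmed mount points:

    # Check usage of your home directory (80 GB).
    du -sh "$HOME"
    # Check usage of a project directory; the /projects/<allocationID>
    # path and the ID p12345 are assumed here for illustration.
    du -sh /projects/p12345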
Quest comprises four login nodes, which users connect to directly, and 1,335 compute nodes with a total of 84,622 cores for scheduled jobs (please note that Quest 9 will retire in March 2025, reducing the total to 1,239 nodes). These compute nodes include 93 GPU nodes and 25 high-memory nodes. Both the login and compute nodes run the Red Hat Enterprise Linux 7.9 operating system.
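A hedged illustration of connecting through a login node; the hostname below is an assumption, so consult the official Quest login instructions for the canonical address:

    # Connect to one of the four Quest login nodes via SSH.
    # Replace <netid> with your Northwestern NetID; the hostname
    # quest.northwestern.edu is assumed here for illustration.
    ssh <netid>@quest.northwestern.edu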
Regular Compute Nodes
Quest 9 - Intel Cascade Lake (Expected Retirement in Spring 2025)
- Number of Nodes: 96 nodes with 3,840 cores total, 40 cores per node
- Processor: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
- Memory: Per node (Per Core) 192 GB (4.8 GB), Type: DDR4 2933 MHz
- Interconnect: Infiniband EDR
Quest 10 - Intel Cascade Lake
- Number of Nodes: 555 nodes with 28,860 cores total, 52 cores per node
- Processor: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
- Memory: Per node (Per Core) 192 GB (3.7 GB), Type: DDR4 2933 MHz
- Interconnect: Infiniband EDR
Quest 11 - Intel Ice Lake
- Number of Nodes: 209 nodes with 13,376 cores total, 64 cores per node
- Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0GHz
- Memory: Per node (Per Core) 256 GB (4 GB), Type: DDR4 3200 MHz
- Interconnect: Infiniband HDR Compatible
Quest 12 - Intel Ice Lake
- Number of Nodes: 212 nodes with 13,568 cores total, 64 cores per node
- Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0GHz
- Memory: Per node (Per Core) 256 GB (4 GB), Type: DDR4 3200 MHz
- Interconnect: Infiniband HDR
Quest 13 - Intel Emerald Rapids (Expected Availability in Spring 2025)
- Number of Nodes: 142 nodes with 18,176 cores total, 128 cores per node
- Processor: Intel(R) Xeon(R) Platinum 8592+ CPU @ 1.9GHz
- Memory: Per node (Per Core) 512 GB (4 GB), Type: DDR5 5600 MHz
- Interconnect: Infiniband HDR
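Quest schedules work with Slurm, so a batch job can be steered toward a particular node generation. A minimal sketch, assuming a hypothetical feature label quest12 and a hypothetical account ID; the real constraint names are not listed on this page:

    #!/bin/bash
    #SBATCH --account=p12345        # hypothetical allocation/account ID
    #SBATCH --partition=normal      # one of the partitions named on this page
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=64    # a full Quest 12 node has 64 cores
    #SBATCH --time=12:00:00
    #SBATCH --constraint=quest12    # hypothetical feature label for the node generation
    srun ./my_mpi_program           # replace with your actual executable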
GPU Nodes
Quest has a total of 273 GPU cards on 93 GPU nodes across General Access and Buy-In allocations. We provide details on the General Access GPU nodes below, and a sample GPU job script follows the node specifications. For more information on how to use the GPUs on Quest, see GPUs on Quest.
Quest 10 - Intel Cascade Lake with NVIDIA A100 GPUs (Expected Retirement in Fall 2026)
- Number of Nodes: 16 nodes with 832 cores total, 52 cores per node
- Processor: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz (Cascade Lake)
- Memory: Per node (Per Core) 192 GB (3.7 GB), Type: DDR4 2933 MHz
- GPU Cards: 2 x 40 GB NVIDIA A100 (Connected with PCIe)
- Interconnect: Infiniband EDR
Quest 12 - Intel Ice Lake with NVIDIA A100 GPUs (Expected Retirement in Fall 2028)
- Number of Nodes: 18 nodes with 1,152 cores total, 64 cores per node
- Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0GHz (Ice Lake)
- Memory: Per node (Per Core) 512 GB (8 GB), Type: DDR4 3200 MHz
- GPU Cards: 4 x 80 GB NVIDIA A100 (SXM4 form factor with HBM2e memory)
- Interconnect: Infiniband HDR
Quest 13 - Intel Emerald Rapids with NVIDIA H100 GPUs (Available via Pilot in November 2024)
- Number of Nodes: 24 nodes with 1,536 cores total, 64 cores per node
- Processor: Intel(R) Xeon(R) Platinum 8562Y+ CPU @ 2.8GHz (Emerald Rapids)
- Memory: Per node (Per Core) 1 TB (16 GB), Type: DDR5 5600 MHz
- GPU Cards: 4 x 80 GB NVIDIA H100 (SXM5 form factor with HBM3 memory)
- Interconnect: Infiniband NDR
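As noted above, a minimal Slurm sketch for requesting one of these GPUs; the partition name gengpu and the GPU type string a100 are assumptions for illustration and may differ from the cluster's actual labels:

    #!/bin/bash
    #SBATCH --account=p12345        # hypothetical allocation/account ID
    #SBATCH --partition=gengpu      # assumed General Access GPU partition name
    #SBATCH --gres=gpu:a100:1       # one A100; the type string is assumed
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=32G
    #SBATCH --time=04:00:00
    nvidia-smi                      # confirm the GPU is visible before real work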
High-Memory Nodes
Quest has a total of 25 high-memory nodes with 0.5–2 TB of memory per node for scheduled jobs. One node with 1.5 TB of memory supports General Access, and the remaining nodes support Buy-In allocations. For more information on how to run on a high-memory node, see Quest Partitions/Queues; a minimal job sketch follows below.
In addition, three nodes with 1.5 TB of memory support the Quest Analytics Nodes.
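The key request for landing on a high-memory node is memory rather than cores. A minimal sketch, assuming a hypothetical partition name; see Quest Partitions/Queues for the actual one:

    #!/bin/bash
    #SBATCH --account=p12345        # hypothetical allocation/account ID
    #SBATCH --partition=himem       # hypothetical high-memory partition name
    #SBATCH --ntasks=1
    #SBATCH --mem=1000G             # well above a regular node's 192-512 GB
    #SBATCH --time=24:00:00
    ./my_analysis_program           # replace with your actual executable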
Job Limits
Researchers using Quest may submit up to 5,000 jobs at one time. General Access jobs with a wall time of four hours or less can run on most of Quest's compute nodes and will experience the shortest wait times.
General Access Resources and Architectures
A significant amount of computing capacity is available to everyone through the General Access proposal process. Currently, 541 regular nodes, 34 GPU nodes, and 1 high-memory node are available exclusively for General Access use. Furthermore, General Access jobs can run on the majority of dedicated (Full Access) Quest nodes for up to 4 hours. Researchers using General Access allocations can request appropriate partitions/queues depending on their computational needs; for instance, the short/normal/long partitions provide access to the regular nodes. The "short" queue has access to the majority of Quest nodes and all regular node architectures; a minimal job-script sketch follows.
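A minimal sketch of a short-partition job that stays within the four-hour window; the account ID is hypothetical:

    #!/bin/bash
    #SBATCH --account=p12345        # hypothetical allocation/account ID
    #SBATCH --partition=short       # <= 4 hours; widest pool of nodes
    #SBATCH --ntasks=1
    #SBATCH --mem=4G
    #SBATCH --time=04:00:00         # at or under four hours for the shortest waits
    ./my_program                    # replace with your actual executable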
Genomics Compute Cluster
A large number of nodes, along with dedicated storage, are available for genomics research. For more details, please see the Genomics Compute Cluster on Quest.