The OpenMind Computing Cluster

A shared resource for the MIT brain research community.


The OpenMind computing cluster is operated by the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute. It provides the MIT brain research community with access to high-performance computing resources. The cluster is housed at the Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, MA, with a 10G link to the MIT campus in Cambridge. OpenMind is a Linux-based cluster with about 3.3 PB of storage and 65 compute nodes, most of which are equipped with GPUs. In total, there are about 3,500 CPU cores, 48 TB of RAM, and 330 GPUs, including 142 A100-80GB GPUs. The cluster is shared across laboratories in BCS, supporting over 600 users in more than 25 research groups. Here is a brief summary of the computing resources on OpenMind:

  • 1 Head node : 16 CPU cores, 128 GB RAM
  • 61 GPU Compute nodes : 20, 24, 40, 48, 96, or 128 CPU cores (2 hyperthreads per core); 256, 512, 770, 1000, or 2000 GB of RAM; and 2-8 GPUs per node.
  • 2 DGX-1 Compute nodes : 8 V100 GPUs on each, with 8-way NVLINK.
  • 2 DGX-A100 Compute nodes : 8 A100 GPUs on each, with 8-way NVLINK.
  • Weka storage : 900 TB.
  • Vast storage : 675 TB.
  • Lustre storage : 450 TB.
  • Normal NFS storage : 1300 TB.

Getting Started

To get started on OpenMind, navigate to this page, then log in with an MIT Kerberos account.
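Once the account request is approved, logging in from a terminal typically looks like the following sketch (the hostname shown is an assumption; use the login address given on the access page):

```shell
# Log in with your MIT Kerberos username; you will be prompted for
# your Kerberos password. The hostname below is an assumption --
# substitute the login address from the access page.
ssh <kerberos-username>@openmind.mit.edu
```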

Refer to the FAQ and Best Practices pages.

Tutorials

OpenMind Tutorial I: An Introduction

In this tutorial, high-performance computing (HPC) and the OpenMind computing cluster will be introduced. Attendees will learn HPC fundamentals, basic Linux commands, and practical skills for using the cluster, such as navigating the file systems, managing software, and submitting simple batch jobs. Hands-on examples will be demonstrated. This tutorial is designed for new users and beginners, but it is also an opportunity for experienced users to learn about HPC and OpenMind systematically.
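As a taste of the basic Linux file-system skills the tutorial covers, the following commands create a directory, write a file, and inspect it (directory and file names are arbitrary examples):

```shell
# Basic file-system operations covered in the tutorial:
mkdir -p hpc_demo                                 # create a directory
echo "hello from OpenMind" > hpc_demo/notes.txt   # write a file
ls -lh hpc_demo                                   # list contents with sizes
cat hpc_demo/notes.txt                            # prints: hello from OpenMind
```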

OpenMind Tutorial II: Applications and Containers

In this tutorial, attendees will learn how to set up applications (such as MATLAB, Python, Anaconda, containers, Jupyter Notebook, R, and Julia) on OpenMind. First, Environment Modules will be introduced, followed by setting up Python with Anaconda. The rest of the tutorial will focus on Singularity containers, which are used to support most applications on OpenMind. Topics include: exploring useful containers, running user applications in containers, and building user-customized containers. TensorFlow and PyTorch will be used as illustrative examples.
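The module and container workflow described above can be sketched roughly as follows; the module name and image path are hypothetical, so check `module avail` and your group's image collection for what actually exists on OpenMind:

```shell
# List available software modules, then load one
# (the module name below is a hypothetical example):
module avail
module load openmind/singularity

# Run a script inside a Singularity container image; --nv exposes
# the host GPUs to the container. The image path is hypothetical.
singularity exec --nv /path/to/pytorch.simg python train.py
```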

OpenMind Tutorial III: Slurm Job Scheduler and Best Practices

In this tutorial, best practices for using OpenMind will be covered. The major part will focus on the many aspects of the Slurm job scheduler, including partitions, QOS, preemption, priority, requesting CPU, memory, and GPU resources, job arrays, and job dependencies. Practical bash shell scripting skills will be introduced along with writing batch job scripts.
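Several of the Slurm features listed above can be combined in one batch script along these lines (the partition name and resource amounts are assumptions; check `sinfo` and your group's limits on OpenMind):

```shell
#!/bin/bash
# Sketch of a Slurm batch script; partition name and resource
# amounts are assumptions, not OpenMind defaults.
#SBATCH --job-name=demo
#SBATCH --partition=normal       # partition (assumed name)
#SBATCH --time=01:00:00          # wall-time limit
#SBATCH --cpus-per-task=4        # CPU cores per task
#SBATCH --mem=8G                 # memory per node
#SBATCH --gres=gpu:1             # request one GPU
#SBATCH --array=1-10             # job array of 10 tasks

echo "Array task ${SLURM_ARRAY_TASK_ID} running on $(hostname)"
```

Submit with `sbatch demo.sh`; a dependent job can be chained with `sbatch --dependency=afterok:<jobid> next.sh`.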

MATLAB for High Performance Computing

MATLAB programs can be exceptionally fast if they are optimized and parallelized, and painfully slow if not. Many MATLAB programs run faster on high-performance computing (HPC) clusters than on regular laptops or desktops. This tutorial focuses on using MATLAB on the OpenMind computing cluster. Topics include: using the MATLAB GUI on OpenMind, running MATLAB programs in Slurm batch jobs, optimizing MATLAB code, the Parallel Computing Toolbox (implicit multithreading, parfor, spmd), and running MATLAB programs on multiple CPU cores or GPUs. The skills taught in this tutorial will help you migrate your MATLAB work from a desktop/laptop to OpenMind, and thus solve bigger problems faster with MATLAB.
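A minimal batch job running a MATLAB script non-interactively might look like the sketch below; the module name and version are assumptions, so check `module avail matlab` on OpenMind:

```shell
#!/bin/bash
#SBATCH --job-name=matlab_demo
#SBATCH --cpus-per-task=8        # cores for parfor/multithreading
#SBATCH --mem=16G
#SBATCH --time=02:00:00

# The module name and version below are assumptions.
module load mit/matlab/2022a

# -batch runs the named script without the GUI and exits when done.
matlab -nodisplay -batch "my_script"
```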

Examples

Examples for submitting batch jobs are provided in this section.
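For instance, a minimal "hello world" batch job could be written and submitted as follows (partition defaults are assumed; adjust the resource requests to your needs):

```shell
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --ntasks=1
#SBATCH --mem=1G
#SBATCH --time=00:05:00
#SBATCH --output=hello_%j.out    # %j expands to the job ID

echo "Hello from $(hostname)"
```

Submit with `sbatch hello.sh` and monitor with `squeue -u $USER`; the output appears in `hello_<jobid>.out`.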

Contact

Contact Shaohao Chen for more information.