A shared resource for the MIT brain research community.
The OpenMind computing cluster is operated by the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute. It provides the MIT brain research community with access to high-performance computing resources. The cluster is housed at the Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, MA, with a 10G link to the MIT campus in Cambridge. OpenMind is a Linux-based cluster with about 3.3 PB of storage and 70 compute nodes, most of which are equipped with GPUs. In total, there are about 3,500 CPU cores, 48 TB of RAM, and 330 GPUs, including 142 80-GB A100 GPUs. The cluster is shared across laboratories in BCS, supporting over 600 users in more than 25 research labs. Here is a brief summary of the computing resources on OpenMind.
To get started on OpenMind, navigate to this page, then log in with an MIT Kerberos account.
Refer to the FAQ and Best Practice pages.
This tutorial introduces high-performance computing (HPC) and the OpenMind computing cluster. Attendees will learn HPC fundamentals, basic Linux commands, and practical skills for using the cluster, such as using the file systems, managing software, and submitting simple batch jobs. Hands-on examples will be demonstrated. The tutorial is designed for new users and beginners, but it is also an opportunity for experienced users to review HPC and OpenMind systematically.
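To give a flavor of the hands-on examples, a minimal Slurm batch script has the shape sketched below. The job name, resource amounts, and time limit are illustrative choices, not OpenMind defaults:

```shell
#!/bin/bash
# Minimal Slurm batch script. Lines beginning with #SBATCH are
# directives read by the scheduler (bash treats them as comments);
# the values here are illustrative, not OpenMind defaults.
#SBATCH --job-name=hello
#SBATCH --ntasks=1
#SBATCH --mem=1G
#SBATCH --time=00:05:00

MSG="Hello from OpenMind"
echo "$MSG"
```

Saved as, say, `hello.sh`, the script is submitted with `sbatch hello.sh`; by default Slurm writes the job's output to a file named `slurm-<jobid>.out` in the submission directory.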
In this tutorial, attendees will learn how to set up applications (such as MATLAB, Python, Anaconda, containers, Jupyter Notebook, R, and Julia) on OpenMind. First, Environment Modules will be introduced, followed by setting up Python with Anaconda. The remainder will focus on using Singularity containers, which support most applications on OpenMind. Topics include: exploring useful containers, running user applications in containers, and building user-customized containers. TensorFlow and PyTorch will be used as illustrative examples.
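The general shape of these steps is sketched below. The module name, environment name, and container image path are all hypothetical placeholders, not actual names on OpenMind; run `module avail` on the cluster to see what is really offered:

```shell
# List available software modules, then load one
# (the module name "anaconda" is a hypothetical example).
module avail
module load anaconda

# Create and activate a conda environment for a project
# (environment name and Python version are illustrative).
conda create -n myproject python=3.10
conda activate myproject

# Run a command inside a Singularity container image
# (the image path is a hypothetical placeholder).
singularity exec my-image.sif python train.py
```

These commands only make sense on the cluster itself; the tutorial walks through each step interactively.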
In this tutorial, best practices for using OpenMind will be covered. The major part will focus on aspects of the Slurm job scheduler, including partitions, QOS, preemption, priority, requesting CPU, memory, and GPU resources, job arrays, and job dependencies. Practical bash shell scripting skills will be introduced along with writing batch job scripts.
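A batch script exercising several of these features might look like the sketch below. The partition name and resource amounts are assumptions for illustration, and the script falls back to task 0 when run outside Slurm:

```shell
#!/bin/bash
# Sketch of a Slurm batch script combining a job array with explicit
# CPU, memory, and GPU requests. The partition name and resource
# amounts are illustrative assumptions, not OpenMind defaults.
#SBATCH --job-name=array-demo
#SBATCH --partition=normal
#SBATCH --array=0-3
#SBATCH --cpus-per-task=2
#SBATCH --mem=8G
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00

# Slurm sets SLURM_ARRAY_TASK_ID for each array element; defaulting
# to 0 lets the script also run locally with plain bash for testing.
TASK_ID=${SLURM_ARRAY_TASK_ID:-0}
echo "Running array task ${TASK_ID}"
```

A dependent job can then be chained to it, e.g. `sbatch --dependency=afterok:<jobid> next_step.sh`, so the second job starts only after the first completes successfully.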
MATLAB programs can be exceptionally fast if they are optimized and parallelized, and painfully slow if not. Many MATLAB programs run faster on high-performance computing (HPC) clusters than on regular laptops or desktops. This tutorial focuses on using MATLAB on the OpenMind computing cluster. Topics include: using the MATLAB GUI on OpenMind, running MATLAB programs in Slurm batch jobs, optimizing MATLAB code, the Parallel Computing Toolbox (implicit multithreading, parfor, spmd), and running MATLAB programs on multiple CPU cores or GPUs. The skills you learn in this tutorial should help you migrate your MATLAB work from a desktop or laptop to OpenMind, and thus solve bigger problems faster using MATLAB.
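As one piece of this workflow, a non-interactive MATLAB run can be wrapped in a Slurm batch script like the sketch below. The module name and script name are hypothetical; `matlab -batch` runs a script without starting the desktop GUI:

```shell
#!/bin/bash
# Sketch of submitting a MATLAB script as a Slurm batch job.
# The module name and MATLAB script name are hypothetical, and
# the resource amounts are illustrative.
#SBATCH --job-name=matlab-demo
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=02:00:00

module load mit/matlab
# Run my_analysis.m without the GUI; a parfor loop inside the
# script can then use the 4 requested CPU cores.
matlab -batch "my_analysis"
```

This script can only run on the cluster itself; the tutorial covers the resource-request and parallelization details step by step.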
Examples for submitting batch jobs are provided in this section.
Contact ORCD at orcd-help-openmind@mit.edu for more information.