Learn more about Eddie, our state-of-the-art research compute cluster.

The ECDF Linux Compute Cluster (Eddie)

Eddie Mark 3 is the third iteration of the University's compute cluster and is available to all University of Edinburgh researchers. It consists of over 15,000 compute cores, with up to 2 TB of memory available on a single compute node. Research groups can take advantage of priority compute and guaranteed throughput for their projects by requesting an allocation in the priority compute tier.

More Information on the Eddie Compute Cluster
More Information on Priority Compute

EDDIE Privacy Statement (78.07 KB / PDF)
HPC Storage Privacy Statement (78.17 KB / PDF)

Why Use Eddie?

Eddie can cut the time taken to compute a problem by running software in parallel, or by breaking the problem into a number of more easily addressed sub-tasks, each of which can run on a separate CPU in parallel. Examples of the resulting speed-ups include processing brain scans from a schizophrenia study 400 times faster (28 hours compared with more than a year, 469 days), and a protein structure prediction study involving 810,000 simulations and 1.5 CPU years of computation that completed in less than 2 days.

Researchers can also get help in understanding their storage and compute requirements and how these can be most effectively provided, as well as support in developing proposals that include these requirements. If you'd like to explore how Eddie can transform your research, please request a consultation with our team by contacting the IS Helpline.

Contacting IS Helpline

GPGPU Acceleration

GPGPU stands for General-Purpose computation on Graphics Processing Units. We currently have 7 compute nodes equipped with a total of 28 NVIDIA A100 devices. These support the CUDA toolkit developed by NVIDIA.
GPU Hardware description and Cuda Quick Start page

Symmetric Multiprocessing (SMP) and Large Memory Systems

Large-memory jobs and shared-memory programs using methods such as OpenMP (Open Multi-Processing) can make use of a range of memory offerings per node. We currently have compute nodes ranging from 384 GB to 2 TB of RAM.

More Information about our Memory Provisioning

This article was published on 2024-10-08