Computational Server Laboratory
The Laboratory was established in 2019 to provide state-of-the-art High-Performance Computing (HPC) for neuroscience and brain-mapping research.
The brain is arguably the most complex computational system known. In recent years, computational neuroscientists have devoted great effort to uncovering brain mechanisms through two approaches: large-scale neural simulations and the analysis of massive neural datasets. Both depend on the quality of the available software and hardware.
HPC provides techniques that help users manage hardware efficiently to answer their research questions, and it significantly improves computing performance. HPC is now an indispensable part of computational research in every field; however, we are only at the beginning of the road toward employing HPC in neuroscience and brain mapping.
In our lab, we focus on developing HPC for neuroscience and brain-mapping research. The powerful hardware and up-to-date software of our computational servers enable parallel processing and storage of large neural datasets. Our servers run the latest Linux operating systems and host image- and signal-processing software as well as neuro-simulation packages.
We strive to be a good host for users' data and processes in a secure environment. Users can also connect to their accounts remotely over the internet through a user-friendly interface.
- Server maintenance
- Updating operating systems and applications
- Consulting and supporting users
- Keeping the accounts safe
- Controlling access limits
- Providing for new user requirements
- Facilitating remote access of users to their accounts
- Setting up new processing pipelines
- Running related educational and promotional programs
- Giving computational services to other laboratories
Available software covers the following categories: Structural MRI, Diffusion MRI, Functional MRI, Signal Analysis, Neuro-simulation, Psychophysics, Data Analysis, IDEs, and Viewers.
National Brain Mapping Laboratory GPU Services
GPUs leverage the Single Instruction, Multiple Data (SIMD) architecture in Flynn's taxonomy. This type of hardware can perform floating-point arithmetic at very high rates, a feature well suited to the matrix operations at the heart of machine learning. NVIDIA, for example, builds GPU accelerators that can do far more than graphics; such hardware is now called a GPGPU (General-Purpose computing on Graphics Processing Units). NVIDIA also built a vendor-specific programming platform called CUDA, which works only on its own GPUs, and it builds and supports optimized mathematics and machine-learning libraries that are used in all sorts of software, such as TensorFlow.
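To make the SIMD idea concrete, here is a minimal pure-Python sketch of the matrix operation that GPUs accelerate: every output entry of a matrix product is an independent multiply-accumulate, so thousands of GPU cores can compute the entries simultaneously. This is an illustrative example only (real workloads use CUDA-backed libraries such as TensorFlow), and the function name is our own.

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows.

    Each (i, j) output entry below depends only on row i of `a` and
    column j of `b` -- no entry depends on any other entry. This
    independence is exactly the data parallelism a SIMD-style GPU
    exploits by assigning entries (or tiles of entries) to its cores.
    """
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

On a CPU this triple loop runs largely sequentially; on a GPGPU the same arithmetic is spread across many cores executing the same instruction on different data, which is why dense linear algebra sees such large speed-ups.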
Last year, NBML purchased an NVIDIA RTX 8000 and added it to our infrastructure. We can now provide GPU services to our researchers.