Compiling HOOMD-blue

Software Prerequisites

HOOMD-blue requires a number of prerequisite software packages and libraries.

Required:

  • Python >= 2.6
  • numpy >= 1.7
  • boost >= 1.39.0
  • CMake >= 2.6.2
  • C++ Compiler (tested with gcc, clang, intel)

Optional:

  • NVIDIA CUDA Toolkit >= 5.0
  • MPI (tested with OpenMPI, MVAPICH, impi)

Useful developer tools:

  • Git >= 1.7.0
  • Doxygen >= 1.8.5

See Best practices for a discussion of which kind of MPI library is best for your situation. See Compiling with MPI enabled for instructions on building an MPI-enabled hoomd.
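As a quick sanity check before configuring, you can confirm the versions of the main prerequisites from the shell (a minimal sketch; the exact command names, e.g. python vs. python2, depend on your system):

$ python --version
$ python -c "import numpy; print(numpy.__version__)"
$ cmake --version
$ g++ --version
$ nvcc --version    # only relevant if building with CUDA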

Loading software prerequisites on clusters

Most cluster administrators provide versions of python, numpy, mpi, and cuda as modules. Some provide boost, and a few provide boost with a working boost::python. Here are the module commands necessary to load prerequisites at national supercomputers. Each code block also specifies a recommended install location ${SOFTWARE_ROOT} where hoomd can be loaded on the compute nodes with minimal file system impact. On many clusters, administrators will block your account without warning if you launch hoomd from $HOME.

OLCF Titan:

module unload PrgEnv-pgi
module load PrgEnv-gnu
module load cmake/2.8.11.2
module load git
module load cudatoolkit
module load python/3.4.3
module load python_numpy/1.9.2
module load boost/1.60.0
# need gcc first on the search path
module unload gcc/4.9.0
module load gcc/4.9.0
export SOFTWARE_ROOT=${PROJWORK}/${your_project}/software/titan

For more information, see: https://www.olcf.ornl.gov/support/system-user-guides/titan-user-guide/

OLCF Eos:

module unload PrgEnv-intel
module load PrgEnv-gnu
module load cmake
module load git
module load python/3.4.3
module load python_numpy/1.9.2
module load boost/1.60.0
export SOFTWARE_ROOT=${PROJWORK}/${your_project}/software/eos
# need gcc first on the search path
module unload gcc/4.9.0
module load gcc/4.9.0
export CC="cc -dynamic"
export CXX="CC -dynamic"

For more information, see: https://www.olcf.ornl.gov/support/system-user-guides/eos-user-guide/

XSEDE SDSC Comet:

module purge
module load python
module unload intel
module load intel/2015.2.164
module load mvapich2_ib
module load gnutools
module load scipy
module load cmake
module load cuda/7.0
# module load boost/1.55.0
export CC=`which icc`
export CXX=`which icpc`
export SOFTWARE_ROOT=/oasis/projects/nsf/${your_project}/${USER}/software

Comet's boost module exists and has boost::python, but it is non-functional. You need to build boost: see Building boost on clusters.

Note: The python module on Comet provides both python2 and python3. You need to force hoomd to build against python2: cmake $HOME/devel/hoomd -DPYTHON_EXECUTABLE=`which python2`

Note: CUDA libraries are only available on GPU nodes on Comet. To run on the CPU-only nodes, you must build hoomd with ENABLE_CUDA=off.
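For example, a CPU-only configuration for Comet might look like the following (a sketch; the source and install paths are placeholders):

cmake $HOME/devel/hoomd -DENABLE_CUDA=off -DCMAKE_INSTALL_PREFIX=${SOFTWARE_ROOT}/hoomd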

Note: Make sure to set CC and CXX. Without these, cmake will use /usr/bin/gcc and compilation will fail.

For more information, see: http://www.sdsc.edu/support/user_guides/comet.html

XSEDE SDSC Stampede:

module unload mvapich
module load intel/15.0.2
module load impi
module load cuda/7.0
module load cmake
module load git
module load python/2.7.9
export CC=`which icc`
export CXX=`which icpc`
export SOFTWARE_ROOT=${WORK}/software

Stampede's boost module does not include boost::python; you need to build boost: see Building boost on clusters.

Note: Stampede admins highly recommend building with the intel compiler and MPI libraries. They attribute random crashes to the mvapich library and GNU compiler.

Note: CUDA libraries are only available on GPU nodes on Stampede. To run on the CPU-only nodes, you must build hoomd with ENABLE_CUDA=off.

Note: Make sure to set CC and CXX. Without these, cmake will use /usr/bin/gcc and compilation will fail.

For more information, see: https://portal.tacc.utexas.edu/user-guides/stampede

Building boost on clusters

Not all clusters have a functioning boost::python library. On these systems, you will need to build your own boost library. Download and unpack the latest version of the boost source code.

Then run the following in the shell. The variables are set for Comet; you will need to change the python version and root directory to match your cluster.

PREFIX="${SOFTWARE_ROOT}/boost"
PY_VER="2.7"
PYTHON="/opt/python/bin/python2.7"
PYTHON_ROOT="/opt/python"
./bootstrap.sh \
--prefix="${PREFIX}" \
--with-python="${PYTHON}" \
--with-python-root="${PYTHON_ROOT} : ${PYTHON_ROOT}/include/python${PY_VER}m ${PYTHON_ROOT}/include/python${PY_VER}"
./b2 -q \
--ignore-site-config \
variant=release \
architecture=x86 \
debug-symbols=off \
threading=multi \
runtime-link=shared \
link=shared \
toolset=gcc \
python="${PY_VER}" \
--layout=system \
-j20 \
install

Then set BOOST_ROOT=${SOFTWARE_ROOT}/boost before running cmake.
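For example (a sketch, assuming the ${SOFTWARE_ROOT} chosen for your cluster above):

export BOOST_ROOT=${SOFTWARE_ROOT}/boost
cmake $HOME/devel/hoomd -DCMAKE_INSTALL_PREFIX=${SOFTWARE_ROOT}/hoomd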

Installing prerequisites on a workstation

On your workstation, use your system's package manager to install all of the prerequisite libraries. Some linux distributions separate -dev and normal packages; you need the development packages to build hoomd. Also, many linux distributions ship both python2 and python3, but only build boost against python2. On such systems, you need to force hoomd to build against python2. Check the hoomd-users mailing list for posts by users who share their hoomd build instructions on a variety of distributions.
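As a rough example, on an Ubuntu-like distribution the prerequisites might be installed with something like the following (treat the package names as assumptions to check against your distribution's package list):

$ sudo apt-get install git cmake doxygen \
      python python-dev python-numpy \
      libboost-all-dev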


Compile HOOMD-blue

Clone the git repository to get the source:

$ git clone https://bitbucket.org/glotzer/hoomd-blue

By default, the maint branch will be checked out. This branch includes all bug fixes since the last stable release.
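If you want to build a different line of development instead, you can list the available branches from inside the clone and switch in the usual way (a sketch; branch names other than maint, such as master, are assumptions about the repository layout):

$ git branch -a
$ git checkout master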

Compile:

$ cd hoomd-blue
$ mkdir build
$ cd build
$ cmake ../ -DCMAKE_INSTALL_PREFIX=${SOFTWARE_ROOT}/hoomd
$ make -j20

Note: for development, you do not need to run make install. make builds a functioning hoomd in the build directory: launch python-runner/hoomd.
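For example, from the build directory (a sketch; my_script.py is a placeholder for your own hoomd script):

$ ./python-runner/hoomd my_script.py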

Run:

$ make test

to test your build.

To install a stable version for general use, run make install and add ${SOFTWARE_ROOT}/hoomd/bin to your PATH.
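For example (a sketch, assuming a bash-like shell and the install prefix passed to cmake above):

$ make install
$ export PATH=${SOFTWARE_ROOT}/hoomd/bin:$PATH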


Compiling with MPI enabled

System provided MPI

If your cluster administrator provides an installation of MPI, you need to figure out if it is in your $PATH. If the command

$ which mpicc
/usr/bin/mpicc

succeeds, you're all set. HOOMD-blue should detect your MPI compiler automatically.

If this is not the case, set the MPI_HOME environment variable to the location of the MPI installation.

$ echo ${MPI_HOME}
/home/software/rhel5/openmpi-1.4.2/gcc

Build hoomd

Configure and build HOOMD-blue as normal (see Compile HOOMD-blue). During the cmake step, MPI should be detected and enabled. For cuda-aware MPI, additionally supply the ENABLE_MPI_CUDA=ON option to cmake.
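As a sketch, a CUDA-aware MPI build might be configured like this (MPI_HOME is only needed when mpicc is not already on your $PATH; the path shown is a placeholder):

$ export MPI_HOME=/home/software/rhel5/openmpi-1.4.2/gcc
$ cmake ../ -DENABLE_MPI_CUDA=ON -DCMAKE_INSTALL_PREFIX=${SOFTWARE_ROOT}/hoomd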


Build options

Here is a list of all the build options that can be changed by CMake. To change these settings, cd to your build directory and run

$ ccmake .

After changing an option, press c to configure, then press g to generate. The makefile/IDE project is now updated with the newly selected options. Alternatively, you can set these parameters on the initial cmake invocation: cmake $HOME/devel/hoomd -DENABLE_CUDA=off

Options that specify library versions only take effect on a clean invocation of cmake. To set these options, first remove CMakeCache.txt, then run cmake and specify them on the command line (see the example after this list).

  • PYTHON_EXECUTABLE - Specify python to build against. Example: /usr/bin/python2
  • BOOST_ROOT - Specify root directory to search for boost. Example: /sw/rhel7/boost-1.60.0
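A minimal sketch of a clean reconfiguration, using placeholder paths for your own python and boost installations:

$ rm CMakeCache.txt
$ cmake ../ -DPYTHON_EXECUTABLE=/usr/bin/python2 -DBOOST_ROOT=/sw/rhel7/boost-1.60.0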

Other option changes take effect at any time. These can be set from within ccmake or on the command line.

  • BUILD_TESTING - Enables the compilation of unit tests
  • CMAKE_BUILD_TYPE - sets the build type (case sensitive)
    • Debug - Compiles debug information into the library and executables. Enables asserts to check for programming mistakes. HOOMD-blue will run very slow if compiled in Debug mode, but problems are easier to identify.
    • RelWithDebInfo - Compiles with optimizations and debug symbols. Useful for profiling benchmarks.
    • Release - All compiler optimizations are enabled and asserts are removed. Recommended for production builds: required for any benchmarking.
  • ENABLE_CUDA - Enable compiling of the GPU accelerated computations using CUDA. Defaults on if the CUDA toolkit is found. Defaults off if the CUDA toolkit is not found.
  • ENABLE_DOXYGEN - enables the generation of user and developer documentation (Defaults off)
  • SINGLE_PRECISION - Controls precision
    • When set to ON, all calculations are performed in single precision.
    • When set to OFF, all calculations are performed in double precision.
  • ENABLE_MPI - Enable multi-processor/GPU simulations using MPI
    • When set to ON (default if any MPI library is found automatically by CMake), multi-GPU simulations are supported
    • When set to OFF, HOOMD always runs in single-GPU mode
  • ENABLE_MPI_CUDA - Enable CUDA-aware MPI library support
    • Requires a MPI library with CUDA support to be installed
    • When set to ON (default if a CUDA-aware MPI library is detected), HOOMD-blue will make use of the capability of the MPI library to accelerate CUDA-buffer transfers
    • When set to OFF, standard MPI calls will be used
    • Warning: Manually setting this feature to ON when the MPI library does not support CUDA may result in a crash of HOOMD-blue

These options control CUDA compilation:

  • CUDA_ARCH_LIST - A semicolon-separated list of GPU architectures to compile in. Portions of HOOMD are optimized for specific hardware architectures, but those optimizations are only activated when they are compiled in. By default, all known architectures supported by the installed CUDA toolkit are activated in the list. There is no disadvantage to doing so, except perhaps a slightly larger executable size and longer compile times. The CUDA programming guide contains a list of which GPUs are which compute version in Appendix A. Note: nvcc does not treat sm_21 differently from sm_20; 21 should not be added to CUDA_ARCH_LIST. See the example after this list.
  • NVCC_FLAGS - Allows additional flags to be passed to nvcc.
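For example, to compile only for a specific set of architectures (a sketch; the numbers are placeholders for the compute capabilities of the GPUs you actually target):

$ cmake ../ -DCUDA_ARCH_LIST="20;30;35;52"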

Building a plugin for HOOMD-blue

There are several methods that can be used to build code that interfaces with hoomd.

Method 1: Write a full-fledged plugin in python only

Some plugins can be implemented fully in python, providing high-level code for configuring or running simulations.

In order to use such a plugin, one must first:

  1. Compile hoomd normally
  2. make install hoomd to a desired location
  3. Add hoomd_install_location/bin to your PATH as usual

Create a directory to contain the python module for the plugin:

cd hoomd_install_location/lib/hoomd/python-module/hoomd_plugins
mkdir plugin_name
cd plugin_name
touch __init__.py

You should develop your plugin in a directory outside hoomd_install_location and keep it under revision control. You would not want to lose the code you've written when hoomd is uninstalled! In this case, you can just copy the module to the hoomd_plugins directory to install it.

cp -R plugin_name hoomd_install_location/lib/hoomd/python-module/hoomd_plugins

Once the plugin is written and installed, it can be used in a hoomd script like so:

from hoomd_script import *
from hoomd_plugins import plugin_name
init.whatever(...)
plugin_name.whatever(...)

Method 2: Write a full-fledged plugin with C++ code included

For high performance, execution on the GPU, or other reasons, part of a plugin can be written in C++. To write a plugin that incorporates such code, make install hoomd as normal. Then copy the directory hoomd_install_location/share/hoomd/plugin_template_cpp to a new working space and modify it to implement your plugin. See the README file in that directory for full documentation. Examples of new pair and bond potentials are available in hoomd_install_location/share/hoomd/plugin_template_evaluators_ext.
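As a sketch, starting a new C++ plugin from the template might look like this (my_plugin is a placeholder name and ~/devel is an arbitrary working location):

$ cp -R hoomd_install_location/share/hoomd/plugin_template_cpp ~/devel/my_plugin
$ cd ~/devel/my_plugin
$ cat README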