Research — High-performance computing: algorithms and machines

Calculations of the strange quark content of the nucleon and explorations of new strong dynamics on the lattice share a common feature: both are extremely computationally demanding, even by the high standards of lattice field theory. The work discussed above was only possible thanks to steady advances in numerical algorithms complementing improvements in computing hardware, and such work will only become more important in the future.

A basic ingredient of lattice calculations is the repeated solution of a linear system, Ax=b, where A is a large sparse matrix known as the Dirac operator. By "large", I mean that this matrix formally has millions of rows and millions of columns; even though it is sparse (most entries are zero), writing out the full matrix would be far too much data to store in a computer's memory. Instead, we use iterative techniques such as the conjugate gradient algorithm, which solve for x using only the ability to multiply A by a vector, without ever forming the full matrix explicitly. It is this iterative solve that must be performed repeatedly to evaluate the quark-line disconnected diagrams mentioned above.
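To make the solver's role concrete, here is a minimal sketch of the conjugate gradient algorithm in Python (with numpy and scipy.sparse as stand-ins; these names are illustrative and not taken from any lattice code). It shows the key point: the matrix enters only through matrix-vector products. One caveat: the Dirac operator itself is not positive definite, so in practice CG is applied to a related positive-definite system (the normal equations); the sketch below simply uses a well-conditioned symmetric test matrix.

    import numpy as np
    import scipy.sparse as sp

    def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
        # Solve A x = b for symmetric positive-definite A, touching A
        # only through matrix-vector products; A is never formed densely.
        x = np.zeros_like(b)
        r = b - A @ x            # residual
        p = r.copy()             # search direction
        rr = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rr / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rr_new = r @ r
            if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
                break            # residual small enough: converged
            p = r + (rr_new / rr) * p
            rr = rr_new
        return x

    # Illustration: a sparse, well-conditioned tridiagonal matrix
    # stands in for the Dirac operator here.
    n = 1_000_000
    A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x = conjugate_gradient(A, b)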

In recent years, calculations involving disconnected diagrams have benefited greatly from the development and application of multigrid algorithms, which dramatically decrease the computational cost of each solve. Multigrid algorithms represent the physical system on a succession of coarser grids with smaller systems to solve, adaptively determining the best representation of the system at the coarser levels. Applied to the studies reported in arXiv:1012.0562, multigrid algorithms reduced costs by up to an order of magnitude, which made possible a new direction of research: disconnected-diagram calculations that involve the light (up and down) quarks in addition to the strange quark.
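The adaptive multigrid solvers used in lattice field theory build their coarse grids algebraically from near-null vectors of the Dirac operator, which is beyond a short example; the sketch below is only the classical geometric version of the idea, in Python for a 1D Poisson problem with simple injection and linear interpolation. It shows the basic structure of a V-cycle: smooth, restrict the residual to a coarser grid, solve there recursively, interpolate the correction back, and smooth again.

    import numpy as np

    def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
        # Weighted-Jacobi relaxation for -u'' = f on a uniform grid;
        # damps the high-frequency error the coarse grid cannot represent.
        for _ in range(sweeps):
            u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def v_cycle(u, f, h):
        u = smooth(u, f, h)
        if len(u) <= 3:                        # coarsest level reached
            return u
        r_c = residual(u, f, h)[::2].copy()    # restrict residual (injection)
        e_c = v_cycle(np.zeros_like(r_c), r_c, 2.0 * h)
        e = np.zeros_like(u)                   # interpolate correction back
        e[::2] = e_c
        e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])
        return smooth(u + e, f, h)

    # Usage: each V-cycle cuts the error by a roughly constant factor,
    # independent of the grid size.
    n = 2**10 + 1
    xs = np.linspace(0.0, 1.0, n)
    f = np.sin(np.pi * xs)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, xs[1] - xs[0])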

On the hardware side, graphics processing units (GPUs) have produced comparable performance improvements in certain calculations. GPUs can sustain enormous rates of computation, but memory and bandwidth constraints make it difficult to apply GPU computing to many common problems. These sorts of difficulties will likely become more severe as high-performance computing continues to evolve in coming years. Cheap and rapidly improving GPUs can be an ideal testbed for developing software that will get the most out of future computers.
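To illustrate the bandwidth point with a deliberately simple sketch (this assumes the CuPy library on an NVIDIA GPU, and is not drawn from any production lattice code): an operation like a*x + y performs only two arithmetic operations per element while streaming three vectors through memory, so, like the sparse matrix-vector products inside a Dirac solve, its speed is set by memory bandwidth rather than by the GPU's raw arithmetic rate.

    import cupy as cp

    n = 10_000_000
    x = cp.random.standard_normal(n, dtype=cp.float32)
    y = cp.random.standard_normal(n, dtype=cp.float32)

    # 2 flops per element, but 12 bytes moved per element: this kernel
    # is limited by memory bandwidth, not by the GPU's compute rate.
    z = 2.0 * x + y
    cp.cuda.Stream.null.synchronize()   # wait for the GPU kernel to finish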

In addition to working on projects (discussed above) that apply GPU computing and multigrid algorithms to reduce computational burdens, I have also carried out research on the development of other improved algorithms [arXiv:0906.2813], though this work is a bit too specialized to discuss in detail here. A final interesting aspect of this line of research is that it can provide an ideal entry point into the field. We often use simple models to design and test improved techniques, including two-dimensional systems such as graphene [arXiv:0902.0045, arXiv:1101.5131]. These smaller-scale computational projects can be more tractable for beginners, while still providing significant benefits to the field as a whole.
