Research — Algorithm development
Image: From arXiv:1403.2761, correlated predictions for the topological susceptibility and (inverse) auto-correlations of lattice QCD calculations, resulting from novel maximum-likelihood analyses of transitions between topological sectors. Related publications: arXiv:1403.2761, arXiv:0906.2813, arXiv:0902.0045, Bachelor thesis
The discussion of this project is being updated during October 2017.
My research is possible only thanks to steady advances in numerical algorithms that complement improvements in computing hardware, and such algorithmic work will become even more important in the future.
A basic ingredient of lattice calculations is the repeated solution of a linear system, Ax = b, where A is a large sparse matrix known as the Dirac operator. By "large", I mean that this matrix formally has millions of rows and millions of columns. Even though the matrix is sparse (most entries are zero), it is far too much data to store explicitly in a computer's memory. Instead, we use iterative techniques such as the conjugate gradient algorithm, which solve for x using only the ability to apply A to a vector, without ever writing down the full matrix. It is this iterative calculation that must be performed repeatedly to evaluate the quark-line disconnected diagrams mentioned above.
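As a minimal sketch of this matrix-free idea, the conjugate gradient algorithm below solves Ax = b given only a function that applies A to a vector. A toy one-dimensional Laplacian stands in for the Dirac operator, which is far more complicated in practice; the function names here are illustrative, not from any lattice code.

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite operator,
    given only apply_A(v) -> A @ v (matrix-free: A is never stored)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)      # residual
    p = r.copy()            # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy stand-in for the sparse operator: a 1D Laplacian applied
# as a stencil, without ever forming the matrix.
n = 64
def laplacian(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.random.default_rng(0).normal(size=n)
x = conjugate_gradient(laplacian, b)
print(np.linalg.norm(laplacian(x) - b))  # residual norm, near zero
```

The key point is that `apply_A` is a stencil operation on neighboring sites, which is exactly how the Dirac operator acts on a lattice field.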
In recent years, calculations involving disconnected diagrams have benefited greatly from the development and application of multigrid algorithms that dramatically decrease the computational cost of each solve. Multigrid algorithms represent the physical system on a succession of coarser grids with smaller systems to solve, adaptively determining the best representation of the system on the coarser levels. Applied to the studies reported in arXiv:1012.0562, multigrid algorithms can reduce costs by up to an order of magnitude, which made possible a new direction of research: performing disconnected-diagram calculations that involve the light (up and down) quarks in addition to strange quarks.
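To illustrate the basic multigrid pattern, here is a minimal two-grid cycle for a 1D Poisson problem: smooth on the fine grid, restrict the residual to a coarse grid, solve the smaller coarse system for the error, interpolate the correction back, and smooth again. This is only a sketch of the textbook method; the adaptive multigrid algorithms used for the Dirac operator construct their coarse levels far more cleverly.

```python
import numpy as np

def apply_A(v, h):
    """1D Poisson operator -u'' on a uniform interior grid (Dirichlet BCs)."""
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out / h**2

def jacobi_smooth(v, b, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted Jacobi: cheaply damps high-frequency error components."""
    for _ in range(sweeps):
        v = v + omega * (h**2 / 2.0) * (b - apply_A(v, h))
    return v

def restrict(r):
    """Full-weighting restriction of a fine-grid residual to the coarse grid."""
    return 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(e_c, n_fine):
    """Linear interpolation of a coarse-grid correction to the fine grid."""
    e = np.zeros(n_fine)
    e[1::2] = e_c
    e[2:-1:2] = 0.5 * (e_c[:-1] + e_c[1:])
    e[0] = 0.5 * e_c[0]
    e[-1] = 0.5 * e_c[-1]
    return e

def coarse_solve(b_c, h_c):
    """Direct solve of the (much smaller) coarse-grid system."""
    m = len(b_c)
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h_c**2
    return np.linalg.solve(A, b_c)

def two_grid_cycle(v, b, h):
    v = jacobi_smooth(v, b, h)            # pre-smooth
    r = b - apply_A(v, h)                 # fine-grid residual
    e_c = coarse_solve(restrict(r), 2 * h)  # coarse error equation
    v = v + prolong(e_c, len(v))          # coarse-grid correction
    return jacobi_smooth(v, b, h)         # post-smooth

# Demo: each cycle reduces the residual by a large, h-independent factor.
n = 63                      # 2**6 - 1 interior points; coarse grid has 31
h = 1.0 / (n + 1)
b = np.random.default_rng(1).normal(size=n)
v = np.zeros(n)
for _ in range(10):
    v = two_grid_cycle(v, b, h)
print(np.linalg.norm(b - apply_A(v, h)))  # residual norm, near zero
```

The payoff is that the convergence rate per cycle does not degrade as the grid is refined, which is what makes multigrid so much cheaper than plain iterative solvers on large systems.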
On the hardware side, graphics processing units (GPUs) have produced comparable performance improvements in certain calculations. GPUs can sustain enormous rates of computation, but memory and bandwidth constraints make it difficult to apply GPU computing to many common problems. These sorts of difficulties will likely become more severe as high-performance computing continues to evolve in coming years. Cheap and rapidly improving GPUs can be an ideal testbed for developing software that will get the most out of future computers.
In addition to working on projects that apply GPU computing and multigrid algorithms to reduce computational burdens, I have also researched other improved algorithms [arXiv:0906.2813], though this work is a bit too specialized to discuss in detail here. A final interesting aspect of this line of research is that it can provide an entry point into the field: we often use simple models to design and test improved techniques, and these smaller-scale computational projects can be more tractable for beginners while still providing significant benefits to the field as a whole.
Last modified 1 October 2017