People of ACM - Amanda Randles

May 9, 2018

How did you initially become interested in the application of high performance computing to biomedical simulation, and what was the most surprising thing you’ve learned in developing these simulations?

I’ve always been interested in applying computation to solve biomedical problems. As an undergraduate at Duke, I worked in a bioinformatics lab and gained experience with both computational and experimental research, which got me really excited about what we could do with computing. Joining the Blue Gene project at IBM after graduating was my first exposure to large-scale parallel computing. I spent time on a lot of different teams, but the overriding focus was how to get key codes to scale on the hardware. Through a few projects in collaboration with the Mayo Clinic in Rochester, Minnesota, I first saw the impact parallel computing could have on the research questions we could address. I realized I was passionate about applying large-scale computing to the study of biological phenomena and decided to go back to graduate school. I was fortunate to join Efthimios Kaxiras’s lab at Harvard; he is a world-renowned expert in multiscale modeling. In his lab, I learned how to write multiphysics codes from the ground up to leverage large-scale supercomputers, and when he started a project on multiscale models of blood flow, I was able to join that team.

How have recent advances in the capacity of high performance computers shaped the development of HARVEY?

We are trying to push the boundaries of what can be simulated with HARVEY and are always planning for the next systems. Lately, we have been investigating optimal uses of heterogeneous architectures, such as GPU-based systems, and ways to maximize our threading capabilities. Current supercomputers typically have limited memory per node, and capturing large regions of the circulatory system at subcellular resolution is extremely memory-intensive. We have focused a lot of our energy on minimizing the memory footprint and finding ways to initialize the code in a completely distributed, memory-light manner.
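The memory-light, distributed initialization described above can be illustrated with a toy sketch: rather than one rank reading the full domain and broadcasting it, every MPI rank computes the bounds of its own subdomain and allocates storage only for the lattice points it owns. The slab decomposition, grid sizes, and storage layout below are hypothetical illustrations and are not taken from HARVEY itself.

```c
/* Toy sketch of distributed, memory-light initialization (not HARVEY's actual
 * scheme): each rank derives its own slab of a global lattice and allocates
 * only local storage, so no single node ever holds the full domain. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Hypothetical global lattice dimensions (fluid grid points). */
    const long NX = 4096, NY = 512, NZ = 512;

    /* 1D slab decomposition along x: each rank owns a contiguous range. */
    long x_lo = rank * NX / nranks;
    long x_hi = (rank + 1) * NX / nranks;        /* exclusive upper bound */
    long local_points = (x_hi - x_lo) * NY * NZ;

    /* Allocate only the local slab (one double per point here; a fluid
     * solver would typically store several quantities per point). */
    double *local = malloc((size_t)local_points * sizeof *local);
    if (!local) { MPI_Abort(MPI_COMM_WORLD, 1); }

    for (long i = 0; i < local_points; ++i) local[i] = 0.0;  /* local init */

    if (rank == 0)
        printf("Each of %d ranks holds ~%.1f MB instead of %.1f GB total\n",
               nranks, local_points * 8.0 / 1e6, NX * NY * NZ * 8.0 / 1e9);

    free(local);
    MPI_Finalize();
    return 0;
}
```

The same idea extends to geometry-driven decompositions, where each rank reads or derives only the portion of the vascular geometry that falls inside its own subdomain.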

High performance computing simulations are often used for research in the natural sciences. What are the keys for effective interdisciplinary collaboration between computer and natural scientists?

Our work necessitates interdisciplinary collaboration. We work closely with visualization experts, practicing clinicians, mathematicians, physicists, biomedical engineers, and computer scientists. It is important to find a common language and to start from the goals of the collaboration. Our most successful collaborations are with researchers who are excited to learn about different techniques. When both sides are excited and open to learning about the other’s domain, we are not only able to achieve our goals but often take the project in new directions we couldn’t have thought of on our own. Openness and patience have played a key role in these interactions, and our closest collaborations have taught both sides a great deal.

For instance, working with Jane Leopold, an interventional cardiologist at Brigham and Women’s Hospital, we have spent time in the cath lab seeing how procedures are completed and what goes into her process for treatment planning. She has learned a lot about supercomputers and fluid dynamics simulation, and is now even a LaTeX convert. It’s been important to spend time discussing what we each bring to the table and finding common ground. For new collaborations, we spend time reading about the collaborators’ work and finding tangible overlaps before beginning the conversation. Visual tools that convey our models have been key to bringing new collaborators up to speed.

Can you tell us about what you’re working on now?

My lab is currently working on developing and scaling new fluid-structure-interaction capabilities within HARVEY. We are improving the load balancing of the code, optimizing performance on GPU-based architectures, and introducing new boundary conditions to improve numerical stability across a wide range of flow conditions. A large portion of my lab is also focused on applying HARVEY to the study of different diseases, from atherosclerosis to peripheral arterial disease to cancer. We are working closely with a range of clinicians to target critical questions about the underlying mechanisms driving disease localization and progression.

Amanda Randles is Assistant Professor of Biomedical Engineering at Duke University. Her research in biomedical simulation and high performance computing focuses on the development of new computational tools that provide insight into the localization and development of human diseases. She developed HARVEY, a high-resolution simulation of the human circulatory system at the cellular level. Her work could help physicians choose the best treatment options and improve patient outcomes, and may aid in research discoveries to treat cancer, cardiovascular disease, and other ailments.

Randles received a BA in Physics and Computer Science from Duke University, and a Master’s in Computer Science and a PhD in Applied Physics from Harvard University. She was recently named the recipient of the 2017 ACM Grace Murray Hopper Award, which is given to an outstanding young computing professional on the basis of a single recent major technical or service contribution. Earlier in her career, she was a finalist for the ACM Gordon Bell Prize, and she received the ACM-IEEE CS George Michael Memorial High Performance Computing Fellowship in 2010 and 2012 and an Honorable Mention in 2009.