
Meet the Researcher, Sam Raymond: Combining AI and HPC Simulations for Biomedical Research

‘Meet the Researcher’ is a series in which we spotlight different researchers in academia who are using GPUs to accelerate their work. This week, we spotlight Sam Raymond, a postdoctoral researcher at Stanford University.

Dr. Raymond received his PhD from the Massachusetts Institute of Technology in 2020 in the field of Computational Science and Engineering. His research focuses on the application of computational mechanics and deep learning in the areas of geomechanics, fracture mechanics, and biomedical engineering.

For his novel approach to combining numerical simulation techniques with deep learning, Sam was awarded the MIT Mikio Shoji Award for Innovation in Information Technology in 2018. Sam was recently featured in MIT News for a gift awarded by MathWorks to further his work on cell manipulation for organ growth. During the COVID-19 crisis, working outside his regular day job, Sam helped lead a team at Stanford University to build a rapid-response ventilator. More updates on his work can be found on his research website.

What are your research areas of focus?

My work focuses on the intersection between HPC numerical simulation techniques used to model physical systems and the application of AI/machine learning to augment and enhance those simulations. Combining these two domains allows for truly novel solutions and approaches to problems in biomedicine, geomechanics, and materials science more generally. These problems typically involve strong collaborations with industry, practitioners, and other fields of research, each bringing a unique area of domain expertise and perspective. Focusing on this nexus of fields and approaches has led to many exciting projects and paves the way for new approaches to old and new problems.

What motivated you to pursue this research area of focus?

I had always been fascinated by simulations; being able to create a mini digital universe seemed like something I wanted to build and play with. Finding ways to apply this to real-world problems and to help push technology further made it something I wanted to build my career around. When deep learning entered the computational/HPC space a few years ago, I wanted to see how it could help accelerate solutions and innovation. Being on the cutting edge, and seeing the intersections of cutting edges, has been very fulfilling as a scientist, particularly in the biomedical area.

Tell us about your current research projects

My current research is focused on applications primarily in the biomedical space. We are pushing to improve upon work we completed recently (https://www.nature.com/articles/s41598-020-65453-8) to build a more flexible and generalized tool for biomedical chip design, which should bring us closer to being able to print organ tissue in the lab. I'm also focused at the moment on brain and concussion work: understanding how we can use data from sensors in helmets and mouthguards to detect and understand concussions and traumatic brain injuries more readily. In addition, the numerical modeling of real-world physics, be it geophysical, manufacturing, or biomedical (https://link.springer.com/article/10.1007%2Fs40571-020-00327-4, https://www.sciencedirect.com/science/article/abs/pii/S0927024819301527), is also well underway as my work looks at this new generation of HPC computational modeling techniques.

What problems or challenges does your research address?

In the field of biomedical research, positioning biological cells is a precarious task due to the small length scales, the required precision, and the delicate nature of the material. In the case of organ tissue growth, the position of the cells relative to one another is important to encourage the right growth pattern. However, while models exist to predict the placement of these cells under different loadings, the reverse problem, asking for the loading/boundary conditions that produce a desired positioning, is not straightforward. This problem of having powerful simulators that only solve a problem in a 'forward' manner is not specific to biomedical applications; it is a common engineering challenge. In the last few years, we have seen deep learning enter the physical sciences, allowing us to use the data we can generate to infer new insights and giving us a new tool to apply to some of the problems of simulation. In particular, my work looks at the data that simulators typically discard and recycles it to build faster inference tools. This means that we can leverage the mass of computing power of HPC to empower AI to learn more about the world around us.
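
As a rough illustration of this data-recycling idea, the sketch below keeps the (boundary condition, field) pairs that each forward run produces instead of discarding them. The solver call, array shapes, and parameter count here are hypothetical placeholders, not the actual simulator used in this research.

```python
# A minimal sketch of recycling forward-simulation data into a training set.
# simulate_acoustic_field is a hypothetical stand-in for an expensive HPC solver.
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_acoustic_field(boundary_conditions: np.ndarray) -> np.ndarray:
    """Hypothetical forward solver: boundary conditions -> acoustic field.

    Returns a random 64x64 pressure field so the sketch is runnable;
    a real solver would compute the field from the boundary conditions.
    """
    return rng.standard_normal((64, 64))

# Each forward run yields a (boundary condition, field) pair. Instead of
# discarding these after the forward study, save them as training data
# for an inverse model.
n_runs = 1000
boundary_conditions = rng.uniform(-1.0, 1.0, size=(n_runs, 8))  # 8 assumed transducer amplitudes
fields = np.stack([simulate_acoustic_field(bc) for bc in boundary_conditions])

np.savez("inverse_training_set.npz", fields=fields, boundary_conditions=boundary_conditions)
```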

What is the (expected) impact of your work on the field/community/world?

Simulations of the ultrasonic fields used to position these cells are currently used to understand how the cells can be placed. However, this does not address the fundamental issue of designing a device that places the particles in desired locations. Instead, this work uses those simulations and their results, which come in pairs of boundary conditions and acoustic fields. In the deep learning component of this workflow, during the training of the neural network, the targets are the boundary conditions and the inputs are the acoustic fields. Essentially, this moves backward through the simulation so that designers can start from a desired answer (an acoustic field) and recover the required setup (the boundary conditions). Combining numerical simulation and deep learning in this way gives us a new design capability that lets us tailor our biochip designs to position cells as we need them. Right now, the combination of numerical simulation and deep learning is right at the cutting edge of what's possible. We don't know for sure exactly what this combination will lead to, but some of this work has already yielded results that far outpace existing methods of design and engineering. As this field develops over the next decade, I'm certain we will see this hybrid approach applied to nearly every area of computationally intensive research.
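
A minimal PyTorch sketch of that inverse training setup, assuming the dataset saved in the earlier snippet; the network size, shapes, and file name are illustrative assumptions, not the published model:

```python
# Inverse mapping as described above: input = acoustic field, target = boundary conditions.
import numpy as np
import torch
import torch.nn as nn

data = np.load("inverse_training_set.npz")  # pairs saved from the forward runs
fields = torch.tensor(data["fields"], dtype=torch.float32)                # (N, 64, 64)
targets = torch.tensor(data["boundary_conditions"], dtype=torch.float32)  # (N, 8)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Simple fully connected network: flattened field -> boundary conditions.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 256),
    nn.ReLU(),
    nn.Linear(256, 8),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(fields, targets), batch_size=64, shuffle=True
)

for epoch in range(10):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)  # learn the simulation "in reverse"
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

At inference time, a designer would feed in a desired acoustic field and read off the predicted boundary conditions, the reverse of the usual simulation direction.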

How have you used NVIDIA technology either in your current or previous research?

For many of the numerical modeling tools and deep learning applications that my work has required, NVIDIA GPU computing, especially CUDA-powered HPC tools, has been essential. In particular, the NVIDIA NGC containers have become a very valuable resource. The MATLAB container, combined with the ability to run in the cloud through NGC, has allowed for flexible and mobile research, which in today's climate in particular is invaluable.

Did you achieve any breakthroughs in that research or any interesting results using NVIDIA technology?

Absolutely. Without the power and speed of the NVIDIA GPUs that drove our compute-intensive work, we would never have been able to train our models, and infer from them the results that we've found, in a timely manner. Pushing forward on this research will require even more compute power, and I can think of no better tool to help take this work to the next level.

What’s next for your research?

Many of the projects that I am currently focused on are proofs of concept; we are pushing the envelope to see what's possible. The next step will be to scale this work up and out and test it in the real world. Much larger datasets with richer information will be required, and this will necessitate pushing beyond traditional local computing; NVIDIA's GPU Cloud infrastructure will likely be a key player in this development.

Any advice for new researchers?

Be open to collaborating with those far outside of your expertise; some of the best ideas I've heard and been a part of have come from interacting with people far outside my circles of expertise. A wider appreciation for your work in others' context, and the reverse, almost always helps build creativity and passion. Seek out the hidden connections.
