George Michael HPC Fellowships
ACM/IEEE-CS George Michael Memorial HPC Fellowships
Endowed in memory of George Michael, one of the founding fathers of the SC Conference series, the ACM/IEEE-CS George Michael Memorial Fellowships honor exceptional PhD students throughout the world whose research focus areas are in high performance computing, networking, storage, and large-scale data analysis. ACM, the IEEE Computer Society, and the SC Conference support this award.
Fellowship winners are selected each year based on overall potential for research excellence, the degree to which technical interests align with those of the HPC community, academic progress to date, recommendations by their advisor and others, and a demonstration of current and anticipated use of HPC resources. The Fellowship includes a $5,000 honorarium, plus travel and registration to receive the award at the annual SC conference.
For application information, contact Students@sighpc.org
2019 Recipients
Milinda Fernando and Staci A. Smith Named Recipients of 2019 ACM-IEEE CS George Michael Memorial HPC Fellowships
ACM announced today that Milinda Shayamal Fernando of the University of Utah and Staci A. Smith of the University of Arizona are the recipients of the 2019 ACM-IEEE CS George Michael Memorial HPC Fellowships.
Fernando is recognized for his work on high performance algorithms for applications in relativity, geosciences and computational fluid dynamics (CFD).
Smith is recognized for her work developing a novel dynamic rerouting algorithm on fat-tree interconnects. The Fellowships are jointly presented by ACM and the IEEE Computer Society.
Milinda Fernando
New discoveries in science and engineering are partially driven by simulations on high performance computers, especially when physical experiments would be infeasible or impossible. Fernando’s research is focused on developing algorithms and computational codes that enable the effective use of modern supercomputers by scientists working in many disciplines.
His key objectives include making computer simulations on high performance computers easy to use (through symbolic interfaces and automatic code generation), portable (so they can run across different computer architectures), high-performing (making efficient use of computing resources), and scalable (so they can solve larger problems on next-generation machines).
Fernando’s work has enabled improved applications in computational relativity and gravitational wave (GW) astronomy. When two supermassive black holes merge, they bring along surrounding clouds of stars, gas and dark matter. Modeling these events requires powerful computational tools that account for all the physical effects of such a merger. Although algorithms and codes for simulating black hole mergers have been developed, they were limited to cases in which the masses of the two black holes are comparable. Fernando developed algorithms and code for mergers of black holes, or neutron stars, with vastly different masses. These computational simulations help scientists understand the early universe as well as what is happening at the hearts of galaxies.
Staci Smith
A general problem in high performance computing occurs when multiple distinct jobs running on supercomputers send messages at the same time, and these messages interfere with each other. This inter-job interference can significantly degrade a computer’s performance.
Smith’s first research paper in this area, “Mitigating Inter-Job Interference Using Adaptive Flow-Aware Routing,” received a Best Student Paper nomination at SC18, the premier supercomputing conference. The paper had two goals: to explore the causes of network interference between jobs in order to model that interference, and to develop a mitigation strategy to alleviate it.
As a result of this work, Smith recently developed a new routing algorithm for fat-tree interconnects called Adaptive Flow-Aware Routing (AFAR), which improves execution times by up to 46% compared with default routing algorithms. As part of her ongoing PhD research, she continues to develop algorithms to improve the performance and efficiency of HPC workloads.
2018 Recipients
Linda Gesenhues
Linda Gesenhues is being recognized for her work on finite element simulation of turbidity currents with an emphasis on non-Newtonian fluids.
Gesenhues’ work on turbidity currents may be a useful tool for scientists studying underwater volcanoes, earthquakes or other geological phenomena occurring on the sea floor. Fluids, including water, become turbid when the concentration of particles, such as sediment, rises to a particular threshold. Because of their density, turbid fluids move differently than non-turbid fluids, frequently cascading downward under gravity. The presence of turbidity currents can indicate that mud and sand have been loosened by collapsing slopes, earthquakes, or other phenomena. For these reasons, scientists regularly place turbidity sensors on the sea floor to monitor geologic activity.
A challenge of understanding turbidity currents is cataloging the range of possible movements a fluid may make, given the variables in its surrounding environment. For this reason, employing supercomputers, which can process trillions of possible permutations, is an effective approach. The objective of Gesenhues’ PhD project is to obtain a model for numerical simulation of turbidity currents that can predict the characteristics of such flows using non-Newtonian fluid behavior. Unlike Newtonian fluids, whose viscosity is constant, non-Newtonian fluids resist deformation in a way that depends on how strongly they are sheared; for example, shampoo (a non-Newtonian fluid) flows and loses its shape more slowly than water (a Newtonian fluid).
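As a rough illustration of what “non-Newtonian” means mathematically (a generic textbook model, not necessarily the constitutive law used in Gesenhues’ solver), a power-law fluid relates shear stress $\tau$ to shear rate $\dot{\gamma}$ by

$$\tau = K\,\dot{\gamma}^{\,n},$$

where $K$ is the consistency index and $n$ the flow behavior index; $n = 1$ recovers a Newtonian fluid with constant viscosity $K$, while $n < 1$ describes shear-thinning fluids such as shampoo.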
Thus far, Gesenhues has developed, implemented and verified a solver (a numerical model) for 2D simulations of turbidity currents. Recently, she extended the 2D solver to a 3D model and carried out first tests on small 3D benchmark applications, including a column collapse.
Markus Höhnerbach
Markus Höhnerbach is being recognized for his work on portable optimizations of complex molecular dynamics codes.
Höhnerbach’s research focuses on many-body potentials in molecular dynamics (MD) simulations. MD simulations are an indispensable research tool in computational chemistry, biology and materials science. In an MD simulation, individual atoms are moved time step by time step according to the forces derived from a so-called potential, the mathematical law that governs the interactions between atoms.
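To make the time-stepping idea concrete, the sketch below shows one velocity-Verlet step driven by a simple Lennard-Jones pair potential. This is a minimal illustration under assumed parameters, far simpler than the many-body potentials (such as Tersoff and AIREBO) and the vectorized, performance-portable kernels that Höhnerbach’s work targets.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces on every atom; O(N^2), for illustration only."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r2 = np.dot(r_vec, r_vec)
            inv_r6 = (sigma * sigma / r2) ** 3
            # -dU/dr of the Lennard-Jones potential, projected onto r_vec
            f = 24.0 * eps * (2.0 * inv_r6 ** 2 - inv_r6) / r2 * r_vec
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet_step(pos, vel, forces, dt, mass=1.0):
    """Advance all atoms by one time step using the velocity-Verlet scheme."""
    vel_half = vel + 0.5 * dt * forces / mass          # half-kick
    pos_new = pos + dt * vel_half                      # drift
    forces_new = lj_forces(pos_new)                    # recompute forces
    vel_new = vel_half + 0.5 * dt * forces_new / mass  # second half-kick
    return pos_new, vel_new, forces_new

# Tiny example: three atoms in 3D, initially at rest
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
vel = np.zeros_like(pos)
forces = lj_forces(pos)
for _ in range(100):
    pos, vel, forces = velocity_verlet_step(pos, vel, forces, dt=0.001)
```

In production MD codes the force computation dominates the run time, which is why optimizing and vectorizing the potential kernels, as Höhnerbach does, has such a large impact.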
The general idea of Höhnerbach’s PhD project is to develop methods and tools that make the implementation of MD simulations simple and correct by design while generating fast code for a multitude of platforms. For example, in his paper, “The Vectorization of the Tersoff Multi-Body Potential: An Exercise in Performance Portability,” he demonstrated the performance of a class of MD simulations across a wide variety of platforms and processors.
Recently, Höhnerbach has been working with MD simulations for the adaptive intermolecular reactive bond order (AIREBO) potential, which is frequently used to study carbon nanotubes. Many believe carbon nanotubes hold great potential for the future of computer architecture. Höhnerbach wrote code for the AIREBO potential that achieves 3x to 4x speedups in realistic large-scale runs on current supercomputers.
2017 Recipients
Shaden Smith
Shaden Smith is being recognized for his work on efficient and parallel large-scale sparse tensor factorization for machine learning applications.
Smith’s research is in the general area of parallel and high performance computing, with a special focus on developing algorithms for sparse tensor factorization. Sparse tensor factorization facilitates the analysis of unstructured and high-dimensional data.
Smith has made several fundamental contributions that have already advanced the state of the art in sparse tensor factorization algorithms. For example, he developed serial and parallel algorithms for Canonical Polyadic Decomposition (CPD) that are over five times faster than existing open source and commercial approaches. He also developed algorithms for Tucker decompositions that are up to 21 times faster and require 28 times less memory than existing algorithms. Smith’s algorithms operate efficiently on systems ranging from a small number of multi-core/many-core processors to systems containing tens of thousands of cores.
Yang You
Yang You is being recognized for his work on designing accurate, fast, and scalable machine learning algorithms on distributed systems.
You’s research interests include scalable algorithms, parallel computing, distributed systems and machine learning. As computers spend more and more time and energy transferring data (i.e., communicating), algorithms that reduce communication within systems are becoming increasingly essential. In well-received research papers, You has made several fundamental contributions that reduce communication between levels of a memory hierarchy or between processors over a network.
In his most recent work, “Scaling Deep Learning on GPU and Knights Landing Clusters,” You’s goal is to speed up the training of neural networks so that networks that are relatively slow to train can be redesigned for high performance clusters. This approach reduced the percentage of communication from 87% to 14% and resulted in a five-fold increase in speed.
2016 Recipients
Johann Rudi
Johann Rudi’s recent research has focused on modeling, analysis and development of algorithms for studying the earth’s mantle convection by means of large-scale simulations on high-performance computers. Mantle convection is the fundamental physical process within the earth’s interior responsible for the thermal and geological evolution of the planet, including plate tectonics.
Rudi, along with colleagues from Switzerland and the United States, presented a paper on mantle convection that was awarded the ACM Gordon Bell Prize at SC15, the International Conference for High Performance Computing, Networking, Storage and Analysis. Rudi and his team developed new computational methods that are capable of processing difficult problems based on partial differential equations, such as mantle convection, with optimal algorithmic performance at extreme scales.
Axel Huebl
Axel Huebl is a computational physicist who specializes in next-generation, laser-plasma-based particle accelerators. Huebl and others reinvented the particle-in-cell algorithm to simulate plasma physics with 3D simulations of unprecedented detail on leadership-scale many-core supercomputers such as Titan (ORNL).
Through this line of research, Huebl also derives models to understand and predict promising regimes for applications such as radiation therapy of cancer with laser-driven ion beams. Working closely with experimental scientists, he and his colleagues have shown through simulations that plasma-based particle accelerators may enable numerous scientific advances in industrial and medical applications. Huebl was part of a team that was a Gordon Bell Prize finalist at SC13.
2015 Winners
Maciej Besta
Maciej Besta, a PhD student in the Scalable Parallel Computing Lab led by Professor Torsten Hoefler at ETH Zurich, won recognition for his project, “Accelerating Large-Scale Distributed Graph Computations”. During his first year as a PhD student, Besta successfully completed several projects spanning various HPC subdomains, which secured him the first Google European Doctoral Fellowship in Parallel Computing.
Besta’s research interests focus on accelerating large-scale distributed graph processing in both traditional scientific domains and emerging big data computations. Besta and his advisor also collaborate with researchers from the Georgia Institute of Technology on designing a novel on-chip topology for future massively parallel many-core architectures that improves the performance of the network traffic patterns present in graph processing workloads.
Dhairya Malhotra
Dhairya Malhotra, a PhD student at the University of Texas at Austin actively working in the field of high performance computing, won recognition for his project, “Scalable Algorithms for Evaluating Volume Potentials”. As an undergraduate intern, Malhotra was part of the group that won the 2010 ACM Gordon Bell Prize for “Petascale Direct Numerical Simulation of Blood Flow on 200K Cores and Heterogeneous Architectures,” for which he implemented performance-critical GPU code using CUDA.
Malhotra’s research focuses on developing fast, scalable solvers for elliptic PDEs such as the Poisson, Stokes and Helmholtz equations. A significant contribution of his research has been the development of the pvfmm (Parallel Volume Fast Multipole Method) library for efficiently evaluating volume potentials.
2014 Winners
Harshitha Menon
Harshitha Menon is a PhD candidate at the University of Illinois at Urbana-Champaign, advised by Prof. Laxmikant V. Kale.
Her research focuses on developing scalable load balancing algorithms and adaptive runtime techniques to improve the performance of large-scale dynamic applications. In addition, Harshitha works on optimizing the performance of N-body codes such as the cosmology simulation application ChaNGa, a collaborative research project between UIUC and the University of Washington.
Alexander Breuer
Alexander Breuer received his diploma in mathematics in 2011 from Technische Universität München (TUM) and is a fourth-year doctoral candidate at the Chair of Scientific Computing at TUM, advised by Prof. Dr. Michael Bader. In 2012, Alexander and his colleagues established a close collaboration between leading experts in computational science and seismology. The declared goal of this international collaboration is one of the grand challenges in seismic modeling: "Multi-physics ground motion simulation for earthquake-engineering, including the complete dynamic rupture process and 3D seismic wave propagation with frequencies resolved beyond 5 Hz".
Alexander’s research covers optimizations across the entire simulation pipeline, including node-level performance leveraging SIMD paradigms, hybrid and heterogeneous parallelization up to machine size, and co-design of numerics and large-scale optimizations. In 2014, Alexander and his collaborators were awarded the PRACE ISC Award and received an ACM Gordon Bell Prize nomination for their outstanding end-to-end performance reengineering of the SeisSol software package.
2013 Winners
Jonathan Lifflander
Jonathan Lifflander is a fifth-year computer science PhD candidate at the University of Illinois at Urbana-Champaign, advised by Laxmikant V. Kale in the Parallel Programming Laboratory.
He researches scalable parallel algorithms in the context of dynamic behavior that leads to highly unstructured mappings: load imbalances in irregular applications, hard system faults, scheduling policies such as work stealing, and energy and power constraints. These algorithms have been demonstrated to be effective on modern supercomputers, reaching beyond 100K cores. Lifflander is the first author of full-length papers in the proceedings of PLDI'13, PPoPP'13, HPDC'12, and IPDPS'12.
Edgar Solomonik
Edgar Solomonik received his BS in 2011 from the University of Illinois at Urbana-Champaign, where his work was honored with the prestigious Computing Research Association (CRA) Outstanding Undergraduate Research Award for 2010. Solomonik is now a PhD candidate working on parallel numerical algorithms at the University of California, Berkeley, where he is advised by James Demmel.
His research focuses on developing communication-avoiding algorithms that scale on high-performance parallel computers. As a graduate student, Solomonik developed 2.5D algorithms for numerical linear algebra, which asymptotically lower communication at the cost of limited data replication. He also engineered a distributed-memory tensor contraction library that provides key numerical abstractions to the field of high-accuracy electronic structure calculations.
2012 Winners
Ryan Gabrys
Ryan Gabrys received his B.S. in Computer Science/Math from the University of Illinois at Urbana-Champaign in 2005. He received a Master of Engineering degree from UCSD with a focus on signals and systems. He was awarded the SMART scholarship in 2010 and is currently pursuing a PhD in electrical engineering at UCLA.
Ryan's research interests include information theory and coding schemes with applications to storage and underwater acoustics. His work in storage has focused primarily on error-correction codes for Flash memory. Using experimental data collected from real Flash memory devices, these codes were shown to prolong the lifetime of the underlying device by more than 1.5x. His work in underwater acoustics has the potential to double the transmission rate of current modems used by naval submarines.
Amanda Peters Randles
Amanda Peters Randles graduated from Duke University in 2005 with a double major in Computer Science and Physics. While at Duke, she worked on a range of projects, both fundamental and applied, including near-infrared spectroscopy, experimental studies of the Rb/E2F pathway, and bioinformatics programming. Following her time at Duke, she spent three years at IBM as part of the Blue Gene development team, where she also founded the IBM New Inventors Connection. In 2010, she received a Master's Degree in Computer Science from Harvard University, where she is pursuing a PhD in Applied Physics with a secondary major in Computational Science in Professor Efthimios Kaxiras's group on his Multiscale Hemodynamics project.
The focus of Amanda's thesis research is a large-scale model coupling the fluid dynamics of blood plasma with the movement of red blood cells, which she hopes will elucidate trends and aid prognosis of cardiovascular disease based on high-resolution patient-specific data.
2011 Winners
Ignacio Laguna
Ignacio Laguna was born in Panama and received the BSc degree from the University of Panama in 2002. He is a PhD student in the School of Electrical and Computer Engineering at Purdue University, West Lafayette, Indiana under the supervision of Professor Saurabh Bagchi. He received the MSc degree from Purdue in 2008.
Ignacio's research interests are fault detection and diagnosis in large-scale distributed applications. In his PhD dissertation, he proposes techniques to isolate faults that affect large-scale HPC applications, such as those that arise from software bugs, hardware errors and unexpected runtime conditions. He has developed AutomaDeD, a tool that detects the abnormal tasks and code regions that are correlated with the manifestation of a fault. AutomaDeD is the first fault-detection framework that uses task similarity to isolate faults in a scalable manner, and it has been demonstrated on the largest supercomputers with over a hundred thousand processes. His research goals are to design and evaluate techniques for the next generation of large-scale parallel debugging and fault-detection tools.
Xinyu Que
Xinyu Que is a PhD candidate in the Parallel Architecture and System Laboratory (PASL) in the Department of Computer Science & Software Engineering at Auburn University. He earned a master's degree in Computer Science from the University of Connecticut in 2009.
Xinyu's research interests include Global Address Space programming models, cloud computing, MapReduce and Hadoop, which span two different areas. The first is scalable runtime systems for Partitioned Global Address Space (PGAS) programming models on large-scale computing platforms, which seeks to address scalability challenges for scientific applications running on contemporary petascale supercomputers, such as Jaguar at ORNL, and future exascale systems. The second is cloud computing, which aims to optimize Hadoop to provide a high-performance and energy-efficient MapReduce programming model for large-scale data analytics.
2010 Winners
Aparna Chandramowlishwaran
Aparna Chandramowlishwaran is a PhD candidate in the School of Computational Science and Engineering at Georgia Institute of Technology and is advised by Prof. Richard Vuduc. She received her B.E. in Computer Science and Engineering from Anna University, India in 2007 and M.S. in Computational Science and Engineering from Georgia Tech in 2010.
Aparna's main research area is high-performance computing. Her thesis seeks to answer fundamental questions on the design, analysis, and tuning of computational science and engineering algorithms in light of algorithm-architecture co-design. Aparna is also interested in novel parallel programming models, and has demonstrated the ability of Intel's Concurrent Collections to express asynchronous-parallel algorithms. She has developed one of the fastest implementations and analyses of the Fast Multipole Method, an N-body computation, and was part of the team that won the ACM Gordon Bell Prize in 2010. Aparna received the Best Paper award (software track) at IPDPS 2010. She is a member of ACM, IEEE, and SIAM.
Amanda Peters Randles
Amanda Peters Randles graduated from Duke University in 2005 with a double major in Computer Science and Physics. While at Duke, she worked on a range of projects, both fundamental and applied, including near-infrared spectroscopy, experimental studies of the Rb/E2F pathway, and bioinformatics programming. Following her time at Duke, she spent three years at IBM as part of the Blue Gene development team, where she also founded the IBM New Inventors Connection. In 2010, she received a Master's Degree in Computer Science from Harvard University, where she is pursuing a PhD in Applied Physics with a secondary major in Computational Science in Professor Efthimios Kaxiras's group on his Multiscale Hemodynamics project.
The focus of Amanda's thesis research is a large-scale model coupling the fluid dynamics of blood plasma with the movement of red blood cells, which she hopes will elucidate trends and aid prognosis of cardiovascular disease based on high-resolution patient-specific data.
Earlier Recipients
2009 Fellows
Nathan Tallent (Rice University)
Abhinav Bhatele (University of Illinois at Urbana-Champaign)
2008 Fellows
Yaniv Erlich (Cold Spring Harbor Laboratory)
Douglas J Mason (Harvard University)
2007 Fellows
Yong Chen (Illinois Institute of Technology)
Mark Hoemmen (University of California at Berkeley)
Arpith Jacob (Washington University in St. Louis)
Chao Wang (North Carolina State University)