Webinars

Tuesday, April 30, 2019 - 10AM EDT

Presented by Doug Kothe, Director of the Exascale Computing Project at Oak Ridge National Laboratory, and moderated by John West, Director of Strategic Initiatives at Texas Advanced Computing Center and ACM SIGHPC Vice-Chair.

The US Department of Energy (DOE) Exascale Computing Project (ECP) was initiated in 2016 as a formal DOE project and extends through 2022. The ECP is designing the software infrastructure to enable the next generation of supercomputers, systems capable of more than 10¹⁸ operations per second, to effectively and efficiently run applications that address currently intractable problems of strategic importance. The ECP is creating and deploying an expanded and vertically integrated software stack on DOE HPC exascale and pre-exascale systems, thereby defining the enduring US exascale ecosystem.

The project is a joint effort of two DOE programs: the Office of Science Advanced Scientific Computing Research Program and the National Nuclear Security Administration Advanced Simulation and Computing Program. ECP's RD&D activities, which encompass the development of applications, software technologies, and hardware technologies and architectures, are carried out by over 100 small teams of scientists and engineers from the DOE national laboratories, universities, and industry.

Presented by ACM Fellow Jack Dongarra, Professor at the University of Tennessee and moderated by John West, Texas Advanced Computing Center, SIGHPC Vice-Chair.

In this talk we examine how high performance computing has changed over the last 10 years and look toward future trends. These changes have had, and will continue to have, a major impact on our numerical scientific software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed, and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.
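For readers unfamiliar with what "management of memory hierarchies" looks like in practice in numerical software, here is a minimal cache-blocking sketch (an illustration only, not code from the talk; the tile size BLOCK is an assumed, machine-dependent tuning parameter):

```c++
#include <algorithm>
#include <cstddef>

// Tile size chosen so that three BLOCK x BLOCK tiles fit in cache;
// in real libraries this is tuned per machine (an assumed value here).
constexpr std::size_t BLOCK = 64;

// The naive triple loop, rewritten with blocking so each tile of
// A, B, and C is reused from cache instead of re-fetched from DRAM.
void matmul_blocked(const double* A, const double* B, double* C, std::size_t n) {
    for (std::size_t ii = 0; ii < n; ii += BLOCK)
        for (std::size_t kk = 0; kk < n; kk += BLOCK)
            for (std::size_t jj = 0; jj < n; jj += BLOCK)
                for (std::size_t i = ii; i < std::min(ii + BLOCK, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + BLOCK, n); ++k) {
                        const double a = A[i * n + k];
                        for (std::size_t j = jj; j < std::min(jj + BLOCK, n); ++j)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```

Loop blocking of this kind is a classic compile-time technique; the run-time side of the problem, such as adapting tile sizes or communication schedules to the machine, is part of what the talk identifies as getting harder at scale.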

Presented by Brian E. Mitchell (GE Global Research) and moderated by Wilfred Pinfold

This webinar will discuss the use of high performance computing (HPC) in the design of aircraft jet engines and gas turbines used to generate electrical power. HPC is the critical enabler in this process, but applying HPC effectively in an industrial design setting requires an integrated hardware/software solution and a clear understanding of how the value outweighs the costs. The webinar will share GE’s perspective on the successful deployment and utilization of HPC, offer examples of HPC’s impact on GE products, and discuss future trends.

What Attendees can Expect to Learn:

  • How GE uses HPC to improve products that benefit millions of people every day

  • How we approached the challenge of creating a balanced advanced computing environment from both hardware and software perspectives

  • What the rapid evolution of HPC hardware due to the “push to Exascale” means for the physics-based software of the future

Also see the offline responses to audience questions!


Presented by Salman Habib (Argonne National Laboratory and the University of Chicago) and Rajeev Thakur (Argonne National Laboratory, the University of Chicago, and Northwestern University)

This webinar provides an introduction to basic issues in performance and scalability for large-scale applications on parallel supercomputers. Because future supercomputing architectures, leading up to exascale, will follow diverse paths, it is important to articulate principles for designing applications that can run at extreme scale and high performance on a variety of systems. It is unlikely that, in the near term, automated solutions can provide more than incremental improvements, although this remains an important research direction. Given this situation, it is important, when developing new applications, to be cognizant of future hardware possibilities and to follow a design approach that allows for rapid deployment and optimization, including the use of multiple algorithms targeted to different architectures. This approach is presented using a concrete example, the Hardware/Hybrid Accelerated Cosmology Code (HACC) framework. HACC runs efficiently on all current supercomputer architectures and scales to the largest of today’s systems.
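The design approach described, multiple algorithm variants kept behind one common interface so the right one can be selected per architecture, can be sketched roughly as follows (all names here are hypothetical illustrations, not HACC's actual code):

```c++
#include <functional>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical short-range force kernel signature shared by all variants.
using ForceKernel = std::function<void(const float* pos, float* acc, int n)>;

void force_cpu_tree(const float*, float*, int) { /* tree-code variant tuned for CPUs */ }
void force_gpu_p3m(const float*, float*, int)  { /* particle-mesh variant tuned for GPUs */ }

// One registry of architecture-specific variants behind a common interface,
// so the driver code stays identical while the kernel is swapped per machine.
ForceKernel select_kernel(const std::string& arch) {
    static const std::map<std::string, ForceKernel> variants = {
        {"cpu", force_cpu_tree},
        {"gpu", force_gpu_p3m},
    };
    auto it = variants.find(arch);
    if (it == variants.end()) throw std::runtime_error("unknown architecture: " + arch);
    return it->second;
}
```

The point of this structure is the one the webinar emphasizes: the bulk of the application is written once, and only the small, performance-critical kernels are reimplemented as hardware diverges.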

Presented by Mark Ebersole, NVIDIA; moderated by Jeff Hollingsworth, SIGHPC Vice-Chair

The past decade has seen a shift from serial to parallel computing. No longer the exotic domain of supercomputing, parallel hardware is ubiquitous and software must follow: a serial, sequential program will use less than 1% of a modern PC's computational horsepower and less than 4% of a high-end smartphone's. GPUs have proven themselves as world-class, massively parallel accelerators, from supercomputers to gaming consoles to smartphones, and CUDA is the platform best designed to access this power. In this webinar, we'll cover the many different ways of accelerating your code on GPUs: from GPU-accelerated libraries, to directive-based programming with OpenACC, and finally to writing CUDA directly in languages such as C/C++, Fortran, or Python. In addition to covering the current state of massively parallel programming with GPUs, we will briefly touch on future challenges and potential research projects. Finally, you will be provided with a number of resources for trying CUDA yourself and pointers on where to go to learn more.
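As a taste of what "writing CUDA directly" means, the canonical first CUDA C++ example adds two vectors with one thread per element (a minimal sketch for illustration; this is not material from the webinar):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one element; the grid of blocks covers the whole array.
__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory keeps the example short; production code often
    // manages host/device copies explicitly with cudaMemcpy.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // ~4096 blocks of 256 threads
    cudaDeviceSynchronize();                      // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);                  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with nvcc, this is the full hardware-exposed end of the spectrum the webinar describes; libraries and OpenACC directives achieve the same offload with progressively less code.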

Presented by Bill Gropp, University of Illinois Urbana-Champaign; moderated by John West, SIGHPC Exec Committee

This webinar will provide an introduction to parallel execution models, focusing on how programmers think about writing programs. Does the way that programmers or algorithm developers think about how a parallel computer works influence the approaches they take? Can the choice of programming approach lead to inefficient solutions? Do we need new ways to program parallel systems? This session will explore common approaches for developing parallel programs and how they can limit scalability and reliability, whether the programs are for single-chip parallelism or the world's largest parallel computers. It will also cover the importance of an execution model, its relationship to programming models and programming systems, and why we need to consider new execution models for the parallel systems of the future.
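One concrete way the choice of execution model shows up in code: in a bulk-synchronous MPI program, every step ends in a global synchronization, so the slowest rank paces all the others. The sketch below illustrates that pattern (an example of the general issue, not the speaker's code):

```c++
#include <mpi.h>

// Classic bulk-synchronous step: local compute, then global exchange.
// MPI_Allreduce implies a global synchronization point every iteration,
// so one slow or faulty rank delays every rank -- a property of the
// execution model, not of any single line of code.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    double local = 1.0, global = 0.0;
    for (int step = 0; step < 100; ++step) {
        local *= 1.0001;                                   // local compute phase
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);            // global sync phase
    }
    MPI_Finalize();
    return 0;
}
```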

Also see the offline responses to audience questions.

SIGHPC participates in ACM's Learning Webinar series by identifying speakers and topics of interest to SIGHPC members, then organizing presenters and moderators. If you have suggestions for topics/speakers that would find a broad audience among practitioners and/or researchers, please contact communications@sighpc.org.

You can watch our entire archive of past ACM Learning Webinars any time you want at webinar.acm.org/.