
Webinars

SIGHPC participates in ACM's Learning Webinar series by identifying speakers and topics of interest to SIGHPC members and by organizing presenters and moderators. If you have suggestions for topics or speakers that would find a broad audience among practitioners and/or researchers, please contact communications@sighpc.org.



Current Trends in High Performance Computing and Challenges for the Future (Feb 7, 2017)


Presented by ACM Fellow Jack Dongarra, Professor at the University of Tennessee; moderated by John West, Director of Strategic Initiatives at the Texas Advanced Computing Center and SIGHPC Vice-Chair.

In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our numerical scientific software. A new generation of software libraries and algorithms are needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.
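
The abstract stays at the level of trends, but one classic instance of the memory-hierarchy management it mentions is loop blocking (tiling), a compile-time technique used throughout dense linear algebra libraries. The C++ sketch below is illustrative only and not from the talk; the tile size BS is an assumed, tunable parameter.

    #include <algorithm>
    #include <vector>

    // Illustrative sketch (not from the talk): a cache-blocked (tiled)
    // matrix multiply, C += A * B, with n x n matrices stored row-major.
    // Tiling keeps a small working set resident in fast memory, one of the
    // compile-time techniques for managing memory hierarchies that the
    // abstract mentions. BS is an assumed, tunable tile size.
    constexpr int BS = 64;

    void blocked_gemm(int n, const std::vector<double>& A,
                      const std::vector<double>& B, std::vector<double>& C)
    {
        for (int ii = 0; ii < n; ii += BS)
            for (int kk = 0; kk < n; kk += BS)
                for (int jj = 0; jj < n; jj += BS)
                    // Multiply one tile; std::min handles the matrix edges.
                    for (int i = ii; i < std::min(ii + BS, n); ++i)
                        for (int k = kk; k < std::min(kk + BS, n); ++k) {
                            const double a = A[i * n + k];
                            for (int j = jj; j < std::min(jj + BS, n); ++j)
                                C[i * n + j] += a * B[k * n + j];
                        }
    }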

You can view our entire archive of past ACM Learning Webinars on demand at webinar.acm.org/.

High Performance Computing in the Design of Jet Engines and Gas Turbines (Oct 2015)

Presented by Brian E. Mitchell (GE Global Research) and moderated by Wilfred Pinfold

Mitchell sample slide
This webinar will discuss the use of high performance computing (HPC) in the design of aircraft jet engines and the gas turbines used to generate electrical power. HPC is the critical enabler in this process, but applying HPC effectively in an industrial design setting requires an integrated hardware/software solution and a clear understanding of how the value outweighs the costs. This webinar will share GE's perspective on the successful deployment and utilization of HPC, offer examples of HPC's impact on GE products, and discuss future trends.


What Attendees can Expect to Learn:

  • How GE uses HPC to improve products that benefit millions of people every day
  • How we approached the challenge of creating a balanced advanced computing environment from both hardware and software perspectives
  • What the rapid evolution of HPC hardware due to the “push to Exascale” means for the physics-based software of the future

Also see the offline responses to audience questions!


Extreme Scaling and Performance across Diverse Architectures  (Mar 2015)

Presented by Salman Habib (Argonne National Laboratory and the University of Chicago) and Rajeev Thakur (Argonne National Laboratory, the University of Chicago, and Northwestern University)

This webinar provides an introduction to basic issues in performance and scalability for large-scale applications on parallel supercomputers. Because future supercomputing architectures, leading up to exascale, will follow diverse paths, it is important to articulate principles for designing applications that can run at extreme scale and high performance on a variety of systems. It is unlikely that, in the near term, automated solutions can provide more than incremental improvements, although this remains an important research direction. Given this situation, it is important, when developing new applications, to be cognizant of future hardware possibilities and to follow a design approach that allows for rapid deployment and optimization, including the use of multiple algorithms targeted to different architectures. This approach is presented using a concrete example, the Hardware/Hybrid Accelerated Cosmology Code (HACC) framework. HACC runs efficiently on all current supercomputer architectures and scales to the largest of today's systems.
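
As a purely hypothetical illustration of the design approach the abstract recommends, keeping multiple variants of the same operation targeted at different architectures, the sketch below selects between a CUDA kernel and a plain CPU loop at compile time. This is not HACC code, and USE_CUDA is an assumed build flag.

    #include <cstdio>
    #include <vector>

    // Hypothetical sketch (not HACC code): keep two backends for the same
    // operation (a CUDA kernel for GPU-accelerated nodes and a plain loop
    // for CPU-only systems) and select one at compile time. USE_CUDA is an
    // assumed build flag, not something defined by HACC or the webinar.
    #ifdef USE_CUDA
    #include <cuda_runtime.h>

    __global__ void scale_kernel(double* x, double a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    // GPU backend: copy data to the device, scale it there, copy it back.
    void scale(std::vector<double>& x, double a) {
        const int n = static_cast<int>(x.size());
        const size_t bytes = n * sizeof(double);
        double* d = nullptr;
        cudaMalloc(&d, bytes);
        cudaMemcpy(d, x.data(), bytes, cudaMemcpyHostToDevice);
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        scale_kernel<<<blocks, threads>>>(d, a, n);
        cudaMemcpy(x.data(), d, bytes, cudaMemcpyDeviceToHost);
        cudaFree(d);
    }
    #else
    // CPU backend: the same operation as a simple loop.
    void scale(std::vector<double>& x, double a) {
        for (double& v : x) v *= a;
    }
    #endif

    int main() {
        std::vector<double> x(1000, 1.0);
        scale(x, 2.0);
        std::printf("x[0] = %f\n", x[0]);  // prints 2.0 with either backend
        return 0;
    }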


Achieve Massively Parallel Acceleration with GPUs (Feb 2014)

Presented by Mark Ebersole, NVIDIA; moderated by Jeff Hollingsworth, SIGHPC Vice-Chair

Ebersole sample slide
The past decade has seen a shift from serial to parallel computing. No longer the exotic domain of supercomputing, parallel hardware is ubiquitous and software must follow: a serial, sequential program will use less than 1% of a modern PC's computational horsepower and less than 4% of a high-end smartphone's. GPUs have proven themselves as world-class, massively parallel accelerators, from supercomputers to gaming consoles to smartphones, and CUDA is the platform best designed to access this power. In this webinar, we'll cover the many different ways of accelerating your code on GPUs: from GPU-accelerated libraries, to directive-based programming with OpenACC, to writing CUDA directly in languages such as C/C++, Fortran, or Python. In addition to covering the current state of massively parallel programming with GPUs, we will briefly touch on future challenges and potential research projects. Finally, you will be given a number of resources for trying CUDA yourself and learning more.
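
For a concrete taste of the "writing CUDA directly" option mentioned in the abstract, here is a minimal vector-addition example in CUDA C/C++. It is illustrative only and not taken from the webinar materials.

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // Minimal illustration (not from the webinar) of writing CUDA directly:
    // each GPU thread adds one element of the two input vectors.
    __global__ void vector_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

        // Allocate device buffers and copy the inputs to the GPU.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha.data(), bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb.data(), bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vector_add<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back and spot-check it.
        cudaMemcpy(hc.data(), dc, bytes, cudaMemcpyDeviceToHost);
        std::printf("c[0] = %f (expected 3.0)\n", hc[0]);

        cudaFree(da);
        cudaFree(db);
        cudaFree(dc);
        return 0;
    }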


Changing How Programmers Think about Parallel Programming (July 2013)

Presented by Bill Gropp, University of Illinois Urbana-Champaign; moderated by John West, SIGHPC Exec Committee

Gropp sample slide
This webinar will provide an introduction to parallel execution models, focusing on how programmers think about writing programs. Does the way programmers or algorithm developers think about how a parallel computer works influence the approaches they take? Can the choice of programming approach lead to inefficient solutions? Do we need new ways to program parallel systems? This session will explore common approaches for developing parallel programs and how they can limit scalability and reliability, whether the programs are for single-chip parallelism or the world's largest parallel computers. It will also cover the importance of an execution model, its relationship to programming models and programming systems, and why we need to consider new execution models for the parallel systems of the future.
Also, see the offline responses to audience questions
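
As one concrete illustration of how a programming pattern can limit scalability: funnelling a global sum through a single process serializes the communication, while a collective reduction lets the library use a scalable algorithm. The sketch below uses MPI, which the abstract does not name, so treat the choice of programming system as an assumption.

    #include <mpi.h>
    #include <cstdio>

    // Illustrative only (MPI is an assumption; the abstract does not name it):
    // two ways to form a global sum. Funnelling every contribution through
    // rank 0 serializes the communication and limits scalability, while a
    // collective reduction lets the library use a scalable algorithm.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = rank + 1.0;

        // Pattern 1: rank 0 receives from every other rank, one message at a
        // time. This takes O(P) serialized steps on P ranks.
        double funneled = local;
        if (rank == 0) {
            for (int src = 1; src < size; ++src) {
                double v = 0.0;
                MPI_Recv(&v, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                funneled += v;
            }
        } else {
            MPI_Send(&local, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }

        // Pattern 2: a collective reduction, typically O(log P) steps.
        double collective = 0.0;
        MPI_Allreduce(&local, &collective, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("funneled = %f, collective = %f\n", funneled, collective);

        MPI_Finalize();
        return 0;
    }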
