
Professor from UC Berkeley to speak at UW-L’s Distinguished Lecture Series in Computer Science

Posted 6:15 p.m. Friday, Oct. 11, 2013

UC Berkeley Professor Katherine Yelick to present for UW-La Crosse’s Distinguished Lecture Series in Computer Science Monday, Oct. 21.

Katherine Yelick, associate laboratory director for Computing Sciences at Lawrence Berkeley National Laboratory and professor at the University of California, Berkeley, will be the speaker for UW-La Crosse’s Distinguished Lecture Series in Computer Science on Monday, Oct. 21.

Yelick will speak on “Avoiding, Hiding and Managing Communication” at 11 a.m., with registration starting at 10:30 a.m., at the Cleary Alumni & Friends Center. Her keynote presentation, “More Data, More Science and … Moore’s Law?” will follow at 5 p.m., with registration at 4:30 p.m., also at the Cleary Alumni & Friends Center. Admission to both presentations is free.

The UW-L Distinguished Lecture Series in Computer Science attracts internationally recognized leaders in computer science for lectures, technical symposia and workshops. The series was started in 1990 and is supported through the UW-L Foundation and the College of Science and Health.

If you go —

Who: Katherine Yelick
What: Symposium: “Avoiding, Hiding and Managing Communication”
Where: Cleary Alumni & Friends Center, UW-L
When: 10:30 a.m. registration; 11 a.m. lecture, Monday, Oct. 21
Admission: Free

If you go —

Who: Katherine Yelick
What: Keynote address: “More Data, More Science and … Moore’s Law?”
Where: Cleary Alumni & Friends Center, UW-L
When: 4:30 p.m. registration; 5 p.m. lecture, Monday, Oct. 21
Admission: Free

Symposium: ‘Avoiding, Hiding and Managing Communication’

Future computing system designs will be constrained by power density and total system energy, and will require new programming models and implementation strategies. Data movement in the memory system and interconnect will dominate running time and energy costs, making communication cost reduction the primary optimization criterion for compilers and programmers. Communication cost can be divided into latency costs, which are incurred per communication event, and bandwidth costs, which grow with total communication volume. Trends show growing gaps in both relative to computation, with the additional problem that communication congestion can worsen both in practice.

In this talk Yelick will discuss prior work and open problems in optimizing communication, starting with PGAS languages. This involves hiding communication cost through overlap and reducing communication frequency through caching and aggregation. Much of the compile-time work in this area was done in the Titanium language, where strong typing and data abstraction aid program analysis, while UPC compilers tend to use more dynamic optimizations. Many open questions remain in the design of languages and compilers, especially as communication hierarchies become deeper and more complex.

Bandwidth reduction often requires more substantial algorithmic transformations, although some techniques, such as loop tiling, are well known. These can be applied as hand optimizations, through code generation strategies in auto-tuned libraries, or as fully automatic compiler transformations. Less obvious techniques for communication avoidance have arisen in the so-called “2.5D” parallel algorithms, which she will describe more generally as “.5D” algorithms. These ideas are applicable to many domains, from scientific computations to database operations. In addition to having provable optimality properties, these algorithms also perform well on large-scale parallel machines. Yelick will end by describing recent work that lays the foundation for automating transformations to produce communication-optimal code for arbitrary loop nests.
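To make the abstract's mention of loop tiling concrete, here is a minimal sketch (our own illustration, not code from the talk) of a tiled matrix multiply. The idea is that each small block of the operands is reused many times while it is resident in fast memory, so traffic between slow and fast memory, the "bandwidth cost" above, shrinks roughly in proportion to the tile size.

```python
def matmul_tiled(A, B, n, tile=4):
    """Multiply two n x n matrices (lists of lists) using loop tiling.

    The three outer loops walk over (tile x tile) blocks; the three
    inner loops do a small dense multiply whose working set fits in
    cache, so each loaded block of A and B is reused ~tile times
    before being evicted.
    """
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                # One block-level update: C[ii:ii+t, jj:jj+t] +=
                # A[ii:ii+t, kk:kk+t] @ B[kk:kk+t, jj:jj+t]
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a_ik = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a_ik * B[k][j]
    return C
```

The same blocking idea, extended to distribute replicated blocks across extra processors, is the intuition behind the "2.5D" algorithms mentioned above, which trade a small amount of extra memory for provably less communication.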

Keynote lecture: ‘More Data, More Science and … Moore’s Law?’

In the same way that the Internet has combined with web content and search engines to revolutionize every aspect of our lives, the scientific process is poised to undergo a radical transformation based on the ability to access, analyze, and merge large, complex data sets. Scientists will be able to combine their own data with that of other scientists, validating models, interpreting experiments, re-using and re-analyzing data, and making use of sophisticated mathematical analyses and simulations to drive the discovery of relationships across data sets. This “scientific web” will yield higher quality science, more insights per experiment, an increased democratization of science and a higher impact from major investments in scientific instruments.

What does this “big science data” view of the world have to do with HPC? The terms “high performance computing” and “computational science” have become nearly synonymous with modeling and simulation, yet computing is as important to the analysis of experimental data as it is to the evaluation of theoretical models. Due to exponential growth rates in detectors, sequencers and other observational technology, data sets across many science disciplines are outstripping the storage, computing, and algorithmic techniques available to individual scientists. Along with simulation, experimental analytics problems will drive the need for increased computing performance, although the types of computing systems and software configurations may be quite different.

Yelick will describe some of the opportunities and challenges in extreme data science and its relationship to high performance modeling and simulation, including her research in the development of high performance, high productivity programming models. Her current research is largely focused on the problem of avoiding and minimizing the cost of communication, as data movement, both to memory systems and between processors, is a major barrier to scalability and energy efficiency.

In both simulation and analytics, programming models are the “sandwich topic,” squeezed between application needs and hardware disruptions, yet often treated with some suspicion, if not outright disdain. But programming model research is, or at least should be, an exemplar of interdisciplinary science, requiring a deep understanding of applications, algorithms and computer architecture in order to map the former to the latter. She will use this thread to talk about her research interests, how she selected various research topics, and what she sees as current open problems in parallel programming model design and implementation.

For further information about the lectures, contact Steve Senger, Computer Science: 608.785.6805.

