Ranjit Jhala, UC San Diego
The last few decades have seen tremendous strides in various technologies for reasoning about programs. However, we believe these technologies will only become ubiquitous if they can be seamlessly integrated within programming languages with mature compilers, libraries, and tools, so that programmers can use them continuously throughout the software development lifecycle, and not just as a means of post facto validation.
In this talk, we will describe how refinement types offer a path towards integrating verification into existing host languages. We show how refinements allow the programmer to extend specifications using types, to extend the analysis using SMT, and finally, to extend verification beyond automatically decidable logical fragments, by allowing programmers to interactively write proofs simply as functions in the host language.
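To convey the flavor of the specifications described above, here is a small sketch in Python (the abstract itself contains no code, so the decorator, function names, and checks below are purely illustrative): a refinement type pairs a base type with a logical predicate, e.g. `{d:Int | d /= 0}`. A real checker such as LiquidHaskell discharges these predicates statically with an SMT solver; the runtime checks here merely mimic the specification style.

```python
# Runtime sketch of the refinement-type idea: a base type plus a logical
# predicate over values. Real refinement type checkers verify these
# predicates statically via SMT; this decorator only checks them when the
# function is called, to illustrate the shape of the specifications.

def refined(pre=lambda *a: True, post=lambda r, *a: True):
    """Attach a refinement-style pre/postcondition to a function."""
    def wrap(f):
        def g(*args):
            assert pre(*args), f"precondition violated: {f.__name__}{args}"
            r = f(*args)
            assert post(r, *args), f"postcondition violated: {f.__name__}"
            return r
        return g
    return wrap

# Spec (refinement-type style): div :: Int -> {d:Int | d /= 0} -> Int
@refined(pre=lambda n, d: d != 0)
def safe_div(n, d):
    return n // d

print(safe_div(10, 2))   # accepted: the divisor satisfies d /= 0
try:
    safe_div(1, 0)       # rejected: violates the refinement d /= 0
except AssertionError as e:
    print("rejected:", e)
```

The point of the static approach is precisely that the second call would be reported at compile time, before the program ever runs.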
Finally, we will describe some of the lessons learned while building and using the language-integrated verifier LiquidHaskell. We will describe some problems that are considered hard in theory, but which turn out to be easy to address in practice, and we will describe other problems which might appear easy, but are actually giant roadblocks that will have to be removed to make verification broadly usable.
Erik Meijer, Facebook
In ancient times, the dream of alchemists was to transmute ordinary metals such as lead into noble metals such as gold. However, by using classical mathematics, modern physicists and chemists are much more successful in understanding and transforming matter than alchemists ever dreamt of.
The situation in software seems to be the opposite. Computer scientists have been unsuccessful in their quest to reliably turn formal specifications into code and to accurately understand the mechanics of side-effecting computation. On the other hand, by using classical mathematics, modern software alchemists have been extremely successful in transmuting training data into pure functions using various machine learning techniques, in particular deep learning.
Mechanically learning code from training data is often referred to as "Software 2.0" or "learning-based development". This new paradigm of software creation will require a radical rethinking of the traditional software engineering and imperative programming practices that were developed in the second half of the last century.
In this talk we will discuss how we are building new probabilistic frameworks and differentiable programming languages that support the composition and construction of learnable code, as well as how we can leverage machine learning at every level of the software stack to make developers more productive and services & products more efficient.
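To make "differentiable programming" concrete, the following is a minimal forward-mode automatic-differentiation sketch in Python using dual numbers. It illustrates the general technique of making ordinary code differentiable by construction, not the design of any particular framework mentioned in the talk; the class and function names are invented for this example.

```python
# Minimal forward-mode autodiff with dual numbers: each value carries a
# pair (value, derivative), and arithmetic propagates both components.
# Running ordinary code over Dual values yields derivatives "for free" --
# the core mechanism behind differentiable programming languages.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def grad(f):
    """Derivative of a scalar function f, obtained by seeding dx = 1."""
    return lambda x: f(Dual(x, 1.0)).dot

f = lambda x: 3 * x * x + 2 * x + 1   # ordinary code, now differentiable
print(grad(f)(2.0))                   # d/dx (3x^2 + 2x + 1) at x=2 -> 14.0
```

Because `Dual` values flow through unmodified host-language code, learnable components written this way compose like any other functions, which is the property the talk emphasizes.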
Margaret Martonosi, Princeton University
For decades, Moore’s Law and its partner Dennard Scaling have driven technology trends that have enabled exponential performance improvements in computer systems at manageable power dissipation. With the slowing of Moore/Dennard improvements, designers have turned to a range of approaches for extending scaling of computer systems performance and power efficiency. These include specialized accelerators and heterogeneous parallelism. Unfortunately, the scaling gains afforded by these techniques come at the expense of degraded hardware-software abstraction layers, increased complexity at the hardware-software interface, and increased challenges for software reliability, interoperability, and performance portability. As the role of the Instruction Set Architecture (ISA) interface shifts rapidly, new opportunities and challenges exist for hardware, software, language and tool designers in this “Post-Instruction Set Architecture” era of shifting abstractions. The talk will cover a range of design opportunities and challenges, with a particular emphasis on my group’s recent work on verification methods for memory models in hardware-software systems.
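To illustrate the kind of question memory-model verification answers (a self-contained sketch for intuition only; the verification methods discussed in the talk work quite differently), consider the classic "store buffering" litmus test. Under sequential consistency, enumerating all interleavings shows the outcome r1 = r2 = 0 is forbidden, while weaker models such as x86-TSO permit it because stores may be buffered past later loads.

```python
# Store-buffering litmus test:
#   Thread 0: x = 1; r1 = y        Thread 1: y = 1; r2 = x
# Under sequential consistency (SC), every execution is an interleaving
# of the two threads that preserves each thread's program order, and the
# outcome r1 = r2 = 0 cannot occur. We confirm this by brute force.

from itertools import permutations

# Flat op list: indices 0,1 belong to thread 0; indices 2,3 to thread 1.
ops = [("store", "x", 1), ("load", "y", "r1"),   # thread 0
       ("store", "y", 1), ("load", "x", "r2")]   # thread 1

def sc_outcomes():
    outcomes = set()
    for order in permutations(range(4)):
        # keep only interleavings that respect per-thread program order
        if order.index(0) > order.index(1) or order.index(2) > order.index(3):
            continue
        mem, regs = {"x": 0, "y": 0}, {}
        for i in order:
            kind, var, arg = ops[i]
            if kind == "store":
                mem[var] = arg
            else:
                regs[arg] = mem[var]
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes

print(sorted(sc_outcomes()))
# (0, 0) is absent: SC forbids it, whereas TSO would allow it.
```

Real hardware and compilers admit many more behaviors than this toy enumeration, which is precisely why automated verification of memory-model properties matters.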
Wed 20 Jun, 09:00 - 10:00
Thu 21 Jun, 09:00 - 10:00
Fri 22 Jun, 09:00 - 10:00