Sunday, September 18, 2016

HOPE 2016 Talk Summaries

I'm attending ICFP 2016 in Nara, Japan (near Osaka) this year. In previous years, I've really enjoyed Edward Yang's conference talk summaries, and decided that this year, I would take notes about the talks I attended and put them online myself.

Today, I attended the HOPE 2016 workshop on higher-order programming with effects, and took the following notes. These are all rough notes, taken in haste and based on a highly-jetlagged listener's poor listening comprehension. So assume all errors and misunderstandings are my fault!

Keynote (Leo White)

Due to jetlag, I overslept and missed it! Alas.

Effects

  • Effects via capabilities, Fengyin Liu (speaker)

    The idea behind this work is to introduce explicit capability values that enable or disable effects. So you might have a capability c : IO, which must be passed to all IO operations. If you don't have a capability to pass to an IO operation, you can't invoke it.

    print : IO -> string -> unit
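
    As a minimal Haskell sketch of the capability-passing idea (my own names, not the talk's -- the talk's setting is Scala/Dotty):

        -- A token witnessing permission to perform IO. Only code that
        -- is handed an IOCap can invoke effectful operations.
        newtype IOCap = IOCap ()

        printStr :: IOCap -> String -> IO ()
        printStr (IOCap ()) s = putStrLn s

        -- A function that takes no IOCap (and captures none) has no
        -- way to perform IO.
        twice :: (a -> a) -> a -> a
        twice f = f . f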

    Naively, though, there is an issue arising from the ability of closures to capture effect capabilities.

    So two function spaces are introduced: A => B, the usual ML-style function space, and A -> B, a function space whose inhabitants cannot capture capabilities or ML-style (=>) functions.

    Interestingly, the full substitution lemma doesn't hold: you have to prove a value substitution theorem instead. This suggests to me that there might be a modal variant of this system, Davies-Pfenning style.

    Liu suggested some type equivalences like A -> (B => C) == A -> B -> C, because the outer -> prevents the capture of capabilities. They can't prove the soundness of this by induction, of course, so I suspect they'll eventually want a logical relation to establish soundness. (A logical relation for Dotty would be exciting -- maybe chat with Amal Ahmed & Nada Amin about this?)

    Polymorphism introduces some subtleties (ie, you might not know whether a type variable will be instantiated to a pure type), and they also worry about verbosity in practice (which might hinder adoption by Scala programmers).

  • A Logical Account of a Type-and-Effect System, Kasper Svendsen (speaker)

    They have a parallel ML-style calculus with reads, writes and CAS. Locations are approximated in the type system with regions.

    • The region context is split into a public and private part, to control which locations you can get interference on.

    • Function types are annotated with the public and private regions they use, and with the read/write effects they have.

    • New regions can be introduced with dynamic extent. Private regions can be shared temporarily during parallel composition.

    • The type system makes region variables implicit -- this info is only in the typing derivation. So all terms look like ML terms.

    The goal is to prove the correctness of program transformations which rely on the semantics of effects. For example, a sequential composition can be equated to a parallel composition

    e; e' == e || e'

    when there is no interference between these two terms.
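
    In Haskell terms (my illustration, using the async library), the claim is that these two compositions are observationally equivalent when e and e' touch disjoint state:

        import Control.Concurrent.Async (concurrently_)

        seqVersion, parVersion :: IO () -> IO () -> IO ()
        seqVersion e e' = e >> e'            -- e; e'
        parVersion e e' = concurrently_ e e' -- e || e'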

    He explained the intuition with a binary logical relation, formulated within the Iris program logic (paper at POPL 2015). Here, the logical relation is a per-type family of simulation relations, encoded as Hoare triples in a unary program logic.

    Here, they use a really neat feature of Iris. Since Iris lets you use any monoid as a resource, they can use simulation relations themselves as resources! Then they express the parallel composition via a separating conjunction of simulations, one for e and one for e'. The fact that e and e' don't interfere means that e' steps are stuttering steps for e, and vice-versa. They even let simulation relations share state, when the state is read-only.

  • Simple Dependent Polymorphic IO Effects, Amin Timany (speaker)

    They want a logical relation for a type-and-effect system for IO. The IO effects are like regexps, but have effect variables, channels, and quantification over what is written -- eg, cat would have the effect:

    (forall x. Rd_a(x); Wr_b(x))*

    to indicate that it writes what it reads from a to b. They want the relation to relate (say) that a buffered implementation of cat refines the unbuffered version, and also to validate the parallel composition transformation for non-interfering operations.
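
    As a rough Haskell rendering of the cat example (my code; their calculus has channels and effect annotations as primitives):

        import Control.Concurrent.Chan

        -- Forever read a value x from channel a and write that same x
        -- to channel b: the effect (forall x. Rd_a(x); Wr_b(x))*.
        cat :: Chan String -> Chan String -> IO ()
        cat a b = do
          x <- readChan a
          writeChan b x
          cat a b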

    The language of effects is very rich, so I don't have a good feel for what it can and can't express. Eg, the reasoning for the buffering optimization was entirely type-based, based on the effect subtyping order!

Verification

  • Concurrent Data Structures Linked in Time, German Delbianco (speaker)

    This talk was about Hoare-style verification of highly concurrent data structures, such as "concurrent snapshots".

    Eg, you have a shared array a, and a function write(i,x) which makes an atomic update to the array, and a function scan(), which returns a snapshot of the state of the array at a particular instant.
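
    A toy Haskell version of that specification (mine -- it serializes everything through one atomic reference, so it illustrates the spec, not the lock-free algorithm):

        import Data.IORef
        import Data.Map (Map)
        import qualified Data.Map as Map

        -- write atomically updates one index; scan returns the whole
        -- "array" as it was at a single instant.
        write :: IORef (Map Int a) -> Int -> a -> IO ()
        write r i x = atomicModifyIORef' r (\m -> (Map.insert i x m, ()))

        scan :: IORef (Map Int a) -> IO (Map Int a)
        scan = readIORef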

    He then presented Jayanti's algorithm for this, and gave a nice intuition for why the right notion of correctness for snapshots is linearizability. What makes the algorithm tricky is that its linearization points are not fixed: they can vary with the runtime inputs.

    Then he introduced the notion of events in the ghost state, which are organized for each thread into subject and environment events, and which are each connected to an instant of time. The variation in where linearization points occur could then arise via modifications of when each event happened -- which is an update on the "linked list" of time, which can be modelled as ghost state!

    This has all been verified in FCSL, a Coq-based program logic that Ilya Sergey and Aleks Nanevski are some of the main forces behind.

    Ralf Jung asked if they had verified any clients, and German said they had used it for a few small clients, but nothing big. Ilya said that people just assumed snapshots were "obviously" good, but no one had any really good/complex examples of them. He said that they hoped to figure some out now that they had a declarative spec of them.

  • Growing a Proof Assistant, William Bowman (speaker)

    WB began with an example of some notations for an informal type soundness proof. He pointed out that some of the notations (like BNF) are formal, and others (like defining let-binding in terms of lambdas) are informal.

    Then he pointed out that there are tools like Ott which compile BNF to Coq/Agda/LaTeX, and that most proof assistants (like Coq) have APIs to define notations, tactics, and plugins. Then he said that this wasn't enough, and that he wanted to build a proof assistant focused on extensibility.

    The core type theory was TT (basically MLTT with a countable universe hierarchy), and the extensibility framework was Racket (to reuse as much of the macro machinery as possible).

    Then he began explaining how Cur could be used to implement pattern matching. Things like recursive functions required compile-time state to communicate information from outside the local syntax tree.

    Then he moved on to defining tactics, and gave a small language for this. He needed to support fully elaborating terms (so that user tactics could be defined solely on kernel syntax), and also a way to run cur code at expansion-time to support Mtac style tactics.

    This made me wonder about phase separation, but he immediately set this aside, along with several other issues (such as how to preserve properties like parametricity while supporting reflection on syntax).

    Someone asked about how to combine extensions. WB replied that macros support this because of the architecture of Racket macros, which essentially support open recursion to permit multiple macros to run at once. He also remarked that this creates classical problems from OO like diamond inheritance.

Compilation

  • Algebraic Effects for Functional Programming, Daan Leijen (speaker)

    Daan Leijen gave a talk about adding algebraic effects to Koka. Koka had effect types based on monads, but (a) he wanted a better story for composition, and (b) he wanted to be able to handle exotic control operators (like yield, exceptions, and async/await) in a uniform way.

    He has a full compiler for effect handlers in Koka, with row-polymorphic effect typing. The compilation scheme uses a type-directed CPS translation to implement effect handlers targeting JS, .NET, JVM (which are rather direct-style). Then he talked a bit about the semantics and syntax of Koka to set the stage.

    Next, he talked about examples, starting with exceptions. He mentioned that row types in Koka permit duplicated labels. I wonder how Koka picks which one to use?

    I also wondered about using algebraic effects for the "mock object" pattern from OO testing.

    Then he showed iteration, and remarked that handlers permit the handler to control whether iteration stops or not. Then he showed asynchronous callbacks (Node-style), and showed that you could pass the handler's resume continuation as an argument.
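
    A crude Haskell approximation of the iteration point (not Koka code): the consumer plays the role of the handler and decides whether iteration resumes.

        -- yieldAll "yields" each element to the consumer k; if k
        -- returns False, iteration stops, much as a handler may drop
        -- the resumption instead of calling it.
        yieldAll :: [a] -> (a -> IO Bool) -> IO ()
        yieldAll []     _ = return ()
        yieldAll (x:xs) k = do
          continue <- k x
          if continue then yieldAll xs k else return ()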

    Then he talked about the compilation to managed platforms where first-class continuations don't exist. This uses a type-directed style to avoid CPS where it isn't needed. The main issue is effect polymorphism, which he resolves by compiling the polymorphic code both ways, and then using a worker-wrapper transformation to pick the right representation.

    SPJ asked if he depended upon a defined order of evaluation. Daan said this was about effects for Haskell, and handed off to Sam Lindley, who said that the situation in Haskell was the same as in Koka -- you have to say what the evaluation order is (?).

  • Administrative Normal Form, Continued, Luke Maurer

    CPS is good for optimization, but due to the costs, not everyone uses it. Flanagan et al. pointed out that some CPS optimizations work in direct style, but not all of them do.

    Eg, case-of-case often creates values that flow to known call sites, and these values shouldn't have to be allocated. This work extends ANF to support this, with significant effect on heap allocation rates.

    Case-of-case can be solved by pushing the outer context into the branches; while there are engineering issues with code duplication, it is often helpful. Many compilers do this.
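
    In Haskell terms, the transformation looks roughly like this (my example):

        -- Before: the 'if' builds a Left or Right which the outer case
        -- immediately scrutinizes.
        f :: Bool -> Int -> Int -> Int
        f b x y = case (if b then Left x else Right y) of
          Left  l -> l + 1
          Right r -> r * 2

        -- After pushing the case into the branches, each constructor
        -- meets a known case, and the allocation disappears.
        f' :: Bool -> Int -> Int -> Int
        f' b x y = if b then x + 1 else y * 2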

    However, it doesn't work for recursive functions. In CPS this transformation happens "almost for free". That is, you can do the static argument transformation, and then inline the static continuation to get the beta-reductions to fire. This is "contification".

    So how does contification work in ANF? He noted that the static argument transformation only worked because the calls were tail calls. So they tag functions which are always tail-called (calling them "labels"), and add a new rewrite which moves outer cases into a label.
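
    Roughly, in Haskell-like terms (my sketch): a label is a local function that is only ever tail-called, so it need not be allocated as a closure and can compile to a jump.

        g :: Bool -> Int -> Int
        g b n =
          let j x = x + n            -- only ever tail-called: a "label"
          in if b then j 1 else j 2  -- both calls become jumps to j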

    He gave the operational semantics for join points, which are pretty sensible.

    Then he pointed out that it helps with list fusion. An unfold uses a step function state -> option (a * state), which lets you eliminate intermediate lists, but introduces useless Just/Nothing cruft. The new label/case-of-case machinery handles this and removes many of them; on some benchmarks, all of them.
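
    The step function type is the one familiar from Data.List.unfoldr; eg:

        import Data.List (unfoldr)

        -- Each step allocates a Just/Nothing that fusion would like
        -- to eliminate.
        countdown :: Int -> [Int]
        countdown = unfoldr (\k -> if k == 0 then Nothing else Just (k, k - 1))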

    This is in a new branch of GHC.

    I asked how they would know if they had all the new constructs they needed. SPJ said he didn't know any CPS optimizations that could not be handled now, but that he didn't have a proof. Someone asked about one benchmark which allocated less but ran more slowly. I didn't understand the answer, though. :( Sam Lindley asked about the connection to SSA and phi-functions. Luke said that labels are much like phi-functions in SSA.

Semantics

  • Functional models of full ground, and general, reference cells, Ohad Kammar

    How can we understand reference cells? They vary along multiple axes:

    • Locality -- freshness of newly allocated cells. (ie, ref 0 == ref 0 is always false)
    • Ground vs non-ground -- can cells store computations which can modify the heap (ref (int -> int))?
    • Full storage -- the semantics can depend on the shape of the heap (ref (ref bool)); see the sketch after this list.
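
    In Haskell, the latter two axes look like this (my example):

        import Data.IORef
        import Control.Monad (join)

        example :: IO Bool
        example = do
          r   <- newIORef True
          rr  <- newIORef r                    -- ref (ref bool): full storage
          act <- newIORef (writeIORef r False) -- a cell holding a heap-modifying
                                               -- computation: non-ground
          join (readIORef act)                 -- run the stored computation
          readIORef =<< readIORef rr           -- returns False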

    This talk is about full ground state. The approach is to give a denotational semantics for full ground state, in order to get an equational theory for programs. He wants to use it to analyze the value restriction, and relax it if possible. He also wants to validate runST.
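
    For reference, the Haskell idiom in question: runST seals off a computation's private state, so the whole thing can be given a pure type.

        import Control.Monad.ST
        import Data.STRef

        sumST :: [Int] -> Int
        sumST xs = runST $ do
          acc <- newSTRef 0                       -- private, local state
          mapM_ (\x -> modifySTRef' acc (+ x)) xs
          readSTRef acc                           -- no state escapes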

    He found a bug in his semantics last night, so he will discuss the general techniques used rather than a particular model.

    • Levy introduced a trick to handle circular ground state, using names for reference types. (A la class names in OO).

    • Kripke worlds are a finite set of locations, and a name for each location. A world morphism is a name-preserving injection. Worlds have monoidal structure. The language lives in the functor category over worlds, and so can interpret CCCs, and references at a world w are interpreted as the set of locations of that type.

    • To model computations, we need a monad over this functor category.

    To decide which equations we need: we have the usual dereference/assignment/allocation equations, but there is no complete axiomatization of state, so we have to just pick some. They picked a nontrivial one, the equation for runST.

    Then he gave the wrong monad, which used ends to model stores (I don't have a good feel for ends). It validated masking, but not dereferencing. Then he gave a second monad which had too little structure, in that it modelled dereferencing but not masking.

    Then he gave a small type system for runST. It interpreted each region label in runST as a world (as above).

    Lars Birkedal asked which equations he was interested in, and Ohad named some of them. Daan Leijen said he had done an operational proof of runST and didn't like it, and encouraged Ohad to keep going.