Rethinking the debugger

ScalaCamp 2014, Krakow
Iulian Dragos, Lightbend Inc.

# Reactive applications

- react to events
- react to load
- react to failure
- react to users

# Essentially: *async* and non-blocking

# Async programs

Different parts of the program are executed in different logical threads...

... and the programmer must ensure correct coordination and communication between the parts.
## Concurrency abstractions

Threads and locks are replaced by higher-level abstractions:

- **Futures** (delayed computation)
- **Actors** (message passing)
- **Parallel collections** (implicit parallelism)
# Futures

A future is a proxy for a not-yet-computed value, whose computation happens on a **different** (logical) thread.

```scala
import scala.concurrent._

val fTweets = future { getAllTweets(user) }

// also a future
val nrOfTweets = => ts.size)

nrOfTweets onSuccess { case n => println(n) }
```
# How do we debug such programs?
# Debugging

- go from (undesirable) effects **back** to causes
- attempt a fix
- rinse and repeat (go **forward**)
# Going back in time
## Call stack?

```
java.lang.Exception
	at test.FuturesTest$$anonfun$simpleUse$1$$anonfun$2$$anonfun$apply$1.apply$mcI$sp(FuturesTest.scala:27)
	at test.FuturesTest$$anonfun$simpleUse$1$$anonfun$2$$anonfun$apply$1.apply(FuturesTest.scala:25)
	at test.FuturesTest$$anonfun$simpleUse$1$$anonfun$2$$anonfun$apply$1.apply(FuturesTest.scala:25)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$
	at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(
	at
```

# Actors, futures, parallel collections

... use a thread pool for executing their computations.

- Each future (or actor) may execute on a different thread
- The call stack and the control flow are no longer the same
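The mismatch is easy to reproduce. A minimal sketch (the names `LostCaller` and `businessLogic` are illustrative, not from the talk):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

object LostCaller extends App {
  def businessLogic(): Future[Int] =
    Future { throw new Exception("boom") } // body runs on a pool thread

  try Await.result(businessLogic(), 5.seconds)
  catch {
    case e: Exception =>
      // The trace shows only the future body and thread-pool internals;
      // the caller's frames (main, the call site of businessLogic) are gone,
      // because the stack was unwound when the task was handed to the pool.
      e.getStackTrace.foreach(println)
  }
}
```

The printed frames look like the ones on the previous slide: all pool machinery, no user-level control flow.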
# Let's fix it
# Debuggers

- Event-based (log/trace based)
  - record during execution
  - *post-mortem* debugging
- Breakpoint-based
  - interact with a *live* system
  - the standard for sequential programs
## Improving live debuggers

- Automatically collect call stacks at **points-of-interest**
  - future creation/chaining
  - actor message send
- ... together with *all* data on the stack frame
  - location, local variable values
## Improving live debuggers

- Attach the collected stack to one or more relevant objects
  - the future and the future body
  - the message object
- Present them in addition to the existing information when a breakpoint is hit


- automatically add a breakpoint at POIs (method entry point)
- when a breakpoint is hit:
  - collect stack data
  - associate the data with the object of interest (usually the argument to the POI)
  - resume the program
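The bookkeeping behind those steps can be sketched as a weak map from carrier objects to captured stacks. This is a minimal illustration, not the actual Scala IDE debugger implementation; `AsyncFrame`, `record`, and `lookup` are hypothetical names:

```scala
import java.util.WeakHashMap

// one captured stack frame: source location plus local variable snapshot
final case class AsyncFrame(location: String, locals: Map[String, String])

object AsyncStacks {
  // weak keys: entries vanish once the message/future body is garbage-collected
  private val stacks = new WeakHashMap[AnyRef, List[AsyncFrame]]

  // called when a POI breakpoint is hit: snapshot the current stack and
  // attach it to the carrier (e.g. the message or the future body)
  def record(carrier: AnyRef): Unit = {
    val frames = Thread.currentThread.getStackTrace.toList.map { f =>
      AsyncFrame(s"${f.getClassName}.${f.getMethodName}(${f.getFileName}:${f.getLineNumber})", Map.empty)
    }
    stacks.put(carrier, frames)
  }

  // called when a regular breakpoint is hit: show the async stack, if any
  def lookup(carrier: AnyRef): Option[List[AsyncFrame]] = Option(stacks.get(carrier))
}
```

Using weak references keeps the overhead bounded: the debugger never pins objects that the program itself has dropped.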
# Example

For an actor, the POI is a message send:

```scala
override def !(msg: Any) {
  send(msg, Actor.rawSelf(scheduler))
}
```

- in bytecode, `!` appears as `$bang`
- the carrier is `msg`
## Overhead

![timings](img/async-debugging-timings.png)
# Other uses

- code that reifies continuations
- Play *iteratees*
- parallel collections
- data-flow debugging
# Demo!
# Going forward
# Step Into?

Actors communicate by sending messages, much like method calls...

## *but they are asynchronous*
# Step Message

- Let's add the ability to step-with-message
- stop the program when the **next message sent** by this actor is processed
  - this can happen in a **different** thread
  - other messages might be exchanged in the meantime by these two actors
# Implementation notes

- the next message sent from this actor is saved
- each dispatched message is compared with the *target*
- when found, do one more step-into and **break**
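Those three notes boil down to a tiny amount of state. A sketch, assuming the debugger can hook message send and dispatch (`sendHook` and `dispatchHook` are hypothetical names, not a real debugger API):

```scala
object StepWithMessage {
  @volatile private var target: AnyRef = null

  // the user asked to "step with message": remember the next message sent
  def sendHook(msg: AnyRef): Unit =
    if (target == null) target = msg

  // every dispatched message is compared with the target by identity;
  // on a match the debugger does one more step-into and breaks
  def dispatchHook(msg: AnyRef): Boolean = {
    val hit = msg eq target
    if (hit) target = null
    hit
  }
}
```

Comparing by identity (`eq`) rather than equality matters: two actors may exchange structurally equal messages in the meantime, and only the exact saved message object should trigger the break.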
# What else?
# Custom info

- sender, supervisor, children
- actor mailbox
- success/failure handlers
# Collect only some *traces*

- user-defined locations (packages, classes)
- infer locations based on where breakpoints are set
- collect only some *frames* (exclude scala.* frames)
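Frame filtering is the simplest of the three. A sketch of what "exclude scala.* frames" could look like (`FrameFilter` and the prefix list are illustrative; a real debugger would make the prefixes user-configurable):

```scala
object FrameFilter {
  // keep only user-level frames when collecting an async stack
  def userFrames(frames: Array[StackTraceElement]): Array[StackTraceElement] =
    frames.filterNot { f =>
      val cn = f.getClassName
      cn.startsWith("scala.") || cn.startsWith("java.") || cn.startsWith("sun.")
    }
}
```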
# Summary

- call stack -> async stack
- step-into -> step-with-message
# Thank you!