
Tail call optimization and debuggability

2023-06-12

Came across a good thread on Mastodon today about tail-call optimization (TCO) and how that affects the debuggability of your program.

Stephen Kell, June 12, 2023 [recurse.social]

There's the usual point about how tail calls typically break runtime stack traces, since producing a stack trace means enumerating the activation records of the frames currently on the stack. If you reuse the current frame for a tail call that you're about to make, you necessarily lose information from the stack trace.
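To make that concrete, here's a minimal C sketch (the function names are made up for illustration, and it assumes a compiler such as gcc or clang that performs tail-call optimization at -O2). With optimization on, a backtrace taken inside helper typically shows main calling helper directly, because middle's frame was reused for the tail call.

```c
#include <stdio.h>

/* helper() is invoked as the very last action of middle(), so an optimizing
 * compiler can turn the call into a jump and reuse middle()'s stack frame.
 * A backtrace taken inside helper() then reads main -> helper, with the
 * middle() frame missing. */
static int helper(int x) {
    printf("in helper; a backtrace here may not show middle()\n");
    return x * 2;
}

static int middle(int x) {
    return helper(x + 1);   /* tail call: nothing left to do in middle() */
}

int main(void) {
    printf("%d\n", middle(20));
    return 0;
}
```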

There are also some interesting points made about when and how we expect a stack trace to be that detailed. (Automatically inserted/optimized) tail calls are most typically used in functional languages to avoid stack blowups, and to allow recursive-function “loops” to be compiled down to the same constant-stack assembly that you'd typically get for an imperative language's while statement. Do we expect our debuggers to maintain “breadcrumbs” (as Andy Wingo puts it) for each back-edge in an imperative loop? If not, why do we expect those breadcrumbs for tail-call-optimized recursive loops? (One possible answer mentioned in the thread is that the expectations are different because the source is different. There really is a function call in the recursive version, and it would be counter-intuitive for our stack traces not to record all of the source-level function calls that are present, both in the static source and in the dynamic call stack.)
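To make the loop comparison concrete, here's another C sketch (illustrative names; it assumes a compiler that performs tail-call optimization at -O2): the tail-recursive "loop" and the imperative while loop compile down to essentially the same constant-stack code.

```c
#include <stdint.h>

/* Tail-recursive "loop": the recursive call is the last thing the function
 * does, so under TCO it becomes a jump back to the top of the function and
 * runs in a single stack frame. */
static uint64_t sum_to_rec(uint64_t n, uint64_t acc) {
    if (n == 0)
        return acc;
    return sum_to_rec(n - 1, acc + n);  /* tail call */
}

/* Imperative equivalent: a plain while loop, also one stack frame. */
static uint64_t sum_to_loop(uint64_t n) {
    uint64_t acc = 0;
    while (n != 0) {
        acc += n;
        n -= 1;
    }
    return acc;
}

int main(void) {
    /* Without TCO the recursive version needs a million frames and may blow
     * the stack; with TCO it runs in constant stack, just like the loop. */
    return sum_to_rec(1000000, 0) == sum_to_loop(1000000) ? 0 : 1;
}
```

Note that nobody expects a debugger to keep a breadcrumb for each iteration of sum_to_loop's while loop; the thread's question is why we expect one for each recursive call in sum_to_rec.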

Laurie Tratt also links to his own almost-20-year-old blog post on the topic, which in turn links to an academic paper discussing these tradeoffs.

Tail call optimization [Laurie Tratt]

[Smith1995] Randall B. Smith and David Ungar. "Programming as an Experience: The Inspiration for Self." ECOOP 1995.