Thoughts on ...

The date shown on this site reflects the last update time (yes, I do plan to keep updating this *v*). This blog post is more of a collection of scattered thoughts than a polished essay, so the wording might be a bit casual here and there.

2025.06.25

Puzzle Games and Figures

In the past half year, I played two puzzle games: Chants of Sennaar and The Witness. One thing they have in common is the need to spot patterns in visuals. I realized that this kind of pattern-spotting skill transfers well to reading papers. For example, when I come across a sentence like “As shown in Figure X/Table X…”, I skip the text, jump straight to the figure, and try to guess what it’s saying. If I can come up with a coherent interpretation on my own, I’ve usually already grasped the gist. If some parts are still fuzzy, I go back and skim the related explanation.

2025.06.19

CPS: A Story About Promises

Here are some references I found helpful:

  1. CMU 15-150, Lecture 11: Continuation-Passing Style (Principles of Functional Programming, M23). Instructor: Brandon Wu; Language: SML. Overview: CPS intuition and how to mechanically transform recursive functions into CPS, with some fun classroom interactions.
  2. Cornell CS 6110, Lecture 13: Continuation-Passing Style. Language: (pure) untyped lambda calculus. Overview: a basic introduction to the CPS transformation for untyped lambda terms (variables, abstractions, and applications).
  3. Harvard CS 152, Lecture 11: References; Continuation-Passing Style. Language: (applied) untyped lambda calculus. Overview: builds on the basic lambda calculus with a few additional constructs, which I found easier to grasp.
  4. SICP §4.3, Variations on a Scheme: Nondeterministic Computing. Language: Scheme. Overview: it never explicitly mentions CPS, but the core is CPS in disguise. You implement amb to explore nondeterminism; it’s super cool to try out in Scheme.
  5. POPL'25, The Duality of λ-Abstraction. Language: simply typed lambda calculus. Overview: a new paper (2025!) that got me back into CPS. It presents a typed lambda calculus extended with a co-abstraction operator and explains it using CPS.
  • If you think of the CPS transformation as a translation from a source language to a target language, then #2, #3, and #5 represent increasingly complex source languages. #5 is especially tricky because it introduces co-abstraction, but being typed helps, since type annotations can often guide your thinking. If you’re curious, just search “Continuation-Passing Style notes” on Google; there are lots of good lecture notes from top universities out there. Pick one that makes sense to you.

So, what is CPS (Continuation-Passing Style)? It’s a programming style in which every call is a tail call: any recursive function can be rewritten so that its recursive calls end up in tail position, which is why compilers can use it for optimizations like tail-call elimination. But the CPS transformation isn’t super intuitive, even though it’s built entirely from lambda calculus and currying, with no fancy new concepts. And that’s kind of the point: using existing tools to express a new way of thinking is where the challenge lies.
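To make that concrete, here’s a minimal sketch in Python (my own toy example, not from any of the references above). The pending multiplication in the direct-style factorial moves into the continuation, so the recursive call becomes a tail call. Note that CPython doesn’t actually eliminate tail calls, so this is purely illustrative:

```python
# Direct style: the multiplication happens AFTER the recursive call returns,
# so every call keeps a stack frame alive until the whole chain unwinds.
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

# CPS: take a continuation k and promise to pass the result to it.
# The pending multiplication now lives inside the continuation, and the
# recursive call is the last thing the function does (a tail call).
def fact_cps(n, k=lambda r: r):
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

assert fact(5) == fact_cps(5) == 120
```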

Here’s how I like to think of CPS: it’s like role-playing. Suppose I’m a lambda term $e$, and someone’s performing a CPS transform on me. What do I do? I receive a continuation $k$ and promise to evaluate $e$ and then apply $k$ to the result. As with recursion, you can explain CPS step by step using base cases and inductive steps, but sometimes you just need a little recursive leap of faith.

Once you have that leap, you still need to work through the details: (1) What counts as a value? In pure lambda calculus, it’s usually variables and abstractions. (2) How do you evaluate applications ($e_1\ e_2$) under CPS?

Here’s the standard call-by-value CPS transformation (note that this version evaluates the argument $e_2$ before the function $e_1$):

$$[x] = \lambda k.\ k\ x$$ $$[\lambda x.\ e] = \lambda k.\ k\ (\lambda x.\ \lambda k'.\ [e]\ k')$$ $$[e_1\ e_2] = \lambda k.\ [e_2]\ (\lambda v.\ [e_1]\ (\lambda f.\ f\ v\ k))$$
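The three rules translate into code almost verbatim. Below is a minimal sketch in Python over a toy lambda-calculus AST; all the names here (`Var`, `Lam`, `App`, `fresh`, `cps`, `show`) are mine, invented for illustration:

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: "Term"

@dataclass
class App:
    fn: "Term"
    arg: "Term"

Term = Var | Lam | App  # Python 3.10+

_ids = count()

def fresh(prefix: str = "k") -> str:
    """Fresh names for the binders that the CPS transform introduces."""
    return f"{prefix}{next(_ids)}"

def cps(e: Term) -> Term:
    k = fresh()
    if isinstance(e, Var):            # [x] = λk. k x
        return Lam(k, App(Var(k), e))
    if isinstance(e, Lam):            # [λx. e] = λk. k (λx. λk'. [e] k')
        k2 = fresh()
        return Lam(k, App(Var(k),
                          Lam(e.param, Lam(k2, App(cps(e.body), Var(k2))))))
    v, f = fresh("v"), fresh("f")     # [e1 e2] = λk. [e2] (λv. [e1] (λf. f v k))
    return Lam(k, App(cps(e.arg),
                      Lam(v, App(cps(e.fn),
                                 Lam(f, App(App(Var(f), Var(v)), Var(k)))))))

def show(t: Term) -> str:
    if isinstance(t, Var):
        return t.name
    if isinstance(t, Lam):
        return f"(λ{t.param}. {show(t.body)})"
    return f"({show(t.fn)} {show(t.arg)})"

print(show(cps(Lam("x", Var("x")))))
# (λk0. (k0 (λx. (λk1. ((λk2. (k2 x)) k1)))))
```

Running it on the identity function shows the role-playing promise directly: the transformed term takes a continuation k0 and immediately hands it the (CPS-converted) identity.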

“So… what’s the cost?” – There’s No Free Lunch.

When you’re reading about some cool new method “M” in a paper, the benefits are usually front and center. But if you’re left with a fuzzy feeling of “what’s the catch?”, that’s a good instinct. Every gain usually involves a trade-off—though authors often skip over those parts. Asking “what’s the cost?” can open up new perspectives.

Some examples:

  • LET (Logical Execution Time): a programming model used in real-time systems. A task reads its inputs at the logical start point $S$ and releases its outputs only at the logical end $E$, making the system more predictable. Since we don’t care how long the task actually runs between $S$ and $E$ (assuming instantaneous I/O), this eliminates I/O jitter, which is especially useful for periodic tasks; a toy sketch follows this list. But predictability comes at a price: time overhead. You have to over-provision the logical interval to be at least the task’s worst-case execution time, i.e., $E - S \ge \text{WCET}$.

  • Abstraction vs. Performance: A friend was recently complaining about Python’s leaky abstractions: “High-level languages hide pointers, but we still have to know about value vs. reference types; it’s all just pointers in disguise!” And yet you can’t manipulate memory directly (the aliasing snippet after this list shows the pointers peeking through). In a perfect abstraction, the underlying hardware would be totally invisible, but that’s not always realistic. We ask beginners “Do you know what a pointer is?”, which is weird if they’re learning Python; in assembly, the question makes sense. Haskell hides pointers completely, so it’s high-level. C and Rust give you direct pointer access, so maybe they’re mid-level languages (well, systems programming languages). And Python? It’s “mid-high-level”: it looks like there are no pointers, but they’re definitely there. Back to trade-offs: if you abstract away the hardware, you gain safety but lose fine-grained control, which hurts performance. Rust’s zero-cost abstractions sound ideal, promising both safety and performance. But what’s the cost? A more complex compiler. You can’t get something for nothing!
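Here’s the toy LET sketch promised above, a Python rendering of the idea; the function names are mine, not any real RTOS API. Inputs are read at the logical start $S$, and the output is held back until the logical end $E$, no matter how quickly the computation actually finishes:

```python
import time

def run_let_task(read_input, compute, write_output, period_s):
    """Toy LET executor: one logical interval [S, E] per loop iteration.
    period_s (= E - S) must be at least the task's WCET, or the result
    won't be ready by its release time E."""
    while True:
        start = time.monotonic()          # logical start S: sample inputs here
        y = compute(read_input())         # actual execution time is hidden ...
        release = start + period_s        # logical end E = S + period
        time.sleep(max(0.0, release - time.monotonic()))
        write_output(y)                   # ... the output is released only at E
```

Whether `compute` takes 1 ms or 90 ms of a 100 ms interval, the output always appears at $E$; that is exactly where the jitter goes away, and exactly why the interval has to be over-provisioned.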
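And the aliasing snippet promised above, showing Python’s “pointers in disguise”:

```python
a = [1, 2, 3]
b = a            # no copy: b is another reference to the SAME list
b.append(4)
print(a)         # [1, 2, 3, 4] -- the hidden pointer shows through

x = 10
y = x
y += 1           # ints are immutable: this rebinds y to a new object
print(x, y)      # 10 11 -- here the aliasing stays invisible
```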
