Why concatenative programming matters

Jon Purdy's account of why concatenative programming matters focuses on static types, which is an odd choice; it seems to me irrelevant to why these languages are interesting. (I suspect it's just mistitled; it could more accurately be called “Some interesting things about concatenative languages”.) If stack-based (“concatenative”) languages are interesting, it's not because they're especially amenable to static analysis or because their data flow incarnates a certain type system, but because of the expressive possibilities they demonstrate. In particular:

  1. Points-free style matters, because it makes code shorter. Many variables have uninformative names like x, and it loses nothing to leave them out. Even those with informative names are usually repeated more often than justified by their value as comments.
  2. ...but writing only in points-free style is a pain (even for Chuck Moore). So binding variables shouldn't be considered shameful, as it often is in Forth culture.
  3. ...but having lots of combinators available makes it much easier. Factor is less puzzle-like than Forth, partly because it has lambda (in the form of quotations) and plenty of combinators.
  4. Stackwise concatenation is not the only reasonable default composition operator. It has a wonderfully simple implementation and operational semantics, but it's hard to use in large expressions or with nonlinear dataflow. Lambda-calculus-based composition combinators like o* and h may be easier to use.
  5. Code need not have tree structure. The great success of expression languages has accustomed us to thinking that programs must be trees, but those in stack languages are (mostly) sequences. There is another way! (So what about dag and digraph structures?)
  6. Macros and dynamism work well in low-level languages. These two features are most common in high-level languages, but this is largely a historical accident. Forth happily allows redefining anything at runtime, and uses macros (in the form of compile-time words) for its control structures. Its users find both hugely convenient, and neither is a common source of problems. (Many assemblers also get a lot of power from macros, which is one of the reasons their users were loath to abandon them, but this lesson has been forgotten with their decline.) (This has nothing to do with concatenative languages — just Forth — but it's important enough to mention anyway.)
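Points 1–3 can be illustrated even outside a concatenative language. Here is a rough sketch in Python (names like `normalize` and `compose` are invented for illustration): the pointful version names every intermediate value, and none of the names earns its keep; the points-free version composes named steps instead.

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

# Pointful: every intermediate value gets a name like `lowered`,
# and the names add nothing the step names don't already say.
def normalize_pointful(words):
    lowered = [w.lower() for w in words]
    stripped = [w.strip() for w in lowered]
    return sorted(set(stripped))

# Points-free: the same pipeline as a composition of named steps,
# with the uninformative intermediate names left out.
lower_all = lambda ws: [w.lower() for w in ws]
strip_all = lambda ws: [w.strip() for w in ws]
normalize = compose(sorted, set, strip_all, lower_all)
```

The combinator (`compose`) is doing the work a stack language's concatenation does implicitly; the point of item 3 is that having a stock of such combinators on hand is what makes the style livable.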
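The "wonderfully simple implementation and operational semantics" of item 4, and the sequence structure of item 5, fit in a few lines. In this sketch (word names invented for illustration), a program is just a list of words, and composing two programs is literally list concatenation:

```python
def run(program, stack=None):
    """Interpret a program (a flat sequence of words) over a stack.
    Composition of programs is list concatenation; there is no tree."""
    stack = [] if stack is None else list(stack)
    words = {
        "dup":  lambda s: s.append(s[-1]),
        "drop": lambda s: s.pop(),
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
    }
    for w in program:
        if w in words:
            words[w](stack)
        else:
            stack.append(w)  # anything else is a literal that pushes itself
    return stack

square = ["dup", "*"]
double = ["dup", "+"]
# Concatenation composes: square, then double.
assert run([3] + square + double) == [18]
```

The trouble the post alludes to shows up as soon as a computation needs two earlier results in the wrong order: then you are juggling `dup`, `swap`, and friends instead of just concatenating.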

I suspect stack-based languages per se don't matter that much any more, but they illuminate dimensions of the language design space we wouldn't otherwise notice.


  1. Common subexpression elimination can be thought of as changing code trees into code dags.
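A small sketch of that reading (representation invented for illustration): hash-consing an expression tree makes equal subtrees the same object, which is exactly the tree-to-dag rewrite that CSE performs.

```python
def cse(expr, seen=None):
    """Return a structurally shared version of expr: equal subtrees
    become the *same* object, turning the tree into a dag.
    Expressions are tuples like ('+', a, b); anything else is a leaf."""
    if seen is None:
        seen = {}
    if not isinstance(expr, tuple):
        return expr
    node = (expr[0],) + tuple(cse(e, seen) for e in expr[1:])
    return seen.setdefault(node, node)

tree = ('+', ('*', 'x', 'y'), ('*', 'x', 'y'))
dag = cse(tree)
assert dag[1] is dag[2]  # the two ('*', 'x', 'y') subtrees are now one node
```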

  2. I'm finding much value in concatenative programming built upon John Hughes' arrow model. Essentially, the underlying data structure is a tree instead of a stack, but the code itself is still point-free. This works especially well in combination with Huet zippers, which enable a concept of 'navigation' through the tree (i.e. a cursor-like model). And Huet zippers can be easily augmented with a user model, e.g. 'hands' to carry things.
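    The arrow-based system itself isn't described in enough detail here to reproduce, but the Huet zipper underneath it is standard. A minimal sketch over binary trees (representation and function names invented for illustration): the cursor is a focused subtree plus a path of contexts recording how to rebuild the root.

    ```python
    # A Huet zipper: a focused subtree plus a path of contexts back to
    # the root. Trees are ('node', left, right) tuples or leaf values.
    def down_left(focus, path):
        _, l, r = focus
        return l, [('L', r)] + path      # remember the right sibling

    def down_right(focus, path):
        _, l, r = focus
        return r, [('R', l)] + path      # remember the left sibling

    def up(focus, path):
        side, sibling = path[0]
        if side == 'L':
            return ('node', focus, sibling), path[1:]
        return ('node', sibling, focus), path[1:]

    def replace(focus, path, new):
        """Edit at the cursor without touching the rest of the tree."""
        return new, path

    tree = ('node', ('node', 1, 2), 3)
    z = (tree, [])
    z = down_left(*z)    # focus on ('node', 1, 2)
    z = down_right(*z)   # focus on 2
    z = replace(*z, 99)
    while z[1]:          # walk back up, rebuilding along the path
        z = up(*z)
    assert z[0] == ('node', ('node', 1, 99), 3)
    ```

    The 'hands' idea would be one more field alongside the focus and path: a place to park a detached subtree while navigating elsewhere.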

    Anyhow, there are several benefits to concatenative programming that Jon Purdy did not describe.

    If you have any interest in automatic code generation (e.g. via logical search, genetic programming), streaming code (e.g. for serializing values, keeping them up-to-date, or remote control of a system), or pattern recognition (machine learning, Markov models, predicting code in a stream, macro extraction, programming by example, automatic rewriting or refactoring), then concatenative programming models are very difficult to beat for such use-cases.


It's OK to comment on old posts.