Every useful process goes through at least one iteration of pulling data into scope, transforming the data, and pushing that data into another scope.[1] Consider grep: it reads a line from stdin, examines the line to see if it should be forwarded to stdout, and repeats until stdin is exhausted. Conversely, cat isn’t independently useful; it only exists to forward the filesystem into another process, which may simply be our terminal.
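To make the shape of this loop concrete, here’s a rough Clojure sketch of grep; re-find is merely standing in for grep’s actual matching machinery:

    ;; pull a line into scope, decide whether to push it downstream, and
    ;; repeat until stdin is exhausted
    (defn grep [pattern]
      (doseq [line (line-seq (java.io.BufferedReader. *in*))]
        (when (re-find pattern line)
          (println line))))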
In almost every case, the transformation of a useful process is reductive; it emits less than it consumes. When a process has a larger volume of output than input, it’s generally providing a different (and ostensibly more useful) representation of its input. Examples of this include gunzip and a program which, given a seed value, emits a neverending stream of pseudorandom numbers; both are only useful when placed upstream of another process.
For sufficiently small inputs, this might be useful on its own. We can uncompress a small gzipped file and read it in the terminal, or swap the delimiters in a small CSV file for HTML tags and look at it in the browser. But few datasets are intrinsically small; if we’re able to take in our data at a glance today, we shouldn’t expect to be so lucky tomorrow.
This reduction of our input data isn’t arbitrary; we are distilling it down to its meaning, in context. If we’re concerned with the errors in our logs, we lose nothing by filtering out everything else. If we’re concerned with the number of errors day-over-day, we lose nothing by reducing each log file down to a single number. This is the essence of interpretation, as defined within the field of semiotics: the transformation of one sign into a related sign, which often exists within a different system of signs.
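For instance, a minimal sketch of that second reduction, assuming (purely for illustration) that an error line is any line containing “ERROR”:

    ;; distill a day's worth of log lines down to a single number
    (defn error? [line]
      (re-find #"ERROR" line))

    (defn error-count [lines]
      (count (filter error? lines)))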
Our colloquial understanding of interpretation captures most of this; we are assigning meaning to something. This process is subjective, and largely arbitrary; we are free to assign whatever meaning we like. There are often multiple useful interpretations, and we can consider them together to better understand what we’re interpreting.
Semiotics also tells us that each meaning is itself just another sign, which can be interpreted in turn. This idea, called unlimited semiosis in modern semiotics, was considered very unintuitive when it was first suggested in the 19th century by Charles Peirce. In the context of software, however, it’s self-evident: each function interprets its inputs, yielding a result that can be interpreted by other functions, and so on. There is no obvious or necessary end to this process; at some point we simply share our results with the outside world, allowing someone else to continue the chain of interpretation.
Meaning is created by combining software and data. Data, by itself, is inert. Software, by itself, is only a method for interpretation. Software interprets data, but the converse is also true. Different inputs will take different paths, and yield different meanings; our software is interpreted by the data it’s given.
This inverted perspective is useful when we look at how pieces of software are combined to create a whole. When one function calls another, it is interpreting the purpose of the function it calls. It is assigning meaning, in context.
Consider this expression, which removes all the bad values from a sequence of values:[2]

    (remove bad? s)
The remove function, on its own, is highly abstract; it treats all possible predicates and all possible sequences as interchangeable. By calling it with a specific predicate, we have made it more concrete. As we further compose upstream, and the meaning of s becomes more defined, so too does the meaning of our expression. If we think (remove bad? ...) lends itself to multiple interpretations, by multiple sequences, we might even give it a name:

    (def good (partial remove bad?))

As we compose functions, our program’s interpretation of its input becomes increasingly abstract; by filtering out bad? values, we treat any interleaving of those values as if they were the same.
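To make this concrete, suppose (purely for illustration) that the bad values are keywords; any two sequences that differ only in their interleaving of bad values are interpreted identically:

    ;; an assumed, illustrative predicate
    (def bad? keyword?)
    (def good (partial remove bad?))

    (= (good [1 :oops 2 3])
       (good [1 2 :oops 3]))    ;=> true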
Not every function, however, makes an equal contribution to this growing abstraction; remove is just a different idiom for filter:

    (defn remove [predicate s]
      (filter
        (complement predicate)
        s))
Likewise, complement just negates a function’s return value, turning false to true and vice-versa. These functions do not assign meaning; they attempt to preserve it in a different form. Rather than interpreting, they are translating.
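Indeed, complement is small enough to show in full; give or take some arity-specialization, this is how clojure.core defines it:

    ;; a translation, not an interpretation: the predicate's meaning is
    ;; preserved, just negated
    (defn complement [f]
      (fn [& args]
        (not (apply f args))))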
Of course, any semiotician will tell you that translation isn’t easy, and some meaning is invariably lost along the way. But these strata in our codebases, which attempt to translate rather than interpret, are qualitatively different; they are what we call “glue code.”
This, however, only defers the act of interpretation to a higher stratum. Our code exists to be interpreted, either by data or by other code composed atop it. We leave room for interpretation using two fundamental mechanisms: function references and conditional execution.
Both of these mechanisms allow a function’s inputs to influence its execution. Consider this implementation of filter:

    (defn filter [predicate s]
      (cond
        (empty? s)
        []

        (predicate (first s))
        (cons
          (first s)
          (filter predicate (rest s)))

        :else
        (filter predicate (rest s))))
First we check if the sequence is empty; if so, we return an empty sequence. If it isn’t empty, we check if the first element satisfies the predicate; if so, we prepend it onto the result of filtering the rest of the sequence. Otherwise, we omit the element from our result.
The first clause is simple conditional execution; given an empty sequence, filter will short-circuit. The second clause is both a function reference and conditional execution; filter will call whatever predicate we’ve provided, and use the returned result to choose one of two branches.
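From the caller’s side, both mechanisms are exercised through a single function reference:

    ;; filter interprets the sequence, and odd? is interpreted by its elements
    (filter odd? [1 2 3 4 5])    ;=> (1 3 5)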
But these are implementation details; the person calling filter should be able to focus on its semantics, not the precise way it uses its inputs. As implementors, first we decide what sorts of interpretations our function will allow, and then we determine what minimal set of inputs will allow those interpretations.
In the above implementation, for instance, we don’t wrap each recursion in Clojure’s lazy-seq macro, meaning the entire filtered sequence is eagerly computed. If we had made it lazy, the code atop filter would have more control over where and when each element of the input sequence is processed. Alternatively, if we wanted to remain eager, we could have exposed a way for the code invoking filter to control what sort of collection it constructed, rather than always returning a cons-based list.
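As a sketch (and not clojure.core’s actual definition), the lazy variant only requires deferring each recursion:

    ;; each step is deferred until a consumer asks for the next element
    (defn lazy-filter [predicate s]
      (lazy-seq
        (cond
          (empty? s)
          []

          (predicate (first s))
          (cons (first s) (lazy-filter predicate (rest s)))

          :else
          (lazy-filter predicate (rest s)))))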
A broader range of possible interpretations isn’t necessarily better; simplicity has its own benefits. If we’re happy discarding our code tomorrow, we can focus entirely on our needs today. If we want our code to survive a bit longer, however, or even be a general-purpose library, we need to be more expansive.
In general-purpose code, we should rely on function references wherever possible. In addition to the first-class functions that are common in Clojure, these encompass any object instance with one or more associated methods.[3] Function references provide an open mechanism for interpretation; to extend the set of possible interpretations, you just have to write your own function.
Conditional execution, conversely, is a closed mechanism; to extend the set of possible interpretations we need to edit a preexisting function. In general-purpose code, this is almost always driven by some sort of open classifier function, which reduces all possible values down to a finite set of categories. In Java, for instance, equals, compareTo, and hashCode reduce their inputs down to 2, 3, and 2³² categories, respectively.
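Clojure’s compare plays the same role as compareTo; a hypothetical sorted structure would branch on which of the three categories it returns:

    ;; conditional execution driven by a classifier: compare reduces any
    ;; pair of comparable values to one of three categories
    (defn insert-direction [a b]
      (let [c (compare a b)]
        (cond
          (neg? c)  :left
          (zero? c) :here
          :else     :right)))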
In some cases, general-purpose code may also have conditional execution based on internal datatypes. The semantics of a red-black tree, for instance, are entirely driven by an externally provided ordering function, but its internal bookkeeping will have lots of ifs or matches dealing with red and black nodes.
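A hypothetical fragment of that bookkeeping, with an invented node representation, might look like this:

    ;; conditional execution over an internal datatype: callers supply the
    ;; ordering, but never see the colors
    (defn- needs-rebalance? [{:keys [color left]}]
      (and (= :red color)
           (= :red (:color left))))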
If your code has conditional execution based on public datatypes, however, it’s not meant to be general-purpose; it’s what we call “business logic.” This is the upper stratum of our application, interpreted by data rather than other code. Here, the closed nature of conditional execution is useful; we can understand the meaning of a given input by looking at a single point in our codebase. What endpoints does our API offer? Just look at the file where all of the routes are defined.
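A sketch of such a file, using hypothetical handlers and a bare map rather than any particular routing library:

    ;; business logic: conditional execution over a public datatype,
    ;; gathered in a single place
    (defn list-users [request]
      {:status 200 :body "..."})

    (defn create-user [request]
      {:status 201})

    (def routes
      {[:get "/users"]  list-users
       [:post "/users"] create-user})

    (defn handle [request]
      (if-let [handler (routes [(:method request) (:path request)])]
        (handler request)
        {:status 404}))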
Almost anyone who writes software for a living can tell the difference between glue code, library code, and business logic. The first time we heard each term, it was fairly clear from context and the names themselves what was being described. If someone doesn’t understand, we assume they’ll figure it out soon, just like we did.
We may have a solid tacit understanding of these concepts, but we would struggle to explain them, or how they relate to each other. Semiotics seems to provide a framework that can formalize and correlate this knowledge. Put another way, I believe every skilled software designer has an innate, mostly undeveloped talent for applied semiotics, and I’m curious what would happen if they tried to develop it.
Unfortunately, I don’t think simply reading a seminal text on semiotics will suffice; the interpretation done by software is vastly more reductive than that done by humans. When we use the idea of a library to denote a book, or vice-versa, the amount of detail in our mental conception changes very little, if at all. In many contexts, they will have very similar connotations. But in a computer, a library-sized dataset is not an abstract symbol of knowledge; it’s a library’s worth of data, which is not at all the same as a book’s worth of data. Not everything semiotics teaches us will be applicable in our day job.
So we’ll have to pick and choose, casting a wide net and holding onto the lessons that seem to resonate. To the extent that building software is about interpretation, we should study semiotics. To the extent that it’s about naming, we should study analytic philosophy. To the extent that it’s about reasoning over systems too large for any one person to understand, we should study sociology. We should learn whatever we can, and share it as widely as possible; even if computer science is a branch of mathematics, software design is not, and we should stop pretending otherwise.
1. For the purposes of this essay, a “process” is something which has execution isolation, at least partial data isolation, and is sequential. This encompasses the original UNIX processes, modern threads, Erlang’s processes, Smalltalk-72’s objects, and any number of other constructs.

2. Using Clojure to explain general software concepts is a habit I’m still trying to break. Sorry.

3. If you’ve never had a smug Lisp weenie tell you that “objects are just closures”, try spending more time online.