The Two Fundamental Elements of Computation: State and Transformation


I was recently having a conversation about how computers work. And it was in the course of that conversation that I understood quite how ignorant I’d been my whole life about the real fundamental elements of computation. I don’t think there’s a ton to be gained from writing an article on a programming site all about the details of voltages, capacitors, resistors, etc, but I do think the core lesson I came away from that conversation with is incredibly useful to keep in mind when thinking about programming.

Electricity Makes Computation Go

At the most basic level, you probably do know that electricity is what makes computers work. What I’d never fully appreciated before was that that’s all that’s involved in making computers work. Everything just builds directly from that. Everything is just the subtle result of how electricity flows through an almost uncountable number of tiny circuits.

When you really get down to the heart of a CPU, it’s got only two things: lots and lots of memory (in various levels of speed-of-access) and some circuits with which it can take things out of memory, mutate the thing removed (through a series of “logic gates”), and put the transformed value back in memory. Obviously the details of CPU design are not what I’m looking to cover here (heck, I barely know anything about them), but I do want you to take away the core insight that really that’s all a “computer” is.

The Two Fundamental Units: State and Behavior

Now at the level of a CPU, I said that our core behavior was “take things out of memory, mutate their state, and put them back.” For example, we can think about a binary adder that would take a value of 01, run it through an adder with another value (we’ll pick another 01 for simplicity), and return the number 10 to memory. All the utility computers provide in our lives flows out of this fundamental mechanism: basic processing of values, plus the ability to store and access the result of the change.
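
If you want to see that mechanism in miniature, here’s a small sketch in Python. To be clear, nothing about it resembles real CPU design; the gate functions and the memory list are just stand-ins for circuits and RAM:

```python
# A toy sketch of "take things out of memory, transform, put them back."

def xor_gate(a, b):
    return a ^ b   # the sum bit of adding two one-bit values

def and_gate(a, b):
    return a & b   # the carry bit of adding two one-bit values

def half_adder(a, b):
    """Add two one-bit values, returning (carry, sum)."""
    return and_gate(a, b), xor_gate(a, b)

memory = [0b01, 0b01]                  # "take things out of memory"
carry, total = half_adder(memory[0], memory[1])
memory.append((carry << 1) | total)    # "put the transformed value back"

print(format(memory[-1], "02b"))       # prints 10, just like the example
```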

The reason I’ve gone through this whole story about how simple and fundamental computation actually is: it underscores something I’ve been learning about programming, namely that there are really only two things going on, state and behavior. All we’re doing with programs is trying to compose these two fundamental elements into a shape that can perform useful work for people. It’s easy to start thinking that programming is about APIs or web technologies or how automated your build scripts are. But really, programmers are just charged with transforming some state in the world (input data) in a precise way that allows other systems to use the resulting state (output data) to perform useful work.

The Conflation of State and Data in Programs

I’ve started to learn about functional programming in the last few years. All my learning has been quite casual, and I’ve never yet found a reason to do much more than play with the REPLs of functional languages and make some minor toy programs. But I’ve seen and learned enough to understand that the core appeal of real pure functional programming, for most programmers, is that you separate out, as much as you can, the state of your system from the transformations you perform on it.

For those even less aware of functional programming than me, a pure function is essentially something that takes some input and always returns the same output based on that input. This implies two core realities that aren’t true of most functions (or methods) in most computer programs in existence; both are sketched in code just after this list:

  1. A Pure Function Relies on Nothing But Its Inputs — The result of a pure function is determined only by its inputs. If you called add(2, 3) and sometimes got 6 instead of 5, you’d rightfully be confused. But most functions in programs, either intentionally or not, grab hold of something that wasn’t explicitly passed to them. Maybe it’s their containing class, or the global state space, or some framework-offered data. Any changeable data beyond the inputs that’s used to calculate the result makes the function hard to depend upon. It makes it so the function will sometimes, if four seemingly unrelated pieces of state are in one specific configuration, make your entire computer explode (or just give a confounding, error-causing result).
  2. A Pure Function Touches Nothing But Its Output — The other way we usually write impure functions is by creating methods that do some non-obvious state manipulation. You’re writing a function that transitions a user’s status from “Free User” to “Subscriber” and you have it send an email while it does so. Or maybe it charges a credit card, or even just saves the status change rather than just doing the transformation. In all these cases, that function is not “pure” in the mathematical sense.
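
To make both of those rules concrete, here’s a minimal sketch in Python. Every name in it (total_with_tax, upgrade_to_subscriber, send_welcome_email) is hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

# Pure: the result depends only on the inputs, and nothing else is touched.
def add(a, b):
    return a + b

# Impure in the first sense: the result depends on hidden, changeable state.
tax_rate = 0.10  # global state the function silently reads

def total_with_tax(price):
    # Same input can produce different outputs if tax_rate changes underneath you.
    return price * (1 + tax_rate)

# Impure in the second sense: calling it quietly does more than compute.
@dataclass
class User:
    email: str
    status: str

def send_welcome_email(address):
    print(f"emailing {address}...")  # stand-in for a real mailer

def upgrade_to_subscriber(user):
    user.status = "Subscriber"       # mutates its argument in place
    send_welcome_email(user.email)   # side effect: reaches out to the world
    return user

upgrade_to_subscriber(User(email="pat@example.com", status="Free User"))
```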

Functional programming is nicer than most of the ways programmers are used to writing code because you can reason about it with much greater ease. When you know that a thing takes an input, returns an output, and does nothing else, you don’t need to worry about something getting called one too many times and corrupting data. You don’t need to worry about the status of all the distant parts and states of the system; you just need to worry about making sure the outputs match the inputs.

State, Behavior, Functional Programming, and “Real” Programming

One of the harder problems in functional programming is where you hold mutable (changeable) state. Computers don’t work without changeable data, and humans get little use out of unrecorded data transformations. Pure functions don’t refer to or manipulate any state that exists outside of them. So how do you actually change and store data? Well, this won’t be a tutorial about state monads (or burritos), which I believe are considered the functional programmer’s answer to that problem.

But for the rest of us, using languages that aren’t built around the purity of functions, what does all this esoteric stuff mean? Well, I think it means one big thing: think about when and how you’re combining state and behavior in a system. If you put computation (the transformation of values) in the exact same place that you manipulate state, you’re probably going to make something you (rightly) end up being afraid of.

The reason people who really dive into functional programming never want to come back to the world of mutable data structures is that when mutable data exists, there’s a rather sizable probability that it’ll be mutated. And that causes headaches, because you then have to work out when and how pure computation and state manipulation combined to create your problem. So when and where you can, skip the problem: mutate data in a different place than you store it. You’ll probably still have frustrating bugs, but I’m willing to wager you’ll find some of them a lot easier to solve too.
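
Here’s one shape that advice can take in an everyday, mutable language. This is a sketch under loose assumptions: a plain dict stands in for whatever storage you actually have, and the function names are made up:

```python
# Pure core: compute the new record without touching storage at all.
def with_subscription(user_record):
    updated = dict(user_record)        # copy rather than mutate the input
    updated["status"] = "Subscriber"
    return updated

# Thin, impure shell: the only place where state actually changes.
database = {42: {"status": "Free User"}}  # a dict standing in for real storage

def save_user(user_id, record):
    database[user_id] = record

# The transformation can be tested on its own; the mutation lives in one spot.
save_user(42, with_subscription(database[42]))
print(database[42])  # {'status': 'Subscriber'}
```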
