Classic works often advertise their lofty ambitions in their first sentence:

“Whether I shall turn out to be the hero of my own life, or whether that station will be held by anybody else, these pages must show.”

“Arms, and the man I sing, who, forc’d by fate, | And haughty Juno’s unrelenting hate, | Expell’d and exil’d, left the Trojan shore…”

So begin *David Copperfield* by Charles Dickens and Vergil’s *Aeneid*.

“The recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication.”

So begins Claude Shannon’s magisterial “A Mathematical Theory of Communication.” He cites few other works, among them Nyquist’s *Certain Factors Affecting Telegraph Speed* and Kendall and Smith’s *Tables of Random Sampling Numbers* (which is exactly what its title suggests: a book of 100,000 random digits). Not, perhaps, signs of a clearly canonical work by an author speaking to enduring human questions. Indeed, it’s tempting to classify Shannon’s work as that of a technical specialist, worth examination by folks intent on building telegraphs or perhaps radios, but of no value to those of us who just make use of those delightful devices.

I want to suggest, though, that Shannon’s thought and work merit a place in the canon, and should be read by us and our posterity. Shannon’s ideas have changed not only the world—you are reading this on the product of his master’s thesis—but have changed the way we all think about thought, communication, and meaning.

In the first part I’ll offer some background and context and discuss Shannon’s Master’s Thesis. In the second part I’ll summarize his other work, and finally argue for why Shannon ought to be read as part of the canon.

Let me begin by placing Shannon in his context. He was born during the First World War and survived into the current millennium, if only just. He studied electrical engineering and mathematics at the University of Michigan before pursuing advanced work at MIT. During the Second World War he worked on fire-control systems and cryptography at Bell Labs, where he met Alan Turing and was impressed by the idea of the Turing Machine. After the war he continued publishing at Bell Labs and later taught at MIT. He was old enough to remember the world before electronic computers and lived long enough to see them take root as a foundational part of society.

Shannon’s 1937 Master’s Thesis, “A Symbolic Analysis of Relay and Switching Circuits,” merits a quick summary. First, Shannon offers some motivation: as we build more complex electrical machines, we want to minimize the number of expensive parts they contain. One way to reduce their cost is to convert a particular configuration of switches and relays into one that does the same thing, but uses fewer parts. Before Shannon, this simplification was done ad hoc, and we had no way of knowing whether a given configuration was optimal. It was an art rather than a science.

With that motivation in mind, Shannon defines a mathematical language—we would call it a Boolean algebra today—to describe switched electrical circuits. He assigns open switches (i.e. in the “off” position) a value of 1 and closed switches (i.e. in the “on” position) a value of 0, a quantity he calls the circuit’s “hindrance.” With those definitions in hand, he lays out a series of arithmetical and algebraic rules that apply to series and parallel circuits. (As an aside, Christmas lights have recently switched from being a series circuit, where one burned-out bulb would darken the whole string, to parallel, where that is not the case; we are much the better for this.)
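Shannon’s rules for series and parallel connections can be sketched in a few lines of Python. This is only a sketch, not Shannon’s notation verbatim: the function names are mine, and the arithmetic is shorthand for the tables he gives.

```python
# Shannon's convention: an open switch has value 1, a closed switch 0.
# Switches in series add (with 1 + 1 = 1); switches in parallel multiply.

def series(x, y):
    # Current flows through a series pair only if both switches are closed.
    return min(x + y, 1)

def parallel(x, y):
    # Current flows through a parallel pair if either switch is closed.
    return x * y

# A closed switch in series with an open one still blocks current:
assert series(0, 1) == 1
# An open switch in parallel with a closed one still conducts:
assert parallel(1, 0) == 0
```

Note that under this convention 0 means “conducts” and 1 means “blocks,” so series connection behaves like logical OR of hindrances and parallel connection like logical AND.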

Leaning on good mathematical tradition, Shannon then proves a variety of properties of electrical switches that attentive readers may recognize as looking suspiciously like the properties of statements in formal logic. Indeed, Shannon notes that he has borrowed the notation and structure of his propositions from formal logic. This *in itself* is somewhat remarkable: the laws of logic, which govern ideas in minds, also govern the physical arrangement of electrical circuits.

Developing that analogical relationship, Shannon goes on to prove De Morgan’s theorem for electrical circuits. Were this not enough to convince his professors to award him a degree, Shannon doubles down and offers the reader several examples of optimal circuits his analysis allows. Some are curiosities, like an electric combination lock. But the concluding circuit is worth some reflection.
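Because a switch takes only two values, De Morgan’s theorem for circuits can be checked exhaustively. A small sketch (function names mine, using the convention Shannon adopts: open switch = 1, closed = 0):

```python
from itertools import product

def series(x, y):
    return min(x + y, 1)

def parallel(x, y):
    return x * y

def negate(x):
    # A break-contact: closed exactly when the controlling switch is open.
    return 1 - x

# De Morgan for circuits: negating a series connection gives the parallel
# connection of the negations, and vice versa. Check every switch state.
for x, y in product((0, 1), repeat=2):
    assert negate(series(x, y)) == parallel(negate(x), negate(y))
    assert negate(parallel(x, y)) == series(negate(x), negate(y))
```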

The last section begins, “[a] circuit is to be designed that will automatically add two numbers, using only relays and switches.” Shannon goes on to describe a multi-bit binary adder. Each switch sets one digit of a binary number: 0010 might represent two, and 0011 might represent three. The circuit he describes would take those two inputs and produce 0101, which is five.

Indeed, following Shannon’s description, you could produce a device with switches on the front, where someone could toggle them to represent numbers, and have light bulbs for the corresponding sum light up. If you carefully combine lots of those little devices, you get a machine like the one that you are using right now to read this.
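The adder Shannon describes can be sketched in software, with logic operations standing in for the relays. This is my own illustrative rendering, not Shannon’s circuit diagram: each bit position is a full adder whose sum is the XOR of its inputs and whose carry fires when at least two inputs are on.

```python
def full_adder(a, b, carry_in):
    # Sum bit is the XOR of all three inputs; a carry is produced
    # whenever at least two of the inputs are 1.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_binary(x_bits, y_bits):
    # Bits are given most-significant first, e.g. [0, 0, 1, 0] for two.
    carry = 0
    out = []
    for a, b in zip(reversed(x_bits), reversed(y_bits)):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # final carry becomes the leading bit
    return list(reversed(out))

# 0010 (two) plus 0011 (three) gives 00101 (five):
assert add_binary([0, 0, 1, 0], [0, 0, 1, 1]) == [0, 0, 1, 0, 1]
```

Chain enough of these bit-slices together and you have the arithmetic core of a machine like the one in front of you.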

The whole thesis is but twenty-five pages long. Certainly it sounds interesting if you enjoy building electrical circuits, but what does it have to say that speaks to our whole humanity?

First, we can make physical machines that undertake the same sorts of logical operations that minds undertake. This surprising ability points to a deep connection between the physical world and the world of ideas. We often separate mind from matter, and prize the one over the other. But in Shannon’s work we see a clear argument that intellect and matter are somehow linked. What we might think of as abstract Platonic ideas—the laws of logic—are embodied in particular physical machines as well. And so too our bodies and minds are intertwined.

Second, Shannon is planting his flag in an old debate: to what extent should we use external tools to improve the mind? In the *Phaedrus,* Socrates expressed skepticism about writing:

“This invention [i.e. writing], O king,” said Theuth, “will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.”

But Thamus replied, “Most ingenious Theuth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess.

For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”

We might read Shannon’s thesis as a parallel fable. As the written word externalizes words from the speaker, so too Shannon’s invention of the adder externalizes the operation of reason from the thinker. Like Theuth’s letters, Shannon’s invention promises to be an elixir of thought and knowledge. To judge Thamus’s critique I leave to my dear reader. After all, who am I to judge, writing this on a computer, perhaps enjoying the same irony Plato enjoyed as he wrote the *Phaedrus*?

In the next part, we’ll look at Shannon’s “A Mathematical Theory of Communication,” which applies this same externalization to *communication*.