Is Facebook suffocating our ability for public discourse? Is YouTube degrading the morals of our children? These are the sorts of questions that our cultural conservatism inclines us to ask. We read Neil Postman’s Amusing Ourselves to Death, or Nicholas Carr’s The Shallows, and become acutely aware of the dangers of the digital tools we use every day. How do we rightly judge between good and bad uses of these machines? Like so many cultural debates, it’s a question of intentions and presuppositions.
As a software developer by trade, and a student of Western culture’s Great Books and traditions by desire, I sympathize with these fears and concerns as much as anyone. After spending too long on Twitter, or puzzling over a pesky bug at work, I’ve often wanted to give up on the whole affair, condemn the internet as a wasp-nest of nitwits, and retreat into a world where I can hole up comfortably with my pipe, Dostoevsky, and Moleskine notebooks, never to be seen again by the prying, shining fish-eyes of the machines. But this would not be entirely prudent.
Take a step back, for a moment, and consider: What questions are we asking of the machines to begin with? This idea, of asking questions of machines, might at first seem strange, but it is an integral step to using any tool. What is the tool for? How am I supposed to behave with it?
Joseph Weizenbaum, the great computer scientist and pioneer in artificial intelligence, addressed these questions in his prescient 1976 book, Computer Power and Human Reason. He noted that tools have a bias toward a particular action: They are created for a purpose. If I am found at the scene of a crime holding a smoking gun, presumably the observer at first glance will come to some rather particular conclusions about what I may have been doing in the moments leading up to my discovery.
But Weizenbaum goes further: The use of tools actively encourages the kind of behavior for which they are designed. Tools are a kind of language. Because a tool can only be used for certain things, and is manifestly inadequate for others, it dictates by its nature what sort of problems are fed through it, in the same way that the form of a language is often shaped (and shapes in turn) the kinds and qualities of the ideas expressed by means of it. Computers complicate this judgment of proper use by claiming to be universal tools. Computers can be programmed to help with nearly every problem in existence—or so it seems.
Because of this claim, computers not only vastly widen the scope of abilities imagined by the gullible human mind; they also restrict an imagination soaked in such digital methods to only the sorts of problems a computer is able to solve. Putting the situation this way, it is easy to call up one’s inner cynic and think of a dozen applications to which computers have been put that numb the mind to the finer things of life.
After all, Neil Postman has taught us well to think not only in terms of the barrage of information thrust continually at us, but also in terms of the medium by which it is carried. In his own piercing analysis, he argued that the very medium of TV encourages freneticism and promotes the exchange of carefully considered discourse for quick, contextless soundbites and commercial breaks. It would be naïve not to apply many of these criticisms to the infinitely scrolling feeds of flashy video thumbnails and hashtagged photos in the hands of millions of people today.
But rather than taking these criticisms and concluding that digital technology is inherently degrading, I propose instead a set of questions that we ought to be asking of the machines. What do they claim about themselves? Are those claims right or wrong? And do we use the machines in a way that is consistent with our answer? A small essay such as this is hardly sufficient to engage adequately with these questions, but I will hint at one example.
Take one of Apple’s many commercials for the iPhone X. The ad walks you through the plethora of new ways that the phone will transform your experience of the world: through its camera, face ID, and augmented reality features. At the end, shimmering purple text proclaims, “It’s never seen anything like you.” The iPhone is supposed to enhance the lived human experience. By living life with your iPhone, you will see the world more clearly, realize your identity more fully, become a better you. This is what the machine claims about itself.
Are these claims right or wrong? Does the iPhone in fact help to fully realize human nature, or does its bias tend to restrict the range of human activity? Whatever the answer, I urge that subsequent criticisms of the iPhone and the lifestyle it promotes—and any other piece of hardware or software we examine—be framed in terms of the questions we ask and expectations we have of the device in the first place.
As both a creator and a user of software, and as someone who is anxious to make digital technology more humane, I believe that the path forward starts with a careful and conservative evaluation of the claims of the devices and their makers—that they are universal tools capable of widening the human experience. Through consideration and modification of these claims, rather than a straightforward rejection of the tools themselves, we are well on the right road. Do not believe the machines.