Our Frame Problem


Human beings have always appealed to the technologies of the day to better understand themselves. Freud, for example, who came of age in a world of steam locomotives, produced a model of mind full of repressed energies and neurotic release valves. But ever since Alan Turing posed the immortal question “Can machines think?” in his 1950 paper “Computing Machinery and Intelligence,” our technology has overflowed the safe confines of metaphor. In 1958, John von Neumann’s posthumous book The Computer and the Brain speculated about the brain as a computing machine. By 1959, Herbert Simon and Allen Newell had developed the General Problem Solver (GPS), a computer program that would help give rise to the field of cognitive science. It is by now taken for granted in much of psychology and neuroscience that to understand ourselves is, at least in part, to understand ourselves computationally.

Unlike the hydraulic pump, the model of mind as a computer doesn’t seem to be going out of date. AI systems inspired by the human brain are rapidly advancing, as are neuroprosthetics built on computational principles. One indication of the power of the model is that engineering failures in AI have provoked philosophical reflection on what it means to be human. The earliest AI systems were informed by a fundamentally Cartesian model of human beings as “thinking things,” and of thinking as the kind of higher-order logical inference of which we, uniquely among animals, are capable. But researchers soon discovered that such “high-level” reasoning is comparatively easy for a computer, while the “low-level” sensorimotor faculties we take for granted are far more difficult. As Steven Pinker put it in 1994, “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard.”

Dubbed “Moravec’s Paradox,” after the roboticist Hans Moravec, this observation reveals something curious about humans’ self-conception. Faced with the imperative to distinguish ourselves from other animals, we identified ourselves with our reasoning faculties. But now that (some) such faculties are easy for AI, animalistic traits like emotions and embodiment have become far more significant to our sense of self. “The story of the twenty-first century,” writes Brian Christian, “will be...the story of Homo sapiens trying to stake a claim on shifting ground, flanked on both sides by beast and machine, pinned between meat and math.” Each automated job or vanquished Go champion shifts the ground further, perpetually unsettling our sense of ourselves and where we stand.

Although constrained forms of logical deduction are trivially easy for machines, the engineering paradigm of logic-based AI failed -- for reasons that, as we’ll see, still have something to teach us. One particular difficulty engineers encountered came to be known as “the frame problem.” From the Stanford Encyclopedia of Philosophy:

To most AI researchers, the frame problem is the challenge of representing the effects of action in logic without having to represent explicitly a large number of intuitively obvious non-effects. But to many philosophers, the AI researchers’ frame problem is suggestive of wider epistemological issues. Is it possible, in principle, to limit the scope of the reasoning required to derive the consequences of an action? And, more generally, how do we account for our apparent ability to make decisions on the basis only of what is relevant to an ongoing situation without having explicitly to consider all that is not relevant?

As the passage suggests, the philosophical essence of the frame problem is relevance. When acting in the world to achieve our goals, we somehow manage to focus only on relevant information, without having to explicitly consider and discard an infinite number of potentially useful but irrelevant facts. As I make my morning coffee, I do not need to consider and subsequently reject the possibility that an evil genius has filled my Keurig pods with poison. It is not logically impossible; it is simply irrelevant, and so does not occur to me. But a robot forced to rule out such possibilities by logical deduction would have to work through every one of these irrelevant facts -- and would never get to making the coffee.
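To make the bookkeeping concrete, here is a toy sketch in Python (the kitchen facts, actions, and names are invented for illustration, and this is not how any real planner works): a purely logical action representation must state not only what an action changes but also, fact by fact, everything it does not change -- the so-called frame axioms -- and those statements quickly outnumber the effects themselves.

```python
# A toy illustration of the frame problem's bookkeeping: alongside each
# action's real effects, a purely logical agent must also assert that every
# unrelated fact is left unchanged (a "frame axiom"). All names below are
# invented for this example.

facts = [
    "kettle_is_cold", "mug_is_empty", "pod_in_keurig", "lights_are_on",
    "door_is_locked", "cat_is_asleep", "radio_is_off", "plant_is_watered",
]

actions = {
    "press_brew_button": {"mug_is_empty": False},  # the one thing that changes
    "flip_light_switch": {"lights_are_on": True},
}

effect_axioms = []  # what each action actually changes
frame_axioms = []   # explicit statements that everything else stays the same

for action, effects in actions.items():
    for fact in facts:
        if fact in effects:
            effect_axioms.append(f"after({action}): {fact} = {effects[fact]}")
        else:
            frame_axioms.append(f"after({action}): {fact} is unchanged")

print(len(effect_axioms), "effect axioms")  # 2
print(len(frame_axioms), "frame axioms")    # 14, growing with every new fact
```

Even in this eight-fact kitchen, the “nothing changed” statements outnumber the real effects seven to one; scale the world up to everything a household robot might know, and deduction bogs down long before the first cup is brewed.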

The problem of relevance runs deep in cognitive science, touching areas as diverse as communication, problem-solving, and categorization. (How, for instance, can we know that two objects belong in a category together without considering the infinite number of traits they share that are irrelevant?) The cognitive scientist John Vervaeke has gone so far as to suggest that the capacity for relevance realization is the very essence of cognition, spanning from the lowest sensorimotor tasks to the highest expressions of wisdom. The extent to which we experience our lives as meaningful, Vervaeke argues, is centrally determined by our capacity for relevance realization at various scales. 

While this may sound like esoteric cognitive science or philosophy, we all now have an intimate sense of this problem from our daily lives. The world has always contained more information than we could process, but our age of information abundance has made this impossible to ignore. The problem we refer to as “information overload” is better conceived as a failure of our relevance realization machinery. We each face a variant of the philosopher Jerry Fodor’s conception of the frame problem: “Hamlet’s problem: when to stop thinking.” Our communication, too, has been deranged. On social media, the contexts we use to interpret communication proliferate endlessly -- and thus collapse. We find that we cannot make sense to one another and cannot agree about what’s important. These, too, are problems of relevance.

If we are going to find our way out of this mess, it will be the same way we found our way into it: in dialogue with our machines, and with the models of mind they inspire. A better understanding of relevance realization may or may not hold the key to building generally intelligent AI, but it will certainly help us live better in the world AI has created. We’ll make our way in the age of information abundance not by indulging our insatiable desire for more facts, but by skillfully learning what to ignore -- and how to focus on what matters.