Forecasting (with) AI


JUNE 27, 2019

When will AI exceed human performance across all tasks? What is the likelihood of a technology-induced catastrophe in the next century? How rapidly will automation spread? Which abilities will be automated first? Such questions are of vital importance, not only for the future of humanity, but for scientists attempting to anticipate and prepare for that future. One of the Institute’s primary goals is to help research on technology’s effects become less reactive, and accurately forecasting the changes coming down the pipeline is a key part of that mission. But technological progress is notoriously difficult to predict, and precise predictions about the growth of knowledge may be impossible in principle.

Fortunately, there now exists a wide literature on forecasting practices, thanks primarily to the work of psychologists Philip Tetlock and Barbara Mellers. Tetlock outlined some of these findings at this past weekend’s Effective Altruism Global conference, where attendees had a particular interest in forecasting risks from artificial intelligence. In the political sphere, he explained, expert judgment made with 85% confidence is correct only “about 65-70%” of the time, and the average expert performs little better than “an attentive reader of the New York Times.” Another key finding is that generalists with broad knowledge tend to outperform specialists, a fact that seems increasingly relevant in the modern economy. As two recent Atlantic articles note, automation is taking over narrow, repeatable tasks, making generalist skills (like forecasting) all the more valuable.
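Tetlock’s numbers describe a calibration gap: forecasters’ stated confidence outruns their actual hit rate. The short Python sketch below makes that concrete with made-up data; the forecast list, outcome counts, and Brier scoring are illustrative assumptions, not Tetlock’s datasets.

```python
# Minimal calibration sketch with made-up data: 20 predictions,
# each stated with 85% confidence, of which 13 (65%) came true,
# roughly the gap Tetlock describes.

forecasts = [0.85] * 20            # stated probabilities
outcomes = [1] * 13 + [0] * 7      # 1 = prediction came true

hit_rate = sum(outcomes) / len(outcomes)
gap = forecasts[0] - hit_rate      # overconfidence: confidence minus accuracy

# Brier score: mean squared error of probability forecasts (lower is better)
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

print(f"stated confidence: {forecasts[0]:.0%}")   # 85%
print(f"observed hit rate: {hit_rate:.0%}")       # 65%
print(f"overconfidence gap: {gap:.0%}")           # 20%
print(f"Brier score: {brier:.3f}")                # ~0.27
```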

But what about highly technical domains, such as AI research? Successfully conducting such research depends on specialized skills, and one might suspect that forecasting progress in the field demands the same. But as Anthony Aguirre and Andrew Critch point out in an interview with the Future of Life Institute, expertise in a given domain does not immediately translate to forecasting ability. Through his prediction platform Metaculus, Aguirre has found that “there are people that know a tremendous amount about a subject and just are terrible at making predictions about it,” while others with forecasting practice “are just much, much better at it.” This is especially true when it comes to risks; as Critch notes, “not everyone who’s an expert...in nuclear engineering or artificial intelligence is an expert in reasoning about human extinction. You have to be careful about who you call an expert.”

Critch’s claim is borne out by surveys of AI experts, which reveal wide differences of opinion even when responses are differentially weighted. Some experts doubt that superhuman intelligence is even possible, while others see it arriving within decades. Surveys also yield different results depending on question framing: in one study, experts predicted that we will achieve “human level machine intelligence,” defined as a time when “unaided machines can accomplish every task better and more cheaply than human workers,” far sooner than we will arrive at a time when “all occupations are fully automatable.” Given that another survey suggests as much as a 31% chance that human-level machine intelligence will be “bad” or “extremely bad” for humanity, improving our forecasts seems like the very least we should do.

Might AI itself help with our predictions? Perhaps, in some domains. Tetlock and Paul J.H. Schoemaker map out the comparative advantages of machine and human prediction in a useful MIT Sloan Management Review article. Where novelty is high and data is scarce, they argue, human prediction prevails; where novelty is low and data is abundant, machines triumph. In the mixed cases, high novelty with abundant data, or low novelty with little data, human-machine collaboration performs best. As Psych of Tech Institute advisor Jim Guszcza notes, “the equation should be not ‘algorithms > experts’ but instead, ‘experts + algorithms > experts.’” For the time being, the uniquely human capacity to understand the world still holds value for prediction. But there is no guarantee this will last. Before technologies help us transcend the limits of our understanding, warns Santa Fe Institute president David Krakauer, “technologies [may] make understanding irrelevant.”
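The quadrant logic, and Guszcza’s “experts + algorithms” blend, can be sketched in a few lines of Python. Everything here (the function names, the even 50/50 weight) is an illustrative assumption layered on the article’s framework, not something the article itself specifies.

```python
# Sketch of the Tetlock-Schoemaker 2x2 framework plus a simple
# expert + algorithm blend. Function names and weights are hypothetical.

def best_forecaster(novelty_high: bool, data_rich: bool) -> str:
    """Map a prediction problem onto the novelty/data quadrants."""
    if novelty_high and not data_rich:
        return "human"           # high novelty, scarce data: humans prevail
    if not novelty_high and data_rich:
        return "machine"         # low novelty, abundant data: machines triumph
    return "human + machine"     # mixed quadrants: collaboration wins

def blended_forecast(expert_p: float, model_p: float, w: float = 0.5) -> float:
    """Average an expert's probability with a model's; w is an assumed weight."""
    return w * expert_p + (1 - w) * model_p

print(best_forecaster(novelty_high=True, data_rich=False))      # human
print(f"{blended_forecast(expert_p=0.70, model_p=0.40):.2f}")   # 0.55
```

In practice, the blending weight would itself be tuned to each side’s demonstrated track record, which is exactly the kind of thing calibration data makes possible.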

Nathanael Fast