Monday, August 22, 2016

The Unpredictability of the Future of “Intelligence”


The following short essay is the penultimate chapter of my recently published book Reflections on Intelligence.


“Much to learn you still have.”
— Yoda


A crucial skill for complex goal achievement of any kind is the ability to model and predict the future. This is a simple fact, yet it reveals a significant way in which any goal-achieving system is bound to be limited, since predicting the future with great precision is impossible. We know this for various reasons, an obvious one being that there simply is not enough information in the present to represent all the salient details of the future. Any model of, say, the future of civilization has to be contained in much less time and space than the unfolding of that civilization, and hence must leave out much information.1 Therefore, models of the future of civilization are bound to contain much uncertainty, and the deeper in time we try to peer, the greater this uncertainty gets. The same point applies to any agent: no agent can model its own future path well, and must therefore be deeply uncertain about how it will act in the future.


We can see the same conclusion by keeping in mind what agents, including civilizations, in fact do: they continually seek out new information and update their worldview and plans of action based on this. This means that, in order for an agent to predict its own future actions, the agent must know its future discoveries and updates before it makes them, which is obviously impossible. This process of discovering and updating is inherently unpredictable to the system itself. And this conclusion of course applies to any such system that will ever emerge. No agent can confidently predict its own future actions.2

One can simply never know in advance what the next advanced detector is going to show, what ten times greater computing power will reveal, or what exotic experiences a novel psychedelic empathogen might induce, and therefore one cannot predict the conclusions that might follow from such discoveries.3 Even if one has a rough idea about what the next big discovery might be and what implications are likely to follow, there is still going to be some uncertainty about it, and this uncertainty accumulates quickly as we go further down the constantly branching tree of possible discoveries and updates we could make. Again, the deeper into the future we look, the more ignorant we are about the outcome, specifically about the discoveries and conclusions that will have been made at a given point.4


The fact that future goal-seeking systems will never be able to predict even their own actions is worth keeping in mind, one reason being that it removes such systems from the pedestal of near-omniscience on which they are so often placed. It makes clear that there will not be some point of discontinuity, no “knowledge singularity”, after which virtually everything will be known. Advanced goal-seeking systems will always keep on trying to make sense of the world with enormous uncertainty about the future, and in this respect, they will always resemble us as we are today. More than that, the fact that future agents far more advanced and knowledgeable than us will have great ignorance about their own future also reveals how naive it is to think that we, when staring into the deep future, should be anything less than profoundly ignorant. We are. Unavoidably so.


1 Indeed, this Russian doll problem of a system’s understanding of itself implies that no system can ever fully understand even its present function, as that would imply that the system has a self-model that contains itself. That is, understanding oneself fully would require an infinite regress of meta self-models: a model of the self-model of the self-model, and so on. The bottom line: not only can no system reliably predict its own future, there will also be relevant aspects of its own present function that it will inevitably fail to understand.
2 This conclusion holds even if a system could spend all its resources trying to predict the future, yet it should of course be remembered that agents have other tasks they must devote resources to besides modeling the future, including the many tasks necessary for the maintenance and expansion of the system.
3 After all, unexpected and extraordinary discoveries have been made before, in subjects ranging from fundamental physics to how security agencies function in practice, and these have often changed not only our outlook, but also our actions in significant ways.
4 “But what if there are no new discoveries and updates to be made?” The claim that there are no new discoveries to be made in the future is itself, assuming it makes sense in the first place, an uncertain claim about the future of discoveries and updates. In other words, we can never confidently assert that there are no new extraordinary discoveries around the corner. One can also question whether the claim is meaningful and free of contradiction in the first place, since the question of how a continual lack of new discoveries would be handled is itself an open question, the settlement of which would also involve a continual process of discovery and updating. The absence of a discovery is itself a discovery of sorts.

Thursday, August 4, 2016

New Book: "Reflections on Intelligence"



A lot of people are talking about “superintelligent AI” these days. But what are they talking about? Indeed, what is “intelligence” in the first place? I think this is a timely question, as it is generally left unanswered, even unasked, in discussions about the perils and promises of artificial intelligence, which tends to make these discussions more confusing than enlightening. More clarity and skepticism about this term “intelligence” are desperately needed. Hence this book.

Free Download: