Friday, September 23, 2016

The Tree of Ought — A (Cause) Prioritization Framework


Imagine a couple trying to decide how to set the table at their wedding. They spend all their time working out this difficult decision, making lists and drawings, and asking Google and friends for advice. Yet underneath their thoughts about the wedding table, a deeper doubt lingers in their minds: whether they really want to marry each other in the first place. Unfortunately, they have not spent sufficient time contemplating this more fundamental question, and what occupies their attention remains the wedding table.


This is clearly unreasonable. Whether it makes sense to spend time on setting the wedding table depends on whether the wedding is sensible in the first place, and therefore the latter is clearly the most important question to contemplate and answer first. Two weeks after the wedding, a divorce is filed. It was all a waste, one that deeper reflection could have prevented.


This example may seem a little weird, yet I think it captures what most of us do most of the time to a striking extent. We all spend significant amounts of energy on planning and executing ill-considered “weddings.” Rather than considering the fundamental questions whose answers determine whether more specific tasks make sense in the first place, we get caught up in specific tasks that happen to feel important or interesting.


This is hardly a great mystery when considered from an evolutionary perspective: doing whatever felt most interesting at any given time probably made a lot of sense in our ancestral environment, and no doubt still does much of the time — ignoring every moderately interesting thing that jumps into consciousness is not a recipe for success in today’s world either. The key is balance, of course, yet I believe we are largely out of balance. Too often, our focus is guided by a sense of “uh, this seems interesting” — crudely speaking, a dopamine hit — rather than by prefrontally guided considerations about which objectives are the most reasonable to pursue. When it comes to what we should be doing, we have a huge unrecognized “uh, this seems interesting” bias, a bias that makes us lose touch with the importance of thinking hierarchically about what we should be doing.


For that is exactly the point that the example above illustrates: we should contemplate the fundamental questions and decisions before we move on to the more specific ones. This is simply the only thing that makes sense, as the answers to fundamental questions are what determine which specific tasks make sense to pursue in the first place. In short, the specifics are contingent on the fundamentals. And this has significant implications: we need to pay much more attention to the fundamentals.


This is what the “tree of ought” illustrated below is all about. It is a framework for making decisions that emphasizes first things first, while highlighting that “first things first” is best thought of in hierarchical terms.


At the bottom of this tree we have our fundamental values upon which everything else rests and depends — the root and stem of the tree, one could say. From this, something slightly more specific follows, namely the causes we should pursue given our values, the branches of the tree. Finally, on these branches, we find something more specific still, namely interventions that enable us to attain success in our cause area — the leaves of the tree, if you will.


One could of course construct this hierarchical tree with any number of levels, but I find this three-level “value—cause—intervention” division useful, at least for starters. As one moves on, for instance to specific interventions, the tree keeps dividing further, as it will then again be useful to think in hierarchical terms. For any specific goal to be achieved, it will always be the case that some tasks and questions are of a more fundamental character than others, and hence more important to address first.


An illustration of this three-level tree might look like this (there can obviously be any number of causes):




So, at the most general level, this “tree of ought” asks us to consider three questions, in the following order:


1) What are our fundamental values? (Phrased in realist terms: what matters?)


2) What specific causes should we pursue given our fundamental values?


3) What interventions should we pursue given our specific causes?
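For illustration only, the three questions above can be sketched as a tiny tree structure evaluated top-down. All entries here (the placeholder value, causes, and interventions) are hypothetical, not taken from the text; the only point is that each level is considered before the levels that depend on it.

```python
# A minimal sketch of the "tree of ought": a value at the root,
# causes as branches, interventions as leaves. All entries are
# hypothetical placeholders, not examples from the essay.
tree_of_ought = {
    "value": "placeholder fundamental value",   # Question 1: what matters?
    "causes": [
        {"cause": "cause A", "interventions": ["intervention A1", "intervention A2"]},
        {"cause": "cause B", "interventions": ["intervention B1"]},
    ],
}

def questions_in_order(tree):
    """Yield decisions in the order the framework prescribes:
    the value first, then the causes, then the interventions."""
    yield ("value", tree["value"])
    for branch in tree["causes"]:
        yield ("cause", branch["cause"])
    for branch in tree["causes"]:
        for leaf in branch["interventions"]:
            yield ("intervention", leaf)

for level, item in questions_in_order(tree_of_ought):
    print(level, "->", item)
```

The ordering captures the essay's central claim: revising the root value forces a re-evaluation of every cause and intervention above it, whereas revising a single intervention leaves the rest of the tree intact.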



I think this is an extremely valuable set of questions, not least due to their ordering: it is clear that our answers to question 3) depend on our answers to question 2), which in turn depend on our answers to question 1) — or, more reverentially, Question One.


Hence, the tree of ought suggests a rather counter-intuitive idea that few seem to share, namely that contemplating fundamental values, i.e. Question One, should be our first priority. I think this is largely correct, at least if we do not have a highly qualified answer in place already. Our fundamental values can fairly be thought of as the point of departure that determines our direction of travel, and if we set off in even a slightly sub-optimal direction and keep moving, we may well end up far from where we ideally should have gone. In other words, being a little wrong about the fundamentals can easily lead us to being extremely wrong at the level of the specifics, which is why it is worth spending a lot of resources on being extremely well-considered about the fundamentals.


So contrary to what we may naively assume, the tree of ought suggests that the question concerning fundamental values is not an irrelevant, purely theoretical question that prevents us from doing something useful. Rather, it is the question that determines what is useful in the first place. And answering it is far from trivial.

Wednesday, September 7, 2016

Cause Prioritization



"Cause prioritization is the most effective use of altruistic resources."



People who want to improve the world are, like everybody else, extremely biased. A prime example is that we tend to work on whatever cause we have stumbled upon so far and to suppose, without deeper examination, that this cause is the most important one of all. This cannot be safely assumed, however.

Here’s what a typical path of “cause updating” might look like: We find out that thousands of people die every single day due to extreme poverty, and find that to be the most important cause to work on. Then we realize that humanity torments and kills billions of non-human beings every year, and that discrimination against these beings cannot be justified, which might then prompt us to focus on ending this moral catastrophe. Then we are told about the suffering of wild animals, its enormous scope, and why we ignore it, and then we might (also) work on that. Then we are convinced by arguments about the enormous importance of the far future, and then that becomes our main focus. Etcetera.

To be sure, such an evolutionary progression is a good thing. The question is just whether we can optimize it. Might we be able to undertake this process of updating in a more direct and systematic fashion? After all, having undergone a continual process of updating that has made us realize that we were wrong about, and perhaps even completely unaware of, the most pressing causes in the past, it seems reasonable to assume that we are likely still wrong in significant ways today. We should be open to the possibility that the cause we are working on presently is in fact not the most important one we could be working on.


Cause prioritization is the direct and systematic attempt to become more qualified about which causes we should prioritize the most. And the importance of such a deliberate effort should be apparent: working on the cause(s) where we can have the best impact is obviously of great importance — it means that we can potentially help many more sentient beings — and in order to find that cause, or set of causes, deliberately seeking it seems significantly more efficient than expecting to stumble upon it by chance without looking. Defying the seductive pull of optimizing specific tasks that further a given cause, cause prioritization goes a step meta and asks: given our values, what causes are most important to focus on in the first place?


I hope to explore this question in future essays. I wish to provide a rough framework for how we can think about cause prioritization, and based on this, I will try to point to important causes and questions that I think we should focus on and explore further.

Monday, August 22, 2016

The Unpredictability of the Future of “Intelligence”


The following short essay is the penultimate chapter of my recently published book Reflections on Intelligence.


“Much to learn you still have.”
— Yoda


A crucial skill for complex goal achieving of any kind is the ability to model and predict the future. This is a simple fact, yet it reveals a significant way in which any goal achieving system is bound to be limited, since predicting the future with great precision is impossible. This we know for various reasons, an obvious one being that there simply is not enough information in the present to represent all the salient details of the future. Any model of, say, the future of civilization has to be contained in much less time and space than the unfolding of that civilization, and hence must leave out much information.1 Therefore, models of the future of civilization are bound to contain much uncertainty, and the deeper in time we try to peer, the greater this uncertainty gets. The same point applies to any agent: no agent can model its own future path well, and therefore must be deeply uncertain about how it will act in the future.


We can see the same conclusion by keeping in mind what agents, including civilizations, in fact do: they continually seek out new information and update their worldview and plans of action based on this. This means that, in order for an agent to predict its own future actions, the agent must know its future discoveries and updates before it makes them, which is obviously impossible. This process of discovering and updating is inherently unpredictable to the system itself. And this conclusion of course applies to any such system that will ever emerge. No agent can confidently predict its own future actions.2

One can simply never know in advance what the next advanced detector is going to show, what ten times greater computing power will reveal, or what exotic experiences a novel psychedelic empathogen might induce, and therefore one cannot predict the conclusions that might follow from such discoveries.3 Even if one has a rough idea about what the next big discovery might be and what implications are likely to follow, there is still going to be some uncertainty about it, and this uncertainty accumulates quickly as we go further down the constantly branching tree of possible discoveries and updates we could make. Again, the deeper into the future we look, the more ignorant we are about the outcome, specifically about the discoveries and conclusions that will have been made at a given point.4


The fact that future goal seeking systems will never be able to predict even their own actions is worth keeping in mind, one reason being that it removes such systems from the pedestal of near-omniscience that they are so often placed on. It makes clear that there will not be some point of discontinuity, no “knowledge singularity”, after which virtually everything will be known. Advanced goal seeking systems will always keep on trying to make sense of the world with enormous uncertainty about the future, and in this respect, they will always resemble us, as we are today. More than that, the fact that future agents far more advanced and knowledgeable than us will have great ignorance about their own future also reveals how naive it is to think that we, when staring into the deep future, should be anything less than profoundly ignorant. We are. Unavoidably so.


1 Indeed, this Russian doll problem of a system’s understanding of itself implies that no system can ever fully understand even its present function, as that would imply that the system must have a self-model that contains itself. That is, understanding oneself fully would require an infinite regress of meta self models – a model of the self model of the self model, etc. The bottom line: not only can no system reliably predict its own future, there will also be relevant aspects of its own present function that it will inevitably fail to understand.
2 This conclusion holds even if a system could spend all its resources trying to predict the future, yet it should of course be remembered that agents have other tasks they must devote resources to besides modeling the future, including the many tasks necessary for the maintenance and expansion of the system.
3 After all, unexpected and extraordinary discoveries have been made before, in subjects ranging from fundamental physics to how security agencies function in practice, and these have often changed not only our outlook, but also our actions in significant ways.
4 “But what if there are no new discoveries and updates to be made?” The claim that there are no new discoveries to be made in the future is itself – assuming it makes sense in the first place – an uncertain claim about the future of discoveries and updates. In other words, we can never confidently assert that there are no new extraordinary discoveries around the corner. Yet one can question whether the claim is at all meaningful and free of contradiction in the first place, because the question about how a continual lack of new discoveries would be handled is itself an open question, the settlement of which would also involve a continual process of discovery and updating. The absence of a discovery is itself a discovery of sorts.

Thursday, August 4, 2016

New Book: "Reflections on Intelligence"



A lot of people are talking about “superintelligent AI” these days. But what are they talking about? Indeed, what is “intelligence” in the first place? I think this is a timely question, as it is generally left unanswered, even unasked, in discussions about the perils and promises of artificial intelligence, which tends to make these discussions more confusing than enlightening. More clarity and skepticism about this term “intelligence” is desperately needed. Hence this book.


Thursday, August 13, 2015

My Disagreements with Brian Tomasik



This post is inspired by good advice from Brian Tomasik himself, namely to make conversations public.1 I have some significant disagreements with Brian's views, many of them about the most fundamental and important questions about the world, and since these questions are so important, and because Brian is an influential guy, I thought airing these disagreements openly might be worthwhile. So here we go.


Consciousness
Many of my disagreements with Brian have common roots, and the core root is our disagreement about the nature of consciousness. Brian denies that consciousness exists. To say that this strikes me as confused would be an understatement. But not only does Brian deny consciousness, he also seems to embrace a strangely postmodernist view of it, namely that it's ultimately up to us to decide whether some process is, in Brian's words, “what we call conscious” or not. For instance, when asked whether he thought that a given kind of computer was conscious, Brian responded: "I personally wouldn't call it conscious, although it's up to you where you want to draw the line." (see: https://youtu.be/_VCb9sk6CTc?t=1h4m).


“It's up to you where you draw the line”? A similar quote: “We can interpret any piece of matter as being conscious if we want to, […]”


So Brian clearly views consciousness as something that is entirely up for interpretation. What this implies is that it is perfectly valid to draw the line at ourselves, and then “decide” that solipsism is true. Or to draw the line at humans – or Caucasian humans for that matter – and say that only we are conscious.

I reject this constructivist view entirely, although I can see that it goes well together with the view that consciousness does not exist – the view that we can just make it up, that it, like beauty, exists only in the eye of the beholder. Except the analogy to beauty makes no sense in this framework, since the beholder is the very thing being claimed not to exist.


I maintain that consciousness is a real phenomenon. It is not something that can be “decided” in or out of existence, any more than the moon can. The conscious experience of someone else is a phenomenon that arises in their head, and whatever interpretation goes on in our head does not change that. Indeed, we cannot even interpret our own consciousness away. The truth is that, ultimately, consciousness is all we know. All we ever experience are appearances in consciousness, consciousness bent and shaped so as to represent the world and guide our actions in it.


Usually we just observe the world as “the world” rather than as “the world in consciousness” – much like when we watch a movie as “what is happening” rather than “what is happening on the screen” – and from that perspective consciousness can easily be thought of as something that is not real. Yet upon a closer observation of consciousness, the naivety of this naive realism becomes clear, along with the realization that it is naive realism that is the clever illusion created by our brain – a cleverly manufactured movie appearing on a screen that we almost never notice, and whose reality some even deny.


Is Brian simply missing the screen? I don't know what it's like to be Brian, but I suspect he might be. He might even claim that there is no screen, only “information processing”, and that consciousness is all a user-created illusion. But this claim is itself derived entirely from consciousness. Without consciousness, we could not know about "information processing", or anything else, in the first place.


So what is consciousness? If I claim it is real, I must at least be able to say what it is. I must admit the question strikes me as absurd, because consciousness really is all we ever know. Anyone who wants to understand what consciousness is – i.e. understand its experiential nature, not its physical basis – is being confused, for consciousness is that within which all understanding and interpretation takes place. Hence, for anyone who understands the question “what is consciousness?”, the answer will always just be “this!” (what you are experiencing right now).


It does of course make sense to ask what the physical basis of consciousness is, but that is another question – and one that requires that we admit the reality of both the physical and the experiential in the first place (I have argued that this is what one could call the real problem of consciousness). But when it comes to trying to understand the experiential nature of consciousness, or trying to define it, what we have is a senseless effort. All one has to do is look, not ask.


That we cannot define consciousness is often claimed to constitute a difficulty for those of us who say it's real. But this is not the case. Consider space by analogy. We cannot define space either. All we can do in an effort to define space is to refer to related notions that end up being more or less synonymous: extension, dimensions, etc., all of which rest on an understanding of space in the first place. Yet does that make it reasonable to doubt the existence of space, and to assert that our brains merely tell us that space exists? Hardly. The same goes for consciousness, the existence of which, I would argue, stands as even more certain than the existence of space. After all, the only evidence we have of the existence of space is appearances in consciousness. This is not to say that consciousness is more fundamental than space in an ontological sense, but it is in an epistemological sense. When it comes to our knowledge, consciousness is that space in which all appearances appear.


Brian writes that consciousness is “a label we apply to a collection of processes.” I think this expression captures well what Brian in my view gets wrong: consciousness is not a mere label “we” apply to a collection of processes; consciousness is the “we” in which all labels appear. And yes, that “we” surely has a physical basis, a collection of processes, but that does not make consciousness unreal or a mere label.


I'm not at all hopeful that any of what I have written above will change Brian's mind in the least about consciousness, as I'm sure he has encountered something like it countless times. In fact, I don't think anything I can say will change Brian's mind. As far as I am concerned, denying that consciousness exists is like denying that physical space exists. What argument can you provide for the existence of physical space to someone who says it does not exist? (And, again, I would say that the existence of consciousness is even more primary and less disputable than the existence of space.) Indeed, in a recent interview, Brian made clear that the only thing that would convince him of the reality of consciousness would be an argument. And here lies the problem: no argument can possibly convey the reality of consciousness. Only experience can. (As is also true of the reality of space.)

Yet even though Brian will most likely not change his view, the criticism above is still worth putting forth, not least because it provides the background for most of my other disagreements with Brian.


Consciousness and Computation
On the one hand, Brian claims that consciousness doesn't exist, yet on the other hand he seems extremely certain that consciousness is the product of computation, the product of algorithms that could be implemented on my laptop.

First of all, this strikes me as incoherent. If you don't believe consciousness exists, how can you hold that consciousness is the product of computation? It seems inconsistent to deny the existence of a phenomenon and at the same time try to explain it.


Furthermore, if I were to accept Brian's constructivist view, the view that we can simply decide where to draw the line about who or what is conscious or not, I guess I could simply reject his claim that flavors of consciousness are flavors of computation – that is not how I choose to view consciousness, and hence that is not what consciousness is on my interpretation. That “decision” of mine would be perfectly valid on Brian's account, which reveals another way in which Brian's own views seem to be in obvious conflict.2


But if we admit that consciousness does exist, we can proceed to discuss its basis. Is it indeed the product of computation, a certain kind of algorithm? My view is: I don't know, and I would say that no one else does either. We do not know exactly what the material basis of consciousness is, and to make strong claims about it seems to me unjustified.


Many people assume that consciousness is just the product of computation, but this truly is nothing but an assumption, and one that leaves many mysteries unanswered. There are also many smart people who doubt that consciousness is merely a matter of computation. There is David Pearce, for instance, who conjectures that consciousness is the product of quantum coherence in the brain (a conjecture he himself calls bizarre, but it is a testable hypothesis). There is Terrence Deacon, who believes consciousness emerges from complex dynamical systems that are self-constraining, and that it is something that cannot be implemented on my laptop. There is YouTube anti-natalist Inmendham, who has argued that it is a matter of a certain kind of hardware. To simplify his argument: just like a computer will not have Wi-Fi if it does not have the hardware that supports it – regardless of what software we run on it – a computer will not be conscious if it does not have the right hardware. There is also Jaron Lanier, certainly no stranger to computer science, who does not think consciousness is computational in nature either, and who questions the meaning of computation in the first place.


So is sentience inherently “dynamical” – arising from dynamical constraints – as Deacon argues? Or is it inherently quantum mechanical, as Pearce argues? Does it require a certain kind of hardware, a certain kind of “material” or physical structure, as Inmendham argues? Are all these conditions necessary? It is not easy to exclude any of these possibilities, and just one of them would need to be right in order for the “consciousness is a certain kind of algorithm”-view to be wrong.


Again, I'm not taking any strong stance here, and I don't think anyone is justified in doing so. And that is my disagreement with Brian in this regard: his certainty that flavors of computation are flavors of consciousness seems unjustified. Agnosticism seems to me the only defensible position here.


Moral Realism

My disagreement with Brian about moral realism flows right out of our disagreement about consciousness. I can see that if you can just decide whether someone else, or even yourself, is conscious or not, and hence decide whether they have an experience that is really bad or not, then you can also decide your own oughts, and whether you want to have any at all. I think this is wrong, factually wrong.


It all springs from my simple, and in my view unavoidable, concession that consciousness is real, and so are the various aspects of consciousness we identify as suffering and happiness. And in these aspects, oughts are inherently inscribed – a proposition I would claim one can only deny in the absence of (intense forms of) either. These experiences are the facts about the world that inherently dictate oughts. (This is the very simple version; my argument for moral realism is found in my book Moral Truths, so I shall not repeat it here.)


So suffering and its inherent badness is, I maintain, a fact about consciousness, and this is not a made-up value statement, any more than the assertion that the moon exists is a made-up value statement and something we could decide to change. We cannot just decide that suffering is not bad. That is, in a nutshell, moral realism as I and many others defend it.


Brian writes:


Moral realists don't want to acknowledge that moral feelings are mechanical impulses of material organisms and so postulate some additional "objective truth" property of the universe that somehow bears on morality.

This is simply not true. I need postulate nothing beyond that suffering really exists and matters, and I fully acknowledge its physical basis. To turn Brian's claim on its head: it is the moral anti-realists who fail to acknowledge the simple and obvious truth that suffering really, truly matters.


Brian continues:

Like with a dualistic soul, this objective moral truth violates Occam's razor, and beyond that, it's unexplained why we would want to care about what the objective truth was. What if the objective truth commanded us to kick squirrels just to cause them pain?

Elsewhere, Brian writes that moral realism is "some mysterious thing." But why does he assume that? Brian consistently refers to moral truth in a sense far removed from the sense in which any secular person I know of – including people like David Pearce, Sam Harris, and myself, or the classical guys, Bentham, Mill, and Sidgwick – defends it.

Moral realism, on our accounts, merely amounts to conceding that suffering should be avoided, and that happiness should be attained. Or, in the words of David Pearce, that the pain-pleasure axis discloses the world's inbuilt metric of (dis)value, and that acting unethically is ultimately a function of ignorance – ignorance, I would add, about what others feel, and about our not being fundamentally different from others. Which means that, in relation to the specific example, you should not kick squirrels; you should help them if possible.


This leads me to another misunderstanding Brian seems to often repeat in his writings, namely that moral realism implies one right answer. That is not necessarily the case, however. For just as there can be many ways to maintain good health, or many ways to construct a mathematical proof, there can, at least in principle, be many ways to reduce suffering that are equally good.




Brian writes:


Some people believe that sufficiently advanced superintelligences will discover the moral truth and hence necessarily do the right things. Thus, it’s claimed, as long as humanity survives and grows more intelligent, the right things will eventually happen. There are two problems with this view. First, Occam’s razor militates against the existence of a moral truth (whatever that’s supposed to mean). Second, even if such moral truth existed, why should a superintelligence care about it?


This strikes me as weird, for two reasons. First, how can Brian say that Occam's razor militates against something when he does not know what it “is supposed to mean”? Dismissing something one does not understand makes little sense. Second, Occam's razor does not militate against moral realism, nor could it, since moral realism is not an explanation; it is a fact.


Consider by analogy physical reality: it is not that physical realism explains anything, or is needed to explain anything, it is that the physical world we observe (in our conscious experience, mind you) is evidence of physical reality. Physical realism is not an explanation, it's a fact we observe. Ditto for moral realism: reducing suffering really matters, and this is an empirical truth, one that can only be denied in its absence.


As for Brian's question about why a superintelligence would care about the truth, it seems obvious to me that caring about the truth is exactly what intelligent beings do. Being rational means acting in light of the facts, and hence anything remotely worthy of the name "superintelligence" (I don't like that term, by the way, nor the term "AGI" – both assume far too much) would care about the facts. If not, all you have is superstupidity.


Brian goes on:


There are plenty of brilliant people on Earth today who eat meat. They know perfectly well the suffering that it causes, but their motivational systems aren’t sufficiently engaged by the harm they’re doing to farm animals. The same can be true for superintelligences. Indeed, arbitrary intelligences in mind-space needn’t have even the slightest inklings of empathy for the suffering that sentients experience.


I disagree. To echo Pearce's point about ignorance, people who eat other beings sure are ignorant: ignorant about the suffering they are contributing to.3 The claim that they know "perfectly well the suffering it causes" is, I think, patently false. They know nothing. They may have some abstract notion of "the suffering of animals", but do they really know what it is like to be a chicken boiling alive in a slaughterhouse, or a fish who is pulled up from the bottom of the ocean by a hook in her lips? Hardly, and I'm confident that anyone who did have that experience would both view things and act rather differently. Indeed, none of us know what these horrors are really like. We are all deeply ignorant. Vague abstractions are all we have to act on, but with these abstractions, we can reason our way to the conclusion that something significant is going on here. And most people are probably able to perform this reasoning; they have just not done it yet.


After all, there have also been countless brilliant people who did not realize other obvious truths, for instance Newton's laws of motion, yet that does not bring their truth status into question. Sometimes an obvious truth needs to be pointed out clearly before we see it, even when everything we experience is a testimony to it.


Brian's last point about empathy is, I would argue, less relevant than he thinks. For instance, I have encountered a psychopath who did not feel empathy, yet who had realized that suffering matters and therefore advocated for veganism on that ground alone, by virtue of reason. “Suffering is bad for me, and I don't want to be killed, and since I'm not fundamentally different from those beings, I shouldn't contribute to the suffering or death of those beings. Anything else is logically inconsistent: a failure to treat like cases alike.” Empathy is not needed; knowing that suffering is bad can be enough (I'm obviously not saying empathy is not generally useful; that can hardly be doubted).


So I don't buy Brian's note on “superintelligence” above. Any superintelligence worthy of the name would, I would claim, not needlessly harm other beings, which is not to say that I think the AI we develop will automatically, or necessarily at all, converge toward benevolence. Indeed, I doubt AI will even converge toward our abstract conception of "AGI", but that is the subject of another discussion.




Brian's position on ethics seems to be that suffering matters, but not in any true sense. It merely matters to him, and other people like him, and on that basis he tries to minimize it. This I don't understand at all.
If I held Brian's view, I don't see why I would aim to reduce suffering. If it is merely my personal preference that suffering matters, and not a fact – if I believed that suffering is not truly bad – then why would I try to reduce it? I wouldn't. But I cannot possibly make myself believe that suffering is not truly bad and worth avoiding, not just for me or according to me, but for everyone, at all times.


Again, it comes back to our different views of consciousness. I consider it no more suspect to talk about facts of consciousness than to talk about facts of physics. And I don't understand how it could be; after all, to repeat this point once more, every fact we know about physics ultimately appears in our consciousness.


The Future of Suffering
I also have disagreements with Brian's views when it comes to the future of suffering. He appears certain that we will soon give rise to a suffering explosion where we will spread suffering into outer space. As in the case of digital sentience, it is not that I think Brian is necessarily wrong – and the risk he is highlighting is certainly more than worth taking seriously – but I just don't think his great certainty is warranted.


Brian's certainty on this matter rests primarily on his certainty that we can and will create sentient digital computers, and, as argued above, I don't think this certainty is justified. Brian thinks we will build computers full of digital suffering, but whether we in fact will obviously depends, among other things, on whether digital suffering is possible in the first place. We don't know, and I don't think we should claim to.


The risk Brian is outlining is certainly one that we must take seriously, and his writings about it comprise an invaluable input to the broader discussion about our future, a discussion that too often seems to assume that if only we survive various existential risks, the future will almost surely be wonderful. He is certainly right that this is by no means guaranteed. Yet Brian, for his part, seems to overlook that a suffering explosion is not guaranteed either. He criticizes those who believe in a bright future for being under the spell of optimism bias, but one could level a similar criticism against Brian's view: that his view is distorted by a pessimism bias. It is not clear that Brian's vision is the more realistic one compared to, say, David Pearce's, which Brian criticizes. It is not easy to predict what our human-machine civilization will do and become in the future.


If we become able to abolish suffering, as Pearce envisions, then why wouldn't we? This 'if' is of course a big one, yet Pearce has many real-world examples that support the feasibility of his vision: people who are unable to feel pain, people who are unable to feel fear, and people who are always very happy. Not being able to feel fear or pain is of course not always adaptive in today's world, but with a better understanding of how these inabilities to feel suffering arise – not just at the level of genes, but also at the level of the brain – might we not be able to hack our minds so as to become super happy and, if not entirely free of bad sensations, then at least to reduce them to a negligible level? It seems to me quite plausible that we will. The field of mind engineering seems to be at about the same level as aerospace engineering was in the 16th century; all but a few of us are profoundly ignorant about the possibilities.


Also, in contrast to Brian's pessimistic view, couldn't one argue that the pain-pleasure axis is the core, underlying driver of our society, albeit in a crude way, and hence that it will gradually guide us away from suffering? One may object that our future is rather determined by market forces, but couldn't one argue that the pain-pleasure axis indeed is the ultimate driver of these, and hence that markets gradually will work us in that direction: away from pain and misery, toward greater happiness, at least for those actively participating in them, which in the future may be an ever-increasing fraction of sentient beings? I don't know, but I don't think the strong causal power of the pain-pleasure axis should be ignored in our thinking about the future.


Another thing I disagree with Brian about is the strategy he proposes with respect to future suffering. As he sums it up:


We should explore ways of increasing empathy that also expose the true extent of suffering in the world, e.g., information about factory farming, brutality in nature, and unfathomable amounts of suffering that may result from space colonization.


Again, while I'm all in favor of promoting empathy, I don't believe empathy is the most important thing to spread. Rather, it is reason. The abolition of slavery and the acceptance of gay marriage did not happen by virtue of empathy first and foremost. Rather, they happened by virtue of ethical arguments – the realization that discrimination is unjustified. With the latter part of the quote above, I could not agree more: we have far too rosy ideas about the world – that “nature” is an acceptable circumstance to live under, for instance – and we certainly should expose that. But it is reason rather than (immediate) emotions that should guide our efforts. Rather than exposing suffering and making people feel its badness, I believe we should expose suffering and make people realize its badness, thereby tapping into prefrontal capacities rather than relying mainly on our unreliable limbic reactions.4


I must also take issue with this statement from Brian:


Thus, it seems that discussing animal suffering in the right way can serve as a reality check against excess optimism, in stark contrast to promoting a Hedonistic Imperative vision that confuses the marginal impact of our efforts to make the world better with the overall probability that the world actually will become a delightful place for all.


Concerning a “Hedonistic Imperative vision”, the vision outlined in HI – that the abolition of suffering is likely to happen – is not one that rests on the success of activists and authors like ourselves. Indeed, Pearce is somewhat of a pessimist when it comes to the power of ethical arguments to change things. The reason he views it as realistic is that various forms of technology will make it possible to gradually phase out suffering, that these technologies will become widely available, and that people will not need to be persuaded to embrace these technologies once properly developed. The marginal impact of any one person is certainly not overstated in HI or the vision it presents.


I would add, concerning Brian's view that any single person is unlikely to make a difference that matters much in the bigger picture, that while this view is true, it is also true that small changes today can result in big differences in outcome tomorrow, and hence that the motion of our tiny wings does at least have a small chance of actually making a major impact. By analogy, had things been just a bit different on this planet four billion years ago, there would probably have been no life here at all – after all, all life on Earth has arisen from the same original life form, a single tiny molecule. Who says we are not at a similarly decisive point today?


Analogously to the argument that reducing existential risk by even a tiny bit has enormous expected value (an argument, it's important to stress, that Brian does not buy given his view of the future; I also happen to think that it is at least overstated), one could argue that the minuscule chance we have of making a major impact on how things unfold is one that we should aim for. In my own case, at least, aiming for this does not prevent me from succeeding in making a more modest impact; if the former fails, I will most likely still succeed at the latter.


We obviously should not estimate what the future is likely to be based on our individual impacts, but 1) I don't see the HI vision as doing or promoting that, and 2) not doing so does not prevent us from aiming high, knowing full well that one's individual efforts most likely will not move things a single nanometer in the bigger picture.


Brian does not only worry about digital sentience, however. Even if that is not possible, Brian still thinks we are most likely to spread suffering rather than reduce it, for instance by spreading Earth-like nature and the suffering it contains. I don't consider this likely, because it seems to me that what rules the day with respect to people's views of nature is status quo bias, which works against spreading nature. That there are some people who see the spread of nature as an ethical obligation no more makes that a likely scenario than the fact that there are some people sincerely trying to blow up Earth makes that likely. So I don't share Brian's pessimistic view on this issue. It's a serious risk for sure, but a very unlikely one, it seems to me.


In sum, I think Brian's certainty in an impending suffering explosion is too great, although I do fear he could turn out to be right. It seems to me that when it comes to where on the hedonic scale our future will play out, it is appropriate to apply a very wide probability distribution, especially given that we don't yet have a deep understanding of the basis of hedonic tone and the extent to which it can be hacked. Brian seems to have confined his distribution to an area that lies far below basement level. An unjustified narrowness, it seems to me. [Edit 21-08-15: I retract this remark, as I realize it is not an accurate representation of Brian's views. See the response section below.]


The Importance of Veganism
The last major disagreement I shall air here is about the importance of veganism. It is not that Brian opposes veganism by any means, but he has certain reservations about it that I do not. Brian fears that veganism might lead to an increase in environmentalism, in the sense of nature conservation, and hence that it could risk increasing suffering in nature. I certainly understand the worry, and agree that conservationism is potentially among the most harmful ideas in the entire realm of human thought (see my book on speciesism). But I don't think veganism leads to more of that. I think it leads away from it.


I understand where Brian is coming from: he has had discussions with vegans, and the majority, I would guess, have defended leaving the hellhole that is nature alone. Yet it would be a fallacy to claim that because most vegans support leaving nature alone, the spread of veganism will lead to greater support for leaving nature alone. “Non-interference” seems to be the predominant view among both vegans and non-vegans, and thus its high prevalence among vegans does not say much.


The core issue here is suffering in nature, so I think it's worth asking the question: who is most likely to care about suffering in nature, someone who is vegan for ethical reasons or someone who is not? It seems clear to me that the answer is the former, which is of course not to say that vegans will be less likely to favor “letting them be”, given that they believe that is best for them, as most probably do. But it seems quite clear to me that vegans are far more receptive to the message of the seriousness of suffering in nature – it often just needs pointing out.


For example, among the few vegans with whom I have spoken about the issue, I have only met agreement: yes, we should help beings in nature if we can rather than leave them to suffer. I have met nothing like it among non-vegan friends, and that is a much larger sample. In the latter group, the idea of taking suffering in nature seriously is simply seen as absurd, almost at Mormonism level.


But this should not be surprising. What veganism amounts to is the rejection of the exploitation and killing of non-human beings; in other words, it is to put a bare minimum of moral concern for non-human beings into practice. And genuine moral concern for non-human beings is exactly what must be established in order for us to take the suffering of non-human beings seriously. I maintain that there is no way around veganism for the establishment of such moral concern.


Consider the matter in a human context: can anyone who consumes products obtained from a group of enslaved humans who will eventually all be killed claim to have genuine moral concern for these people, much less be able to think clearly about them in ethical terms? For example, can someone who wears the skin of killed humans who belong to a certain group of people claim to take that group of people seriously in moral terms? It is clear to me the answer is "no", and the same holds true in the case of non-human beings.


Anything short of veganism amounts to the normalization and trivialization of the exploitation of non-human beings, and a perpetuation of our profound moral confusion with regard to them. This is the kind of confusion that makes us say that it might be good to raise and kill non-human beings in order to reduce suffering in nature – something we would never be tempted to propose in a human context, and which I strongly believe is not a way to reduce suffering in nature, but only a way to perpetuate moral confusion – or the kind of confusion that holds that suffering in nature is not a problem. This latter confusion plagues a great proportion of vegans too, of course, but I dare claim that 1) that proportion is most likely no higher than among non-vegans, and 2) vegans will be in a much better position to step beyond this confusion once they are presented with the facts of the matter.


What we need in order to reduce suffering in nature is genuine moral concern for non-human beings, and this can never be fostered as long as we support the exploitation and killing of them ourselves (again, think of the analogous scenario in the context of humans). As I wrote in my book on speciesism, overcoming speciesism is not merely a matter of rejecting or adopting a certain propositional belief. Rather, it is about getting beyond a deeply entrenched attitude. And simply saying and thinking that we care about “animal suffering” and want to reduce it is not remotely enough to get us beyond that attitude. We must take the products of their suffering and death out of our mouths and off our bodies too. That is the first step we must take to get beyond that attitude. The very first step away from speciesism.


So, in a nutshell, this is my disagreement with Brian on this issue: I see veganism as a moral minimum and as a necessary, indeed inescapable, first stepping stone on the way to higher moral ground. If we don't get this right, and don't fundamentally change our attitudes, we stand no chance of advancing much.


I could keep on writing about how my views differ from Brian's, but I think this is more than enough for now. I welcome Brian's response to any of my comments above, and hope he will keep on challenging and enlightening me with his writings.


1 I actually realized another good reason to make conversations public while writing this: it gives one an incentive to take more time, think more deeply, and elaborate more than one would in an email.
2 It is worth noting that this hints at a recurrent theme in my disagreements with Brian: he holds what can be characterized as a subjectivist/constructivist position on many things, whereas I reject that (in those three parts that build on top of each other; more elaboration is found here) – my views can be characterized as more objectivist/realist. (For a short discussion on the terms 'objective' and 'subjective', see: http://magnusvinding.blogspot.dk/2014/04/is-it-objective-or-subjective-clearing.html)
3 And it is worth noting that intelligence and experience/learning/knowledge are inextricably connected.
4 Perhaps Brian's view does not support this, since there ultimately is nothing more reasonable about minimizing suffering than there is about, say, maximizing paperclips on his view, and hence that empathy is all we have to rely on in the end when it comes to convincing people to reduce suffering. I doubt he would say that, but I don't know. As an empirical matter, however, it seems clear to me that reason is the prime force of moral progress, not empathy.




Response [21-08-15]

Brian wrote a response to my post above. Here is my response to his response (the text with grey background is from his response).

Magnus: When it comes to our knowledge, consciousness is that space in which all appearances appear, the primary condition for knowing anything

If you believe that philosophical zombies are possible, then you'd agree that they can know things without being conscious. Even if you deny that zombies are possible, you might believe it possible that a sufficiently intelligent computer system could "know things" and act on its knowledge without being conscious.

The point I was making in that passage was about our knowledge: the only evidence any of us have for the existence of physical space, or any aspect of physical reality, is appearances in consciousness ultimately. So how can one deny the existence of consciousness based on an appearance in consciousness? That is what I don't understand.

As for the question: first, I don't believe in zombies or the dualistic notions they invoke. Second, I would deny that a zombie-entity, i.e. something that is not conscious, knows anything in the sense I referred to above. Sure, something non-conscious, like a book or a search engine, can store and present information, but that does not mean it knows anything. To take suffering as an example, it would not matter how much information about it one can retrieve and "act on"; if one has not experienced it, one does not know it (and thus cannot really be said to act on it, at least not on an understanding of it).

Magnus: denying that consciousness exists is like denying that physical space exists.

Yes, both are matters of faith, but I find that the intuitive appeal of a physical ontology is stronger than that of a physical + mental ontology or a purely mental ontology.

I disagree. That physical space exists and that consciousness exists are both claims made based on observation, not faith. The fact that a claim is based on direct observation rather than derivation does not make it invalid or "based on faith." The values of physical constants are also ultimately observed, not derived – one can relate these constants to each other, yes, but their values are ultimately based on measurement – yet I hope we can all agree that those measurements, like the speed of light, for instance, are not a matter of faith. No more is the belief in the existence of the physical based on faith. We believe in it based on observation.

I would argue that no faith is needed in our epistemology. If you want to say there is a leap of faith anywhere, the leap is to assert that we can at all trust our observations in the first place (I here use the term 'observations' in the widest sense: all we experience). Yet this is as uncontroversial as can be, not least because we actually cannot doubt it. For in order to doubt that one can at all trust one's own experience, one must trust one's own experience – at least the part of it that says that you doubt, or should doubt, your own experience. So we really have no choice: we can trust at least some aspects of our experience. That is the assumption we cannot not make (do try).

The question then becomes which observations we can trust and which we cannot, but that is a matter that is to be sorted out – indeed only can be sorted out – with more experience, not faith. “Is what I am seeing now an illusion or not?” Only more observations, if anything, can settle the matter. “Which of these two conflicting theories or methods is the more reliable?” Only more observation – for instance, looking closer at our data or obtaining new data – can settle the matter. But faith, I would argue, is not needed, apart from faith in the literally undoubtable.


As for the point about physical + mental ontology, I don't buy any such dichotomy. We agree there is one world; we just disagree about the reality of, and how to think about, certain aspects of that world.

Magnus: If you don't believe consciousness exists, how can you hold that consciousness is the product of computation?

Denying consciousness is a strategic way to explain how my view differs from naive consciousness realism. I don't deny that there are processes that deserve the label of "consciousness" more than others. In this section, I draw an analogy with élan vital. To distinguish my position from vitalism, I would say that "life force" doesn't exist. But it's certainly true that organisms do have a kind of vital force of life to them; it's just that there's no extra ontological baggage behind the physical processes that constitute life force, and it's a matter of interpretation whether a given physical process should be considered "alive".

Your analogy to vitalism does not support your view about digital sentience, though. In the case of vitalism, we now see how the complex features of life can be explained by underlying, lawful mechanisms, and, crucially, we know which mechanisms (more or less; many mysteries remain, of course, but we do understand the core principles). Crudely speaking, we know it is DNA rather than a life force that is at work. When it comes to consciousness, I agree we can tell the same story at the most basic level: it is delicate mechanism rather than a mysterious soul that is at work. But, critically, we do not know the necessary and sufficient “requirements.” We have not yet discovered the DNA of the soul. You assert that what we can both agree to call consciousness, whether we think it deserves to be considered real or not, is a product of a certain algorithm that could be implemented on my laptop. I am open to your conjecture, but I don't think it is more than that.

Magnus: Furthermore, if I were to accept Brian's constructivist view, the view that we can simply decide where to draw the line about who or what is conscious or not, I guess I could simply reject his claim that flavors of consciousness are flavors of computation – that is not how I choose to view consciousness, and hence that is not what consciousness is on my interpretation. That “decision” of mine would be perfectly valid on Brian's account

Yes, though I would oppose it insofar as I don't like that decision.

So if someone were torturing a child based on the decision that the child is not experiencing conscious suffering, you would say that there is nothing really wrong with that person's view? That person is not wrong about whether the child is actually suffering, but only makes a different interpretation than you do, an interpretation that is ultimately just as valid as yours? And is the fact that you don't like this interpretation the strongest thing you can say against it?

Arguments over how to define "consciousness" are like arguments between liberals vs. conservatives over how to define "equality".

I disagree. As I noted, I don't think there is any need to define consciousness, nor is it possible. Again, consciousness is just this. We all know it, and, in a very real sense, we actually don't know anything else.
A different issue, then, is what the physical basis of different states of consciousness is, and that, I would claim, is an open empirical question; certainly not purely a matter of interpretation.

Magnus: I need make no extra postulate than that suffering really exists and matters,

I don't know what it means (ontologically) for something to "matter".

To my mind this is tantamount to not knowing what it means for something to be red, or for a number to be greater than two. I can only point to the experience of red and say that that is what “red” means, or point at a number line and say that all the numbers "to the right" of two are greater than two. Similarly, I can only point toward the sensations below hedonic zero and say that avoiding those really matters. I cannot explain what it means for something to matter. Like redness, “mattering”, or simply "value", is inherent to those experiences. And here we get back to the issue of the reality of consciousness. I say some things really do matter, not according to me, but inherently.

Magnus: Moral realism, on our accounts, merely amounts to conceding that suffering should be avoided, and that happiness should be attained.

Taboo "should".

I am not sure what you mean here. “We shouldn't say 'should'”? That would be a self-contradiction. “You should unpack what you mean”? I cannot unpack the term 'should', as I see it as a brute concept that cannot be conveyed by anything but synonyms, like 'ought' or 'normative'. And we indeed cannot avoid 'shoulds' or normativity. For example, by engaging in debate and trying to reason we have already embraced countless “shoulds”. To be rational is to embrace certain values, certain 'shoulds' (e.g. you should be consistent, you should follow evidence), so it is rather self-defeating to “rationalist taboo” 'should' or normativity. By doing so, one removes the very foundation for any form of rationality.

We have no choice but to embrace 'shoulds' – even to claim that 'shoulds' have no validity, one must rely on the validity of certain 'shoulds' (e.g. what something should live up to in order to be valid). So my question is: how come we can accept the validity of one set of 'shoulds', those of logic and empiricism, but not the validity of the 'should' that we should reduce suffering? I would claim the validity of the latter is on just as firm ground.

For a defense of the 'should' in my quote above, I again refer to my book Moral Truths (especially the third and fourth chapter).

Magnus: He appears certain that we will soon give rise to a suffering explosion where we will spread suffering into outer space.

That seems very likely conditional on space colonization happening.

It also depends on what the world will look like when it happens. If the world will look much like it does today, I agree, but I am not so sure that will be the case. It also depends on whether digital sentience is possible (although I recognize you expect much suffering regardless; again, I'm not so sure).

Magnus: He criticizes those who believe in a bright future for being under the spell of optimism bias, but one could level a similar criticism against Brian's view: that his view is distorted by a pessimism bias.

I think space colonization would also result in a "happiness explosion", with the expected amount of happiness as judged by a typical person on Earth plausibly exceeding the expected amount of suffering. But I think we should give special moral weight to suffering, which means that the potential explosion of suffering beings isn't "outweighed" by the potential to also create more happy beings.

We agree about the primacy of suffering. I should have been more specific: what I referred to by “those who believe in a bright future” in the passage above was those who believe that we can abolish suffering. It is not clear to me that this view suffers any more from optimism bias than the view that the future will contain an immense number of computers full of suffering is tainted by pessimism bias. There are big uncertainties involved in both. In the latter, there is the issue of digital sentience, the possibility of which I would still maintain remains an open question.

Magnus: If we become able to abolish suffering, as Pearce envisions, then why wouldn't we?

If we became able to feed ourselves without killing animals, then why wouldn't we?
If we became able to distribute wealth more evenly to prevent homelessness, then why wouldn't we?
If it were possible to eliminate wars by being more caring for one another, then why wouldn't we?

Two preliminary remarks. First, it is worth keeping the meaning of “becoming able” in mind in this context. What I meant by becoming able to abolish suffering in the remark above is its becoming technically possible – i.e. that we know how to do it. Presumably you meant something along the same lines in your questions above. The second, closely related point is that from the time something becomes technically possible, some time must pass until it is implemented in practice.
The first question above refers to something that is now technically possible, so why hasn't it happened? It seems to me the answer is time. Imagine someone in the year 1940 asking: if we can eradicate smallpox, why don't we?

Well, first, the resources probably weren't there to do it any time soon. Second, we did indeed eradicate smallpox completely, but it took some decades. In 1980, the single biggest killer of the 20th century was declared eradicated.

To answer each of the questions above in turn:

Why do we exploit and kill beings in order to eat them when there are alternatives?
It seems to me we are gradually moving away from that, by virtue of both ethical and technological progress. Why is it not happening faster? Partly because eating other beings is still the easiest and most comfortable thing for most people to do, and because changing the attitudes of everyone simply cannot be done in a few years. Unfortunately, progress takes time.

If we became able to distribute wealth more evenly to prevent homelessness, then why wouldn't we?
Arguably, ending homelessness does not fit the model of something we could do today but choose not to, and it is certainly not a problem that can be solved by means of wealth distribution, at least not on its own. However, I suppose homelessness is not the point of the question; the point is rather: why do we allow poverty in the world when we have so much wealth?

This is another question whose time is running out. We are getting exponentially richer over time, both at the global level and at the level of most individual nations, so I think it is only a matter of time until everyone will have a guaranteed basic income. Either way, poverty is on its way out.

If it were possible to eliminate wars by being more caring for one another, then why wouldn't we?
I don't think ending war fits the model of something that we could do and know how to do, but choose not to. “We”, as in rich democracies, are doing our best to end war, as wars are simply too costly. As for ending wars by being more caring for one another, I don't think that is realistic. Yes, if every person on the planet fell into some magic MDMA potion Obelix-style, wars would probably become a lot harder to initiate. But even if it were possible to bring such empathy and care about, I would still put my trust in education, democracy and trade over “caring” in order to ensure peace.

To return to the main theme, whether we will end suffering, Pearce's view is that the eradication of suffering will eventually become as easy as the eradication of smallpox (and I don't wish to understate the difficulty of the latter, but it did become doable). And if he turns out to be right, then I repeat my question: why would we not eradicate suffering?1


Magnus: One may object that our future is rather determined by market forces, but couldn't one argue that the pain-pleasure axis is indeed the ultimate driver of these forces, and hence that markets will gradually work us in that direction: away from pain and misery, toward greater happiness, at least for those actively participating in them, which in the future may be an ever-increasing fraction of sentient beings?

I'm doubtful that non-human animals will begin holding economic wealth and making trades with humans. Advanced digital intelligences probably will, but lower-level "suffering subroutines" will probably not. At present, and plausibly in the future, most sentience relies on altruism in order to be cared about by powerful agents.

My point was that humans comprise an ever-increasing fraction of the sentient beings on the planet. Either way, what does seem undeniable is that there are powerful forces pushing us toward the discovery of treatments against suffering in its various forms. And if we discover effective treatments of that kind, it may eventually require just a trivial amount of altruism to relieve the suffering of other beings, cf. the eradication of smallpox.

Also, Robin Hanson's Malthusian scenario is one example where actors in a market economy may be driven into potentially miserable lives despite being able to buy and sell goods as rational agents.

I don't find Hanson's em scenario likely, even granting the assumption that digital sentience is possible. But even if something like Hanson's scenario were to happen, this would not need to be bad, as poor minds would not need to be unhappy. Being unhappy about being poor is a Darwinian reaction to low status that another mind design would not need to have. And such emotions are not even necessary for Darwinian humans. Contemplative practice, for instance, seems to be enough to make at least some people able to be perfectly content with owning nothing, so what couldn't direct mind editing do?

To think that the predispositions of the human mind that are universal among Darwinian humans will be universal in all minds is a big mistake. I don't understand why Hanson assumes that future minds will be very much like the minds around today at all. If we can upload and edit our minds (and editing would indeed be necessary, for how else would one make a mind interact with any kind of virtual body and world – I would very much like to see how that problem can be solved), why would we keep the mind design that evolution happened to throw up?

On a more general note, I share David Pearce's view that the most significant difference between our descendants and us will be found, not in the world around them, but in the radically different minds they will have.

Magnus: Even if [digital sentience] is not possible, Brian still thinks we are most likely to spread suffering rather than reduce it, for instance by spreading Earth-like nature and the suffering it contains. I don't consider this likely, because what rules the day with respect to people's views of nature is status quo bias, which works against spreading nature.

That might be true regarding directed panspermia (we can hope), but at least when it comes to terraforming, the economic incentive would be very strong (in futures where digital intelligence doesn't supplant biological humans). People have no qualms about starting farms in an area (disrupting the status quo) to feed and clothe humans. Likewise when it comes to terraforming other planets so that they can eventually support farms.

Establishing farms on other planets would be very expensive. I suspect farms as we know them will be outdated technology by the time we are able, technologically and economically, to establish farms with sentient life on other planets, and such farms would be a non-starter ethically as well. But the risk is worth minimizing, of course, which just recommends continued anti-speciesist advocacy, as far as I can tell.

Magnus: It seems to me that when it comes to where on the hedonic scale our future will play out, it is appropriate to apply a very wide probability distribution [...]. Brian seems to have confined his distribution to an area that lies far below basement level. An unjustified narrowness, it seems to me.

It depends whether you're thinking of "happiness minus suffering" or just "suffering". I claim that the "suffering" dimension and the "happiness" dimension will both explode if space is colonized. I'm more agnostic on the sign of "happiness minus suffering" (relative to a typical person's assessments of those quantities), but I don't think "happiness minus suffering" is the right metric for whether space colonization is good, since suffering has higher moral priority than happiness.

Yes. I withdraw my confused remark above. I agree that we should focus on the suffering created by a given set of actions minus the suffering that would follow in the absence of those actions – that this is the main ethical concern (which then implies that any risk of our giving rise to suffering, and how much, must be viewed in light of this calculation). I would still say, though, that the point about applying a wide probability distribution holds true: how much suffering we will give rise to in the future is highly uncertain, as is how much would follow in the absence of a human future, which renders the difference between the two “doubly uncertain.”

You state a 65 percent confidence that we will cause more suffering than we will prevent; I'm wholly agnostic. It seems difficult to make qualified estimates given the enormous uncertainties – e.g. is digital sentience possible? To what extent will we continue to progress ethically? Or will we regress? Is suffering an eradicable ailment? Will civilizations emerge elsewhere in our galaxy? I don't think these are questions to which we can give confident answers, and hence I don't think strong positions about these specific questions, or about how much suffering we are likely to cause in the future, are warranted. 65 percent confidence that we will cause more suffering seems too strong in my view. What am I missing?

Magnus: “Non-interference” seems to be the predominant view among both vegans and non-vegans

Well, the rate of conservationism is higher among vegans than in the general population (since vegans tend to be liberal, and liberals tend to support ecological preservation).

But that is irrelevant with regard to whether veganism leads to increased conservationism. Is it a selection effect or a “treatment” effect? I very much doubt there is much of a treatment effect going on, if any (see below for some important clarification). It seems to me wrongheaded to withhold our efforts toward spreading veganism because we worry that it may lead to increased conservationism, especially when we have no evidence that veganism leads to increased conservationist attitudes, and when it has obvious benefits both for the beings we will otherwise exploit and for our attitudes with regard to non-human beings in general.

We should be careful to distinguish spreading veganism from supporting certain organizations that are dedicated to spreading both conservationism and veganism. I agree that many of the things that such organizations and “rainbow rhythms vegans” promote are problematic, even positively harmful, the most obvious being an idyllic view of nature and the conservationist position that follows from it. But what I am arguing for is the importance of veganism itself, not of particular organizations or of quirks shared by a large proportion of those who are vegan today. There is a difference, and you seem to overlook it in your analysis of the matter.

Magnus: And genuine moral concern for non-human beings is exactly what must be established in order for us to take the suffering of non-human beings seriously. I maintain that there is no way around veganism for the establishment of such moral concern.

Many transhumanists aren't vegan but care about wild-animal suffering.

And it is indeed inconsistent to care about non-human beings while supporting their exploitation and killing at the same time. Generally, I am sure most of us want to see the suffering of non-human beings reduced, at least if asked about it in a multiple-choice query. The problem is just that we don't act on it at all (largely due to the “happy meat” delusion – that most insidious feat of mental acrobatics that tells us that we can exploit and kill beings, yet “treat them well” and have genuine moral concern for them at the same time).

That, in a nutshell, is why we need a change in attitude, and not merely propositional beliefs à la “we should minimize suffering in nature.” It becomes like the Founding Fathers, who promoted personal freedom yet still owned slaves. As long as they did the latter, they did not really do the former. And merely saying one is in favor of personal freedom, or of “reducing suffering in nature”, is not enough and by no means an end in itself. Just as releasing one's slaves (should one have any) must be considered the minimum requirement for being in alignment with the promotion of personal freedom, embracing veganism – that is, not directly supporting our deliberate abuse and killing of non-human beings – is the very least one must do in order to act in alignment with genuine moral concern for non-human beings. I still maintain veganism is a minimum requirement: for being ethically consistent, for standing the slightest chance of being able to think clearly about non-human beings and our obligations toward them, and for living up to those obligations.

Moreover, I am sure that the non-vegan transhumanists you refer to would not become worse advocates by discontinuing their support of practices that deliberately exploit the beings they profess to care about. On the contrary, cf. the point about the cultivation of more ethically sane attitudes, I am sure they would gain a much clearer view of the issue, and would be able to care much more deeply about suffering in nature. It is hard to put into words just how harmful our normalization of the exploitation and killing of other beings is.


1 A point worth adding is that, as David Pearce often points out, suffering may comprise one of the greatest underlying existential risks, as those who suffer are those who are most likely to try to destroy the world, which adds yet another incentive for society to try to reduce suffering. And not just the suffering of humans; the "only one solution" group, for instance, wants to blow the world up primarily because of the suffering of non-human beings. Not that a small group of people are likely to blow the world up, of course, but they sure can cause great damage trying.


YouTube Conversation [20-12-15]

Brian and I recently had a conversation in which we discussed some of the subjects covered above, and more. You can watch it (in two parts) on Brian's YouTube channel: https://www.youtube.com/user/Prioritarian

Below are a few remarks I wrote to Brian (as a YouTube comment) after the exchange.


It was a pleasure to finally discuss the fundamental issues directly – consciousness, epistemology, truth, etc. – although I’m sure we both felt a bit frustrated too. These subjects really are difficult to talk about, especially given the enormous difference in our perspectives on these matters, a difference that is in fact the very issue at stake. And it is worth noting, I think, that this difference in perspective also gives rise to another big difference between us, since our respective views of these fundamental matters also determine how we view this conversation altogether.

You seem to hold that these subjects are not among the most important ones we could be discussing, and that little rests on our disagreement (in practical terms; correct me if I'm wrong). I disagree. I don’t think these issues are of minor importance. I think we desperately need the truth in order to act sensibly in the world, and I honestly think that the many shades of belief in the notion that there are no hard truths – for instance about consciousness and about how to reason correctly – are dangerous falsities.

Yet this alone does not explain the whole difference in how we approach the conversation. Another reason it makes sense that you do not wish to dwell on these subjects in order to get them right is that you don’t think there ultimately is anything right or wrong about them, which is of course the essence of our disagreement on these subjects. I, by contrast, think there is a truth of the matter. These differences in how we view this conversation (i.e. the broader discussion about foundations) are worth making explicit in order to understand the context of our particular exchange, I think.


On the subject of truth, you seemed to endorse the pragmatist view that there are ultimately no truths; what we call truths are merely things that work, not things that are really true. Yet this view is self-defeating, as it itself makes a strong realist claim: the claim that no claims are ultimately true. And note that one cannot save the pragmatist claim by saying that it applies to everything but itself, for one will thereby merely have conceded that realism is true (now one merely stands by one truth rather than many). Another reason one cannot meaningfully make that move – claiming that there are no truths, just certain moves that work, except for the one truth that there are no truths – is that such a claim would actually imply an infinite number of hard truths: according to it, it is a truth, in a hard sense, that every other statement is not true in a hard sense. This contradicts, and thereby renders incoherent, the ad hoc claim itself. Either way, pragmatism does not work. There is no escaping realism, the truth that there are truths – a truth that any attempt at falsification will only end up validating. The only thing we can do, meaningfully and coherently, is to admit the truth of realism. Two plus two really is four. Or so I maintain.

I of course agree that we choose descriptions that work, but this is not in conflict with realism; it is completely in line with it: descriptions work, indeed can only work, because there is an underlying reality that corresponds with them in some way.

As for consciousness and tables, I believe there are underlying truths to be known here too, about both, and hence I think your analogy to tables fails to serve the purpose you seem to want it to. It is of course not the case that we need to tie the properties we associate with a table to the term ‘table’ – ‘Tisch’ or ‘bord’ can surely do just as well. We are free to tie whatever meaning, or set of meanings, we want to different terms. Where our freedom ends, however, is with the world’s underlying properties themselves. The concept of a table is rather multi-faceted and ill-defined, yet properties such as solidity and plane surfaces are generally included in the set of properties that define a table, and these underlying properties (of the things that have them) are real, not something we can change merely by thinking differently about them. We cannot alter or remove the solidity or shape of something by redefining it or thinking differently about it.

In this sense, I agree tables are analogous to consciousness. We can tie any term we wish to the phenomenon that is conscious experience, yet this does not change the fact that what we are referring to when we talk about consciousness, including suffering in particular, is a real underlying phenomenon, or multitude of phenomena, if you will. You seem to believe otherwise, but correct me if I’m wrong.