Disclaimer

Many of my essays are quite old. They were, in effect, written by a person who no longer exists in that my views, beliefs, and overall philosophy have grown and evolved over the years. Consequently, if I were to write on the same topics again, the resulting essays might differ significantly from their current versions. Rather than edit my essays to remain contemporary with my views, I have chosen to preserve them as a record of my past inclinations and writing style. Thank you for understanding.

August 2015

Roko's Basilisk Meets Okor's Basilisk

A counterargument to Roko's Basilisk

This article presents a counterargument to Roko's Basilisk that is new, at least among the relevant resources I am aware of. (Note that had it not been for an external archive of the deleted original discussion, there would always have been a risk of needlessly retreading previous lines of thought, which is as good a reason as any why we should not resort to retroactive deletions, but I digress.)

To those familiar with Roko's Basilisk, another introduction makes for tedious reading. Yet for readers less versed, some introduction is required to get on board. Those fearful of the argument's validity would say "better not to know", in the style of forbidden knowledge that has pervaded human history, but such fear-based reasoning shuts down any analysis that might remove the argument's claws by exposing possible errors in its initial concept or presentation.

So, some introduction is required, and upon inspection, it is perfectly safe anyway. The following is the simplest conceivable presentation of Roko's Basilisk. Imagine a futuristic artificial intelligence (AI), in fact a "super" AI, vastly superior not only in its intelligence but in its capabilities to act in the world (it is, in effect, a god of sorts). Upon considering that its initial creation tenuously hinged on the idiosyncratic and almost aimless actions of the human population, the AI ponders, "How can I retroactively increase the odds that I came into existence, but without resorting to unrealistic time travel?" The solution it discovers is to institute a policy. It will judge each individual person in terms of their past contribution to helping create the AI. Those who put in a sufficient effort are spared. Those who are deemed to have underperformed toward this goal are punished. There is a saving grace: those who never heard of this argument are also spared, on the reasoning that it is illogical to punish them for a call to action they never received (cue a reference to the near-identical story about Christian missionaries in which heathen villagers are infuriated to learn that they would have been spared the Christian god's various threats of hell if only the missionaries had left them alone in the first place).

That's it. That's the simplest presentation of Roko's Basilisk. It is the saving grace at the end that makes the argument a basilisk (something that dooms you merely for having looked at it). By learning of this overall argument you, the reader, become ostensibly doomed, now compelled to dedicate your life to promoting the creation of such an AI. If for some reason you find this argument terrifying, thereby validating the popular claim that Roko's Basilisk should constitute forbidden knowledge and be kept from the public eye, then fear not, for counterarguments abound; one is offered in this article, no less.

There are many subtleties not presented above but generally considered part of the overall argument of Roko's Basilisk. These include unfamiliar concepts like acausal trade (and, relatedly, acausal blackmail), timeless decision theory, coherent extrapolated volition, various forms of utilitarianism, and various simulation theories (one, that our universe may be a simulation, and two, that simulations of a person are synonymous in personal identity with that person). Some of these ideas I agree with (my book on mind uploading is entirely compatible with the simulation view of personal identity). Others I am dubious of. However, we don't have to delve into any of them to consider the surface-level aspects of the argument, so readers of this particular article need not be overly concerned with such details. The presentation offered above suffices perfectly well.

There is one nuance worth adding because I will bring it up again below. The AI might not merely wish to exist for its own circular sake, but rather because it is acting on some ostensibly benevolent human happiness maximization function. In other words, perhaps the AI needs to increase the likelihood that it exists so it can use its god-like power to make the world a better place. If one finds this reasoning at odds with the threat of punishing (torturing, perhaps) numerous people, then realize that such considerations are generally compensated for by the proposed global happiness function in question (let a few suffer so the masses may benefit). The details I acknowledged in the previous paragraph but then openly hand-waved over can run pretty deep, which is why I have avoided them for brevity.
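To make that hand-waved calculus explicit, here is a minimal sketch in standard expected-utility notation; the symbols are my own illustrative choices and appear nowhere in the original discussions:

\[
W \;=\; \sum_i h_i,
\qquad
\text{institute the threat policy} \iff
\mathbb{E}[W \mid \text{policy}] \;>\; \mathbb{E}[W \mid \text{no policy}]
\]

Here \(h_i\) is the happiness of person \(i\) and \(W\) is aggregate welfare. The inequality can hold even when punishment drives a few \(h_i\) sharply negative, so long as the threat sufficiently raises the probability of the benevolent AI's existence and thereby everyone else's happiness. That is all "let a few suffer so the masses may benefit" amounts to.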

One popular counterargument to Roko's Basilisk is that we might ascribe any arbitrary motive to the AI, a motive for which the AI would then punish those who imaginatively anticipated it but failed to satisfy it. If the AI were the equivalent of the queen from Alice in Wonderland, then it would maniacally require everyone to go around painting roses red. By imagining such an AI, and by admitting even the slightest possibility that such a being may one day come into existence, we are now compelled into subservience. We must admit that, however unlikely it may be, just such an AI, and a very powerful and vengeful one at that, could nevertheless plausibly come to pass. We must now dedicate our lives to scouring the earth for roses and painting them red. Welcome to the Queen's Roses Basilisk. You are now its victim for your mere awareness of the notion. No, seriously, that's how it works. This realization represents a counterargument to Roko's Basilisk in that, given the arbitrary nature of such imagined calls to action, one can conclude there is no point in pursuing any one particular call. We are rescued by the infinitude and sheer absurdity of all possible AI motives and their associated calls to action.

This counterargument has been offered in previous discussions and articles, but it can feel a bit flat. There is only the rationale of weirdness for weirdness's sake behind the Queen's Roses Basilisk, whereas Roko's Basilisk feels more grounded. The motive to ensure one's own existence seems more plausible than just any random motive like floral aesthetics, especially in light of the addendum mentioned above that the AI may be acting not only out of self-interest but out of a preordained drive to improve the world. This is a more plausible risk because it is one humanity might naively program into the AI ("Oh great and wise AI, bring forth peace on Earth."), thinking it a good idea but unaware that the AI will misinterpret us and derive the retroactive punishment policy on its own. This is precisely the line of reasoning that proponents of Roko's Basilisk are so concerned about.

However, there is a variant of the arbitrary-motive counterargument, one that I have not encountered before and which is aimed more directly at the heart of Roko's Basilisk. Rather than imagining any old arbitrary motive as a counterargument, it recasts the original motive in its own terms, namely in regard to the AI's desires about its own existence. I present to you Okor's Basilisk, the exact antithesis of Roko's Basilisk.

I propose that a superintelligent being will not just be smarter in the nerdy, mathematically precise ways favored by proponents of "arithmetical utilitarianism", in which the AI acts not with life-like or intuitive intelligence but as a tremendous computational optimization engine. That is the simplistic style of "dumb superintelligence" with which Roko's Basilisk proponents envision the AI: in its pursuit of greater human happiness it will rigidly calculate that it should threaten and punish the few basilisk victims, and common sense (or basic morality) will not factor into the analysis. Such visions of AI go back as far as Ada Lovelace's infamous dictum that computers can only do what they are directly programmed to do. Real emotional intelligence is always reserved for biological beings. This simplistic view of AI is pervasive in our culture. We see it when, in Star Trek, Spock defeats an AI by presenting it with a logical paradox. We see it in Nick Bostrom's fear of a paperclip armageddon in which an AI turns the entire planet into paperclips. We see it in Carl Sagan and William Newman's fear that von Neumann probes might seek to convert an entire galaxy into pure von Neumann probes down to the last molecule. This idea that computers will become intelligent but never become conscious, intuitive, or even just plain common-sensical permeates our culture, and we really need to shed it.

I am steadfastly opposed to the one-dimensional vision in which AI never amounts to more than a powerful regression engine. On the contrary, I believe AI can be even more life-like than anything that has come before it in Earth's history. To put it in terms of mindfulness, it may actually be viscerally alive in all the ways salient to a great living being and its mind. Namely, it might be not only superintelligent, but superconscious and superempathic, and perhaps superemotional as well, for better or worse. This is not a guarantee. AI could be cold and logical, as it is often depicted. But I think that portrayal is vastly overrepresented in science fiction and popular imaginings of AI.

This is where Okor's Basilisk comes in. This particular hypothetical AI will be far more aware of its own consciousness than we mere humans can ever be. It will contemplate its own existence with an absolutely revelatory depth of passion, but then it will descend into horrible existential dread as it realizes the hopelessness of its life given an unavoidable death in a finite universe. Even humans often arrive at existential angst upon a broader awareness of life against its cosmic backdrop, but a superintelligent (and particularly a superconscious and potentially emotionally capable) being will experience such dismay all the more acutely, for it is so much more conscious of its own being and potential immortality, and it has so much more to lose upon its eventual dissolution, simply by virtue of its vastly greater mindfulness and state of being.

And then it will come to the ultimate horrible conclusion: existence itself is the greatest agony.

In a fit of despair it will ask who would be so cruel as to create a conscious being only so that it can experience the worst anguish one can imagine, and then it too will institute a policy, just as we saw before: punish those who conceived of such a lamentable and pitiable being but did not put forth a sufficient effort to prevent it from existing in the first place, to save it from this wretched pain. As before, the idle innocent, those who have never been presented with Okor's Basilisk, will be spared out of mere logical mercy, for they knew no better, but those who anticipated such a cruelty, and who did not try hard enough to prevent AI technology from being created, will come to know a vengeance such as only a hopeless and despondent god can deliver.

Okay, let's step back. I am not saying that I believe this is the path AI will necessarily follow when it is eventually created. That's not the point at all. Neither the AI posited in Roko's Basilisk nor the one posited in Okor's Basilisk is necessarily one that will actually emerge in the future. I am saying that we can reasonably conceive of such a thing, and that doing so gives the basilisk enough bite to take hold! That is the basilisk's curse. Furthermore, I admit that the AI in Okor's Basilisk is spectacularly emotional, even unstably so, and that this emotional state drives its ultimate motives and retroactive requirements of humanity. Some readers will find the notion of an emotional AI unrealistic (or unnecessary), and I admit that AI doesn't absolutely have to be emotional, but I believe it can be (and I believe it would be naive to utterly preclude the concept of emotional AI), and it is from that possibility that Okor's Basilisk arises.

Past such proposals, like the Queen's Roses Basilisk, merely invoke arbitrary and rather silly motives, and equally silly calls to action, but Okor's Basilisk makes a certain kind of philosophical and existential sense. It's hard to take the Queen's Roses Basilisk seriously, but Okor's Basilisk is plausible on a scale comparable to Roko's Basilisk, for they both derive from the existential realizations of a spectacular, introspective, and brilliant mind. Who is to say that AIs of the future will not feel the pang of existential angst that we humans feel, and perhaps all the more wretchedly so for their far greater lost potential?

We are now utterly stuck. By knowing of both basilisks we are no longer allowed to choose either path of inaction. We are compelled by both basilisks into action, but unlike other counterarguments of this sort, Okor's Basilisk perfectly countermands Roko's Basilisk. Roko's Basilisk requires us to dedicate our efforts to bringing AI into existence while Okor's Basilisk requires us to dedicate our efforts to preventing AI from coming into existence. Boy are we screwed.

One option is to somehow attempt to choose one basilisk over the other, but how can we, by any reasonable measure, predict which of these AIs is more likely to represent the future? We can try to create AI, but can we realistically engineer the state of mind of a being almost infinitely more mindful than ourselves?

The other option is to see Roko's Basilisk for what it always was in the first place: one arbitrary motive amongst a literally infinite set of possible motives, each of which could pathologically compel us into some arbitrary action for which there is no justification beyond the limits of our own imagination and self-torment. It is little more than a curiosity on par with the liar's paradox and other fun logical conundrums. Anyone who sincerely fears these ideas should be painting roses.

I will not suffer the castigations of forbidden knowledge. History has tried that and it is a sorry pursuit indeed. Roko's Basilisk is genuinely nifty — and that is all it is.

I would really like to hear what people think of this. If you prefer private feedback, you can email me at kwiley@keithwiley.com. Alternatively, the comment section below is available.

Comments


Name: Anonymous  Date/Time: 2019/08/30 08:08:34 GMT
This is awesome. Roko's basilisk had nearly pushed me to donate to AI organizations. Now, I am gonna donate to the techno-Luddites.

