Tuesday, September 16, 2014

Superintelligence odds and ends IV: Geniuses working on the control problem

My review of Nick Bostrom's important book Superintelligence: Paths, Dangers, Strategies appeared first in Axess 6/2014, and then, in English translation, in a blog post here on September 10. The present blog post is the fourth in a series of five in which I offer various additional comments on the book (here is an index page for the series).

*

The topic of Bostrom's Superintelligence is dead serious: the author believes the survival and future of humanity is at stake, and he may well be right. He treats the topic with utmost seriousness. Yet his subtle sense of humor surfaces from time to time, detracting nothing from his serious intent but providing bits of enjoyment for the reader. Here I wish to draw attention to a footnote which I consider a particularly striking example of Bostrom's way of exhibiting a slightly dry humor while meaning every word he writes. What I have in mind is Footnote 10 in the book's Chapter 14, p 236. The context is a discussion of whether it improves or worsens the odds of a favorable outcome of an AI breakthrough with a fast takeoff (a.k.a. the Singularity) if, prior to that, we have performed transhumanist cognitive enhancement of humans. As usual, there are pros and cons. Among the pros, Bostrom suggests that improved cognitive skills may make it easier for individual researchers, as well as for society as a whole, to recognize the crucial importance of what he calls the control problem, i.e., the problem of how to turn an intelligence explosion into a controlled detonation with consequences that are in line with human values and favorable to humanity. And here's the footnote:
    Anecdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution, though there could be alternative explanations of this impression. If the field becomes fashionable, it will undoubtedly be flooded with mediocrities and cranks.
The community of researchers currently working seriously on the control problem is very small - if its head count reaches double digits at all, it is not by much. Bostrom is one of its two best-known members; the other is Eliezer Yudkowsky. I'd judge both of them to have cognitive capacities fairly far into the high end of "the intelligence distribution" (and I imagine myself to be in a reasonable position to calibrate - as a research mathematician, I know a fair number of people, including Fields Medalists, in various parts of that high end). Bostrom is undoubtedly aware of his own unusual talents, as well as of the strong social norm saying that one should not talk about one's own high intelligence, yet his devotion to honest, unbiased, matter-of-fact presentation of what he perceives as the truth (always with uncertainty bars) leads him in this case to override the social norm.

I like that kind of honesty, even though it carries with it a nonnegligible risk of antagonizing others. Yudkowsky, in fact, has been known to go far - much further than Bostrom does here - in speaking openly about his own cognitive talents. And he does receive a good deal of shit for that, such as in Alexander Kruel's recent blog post devoted to what he considers to be "Yudkowsky’s narcissistic tendencies".

All this makes the footnote multi-layered in a humorous kind of way. I also think the footnote's final sentence, about what happens "if the field becomes fashionable", carries a nice touch of humor. Bostrom has a fairly extreme propensity to question premises and conclusions; he is well aware of this, and I do think this last sentence (which points out a downside to what is clearly a main purpose of the book - namely, to draw attention to the control problem) is written with a wink to that propensity.

4 comments:

  1. Yudkowsky's "Fun theory", or rather "theories of fun", are interesting. They're pretty close to my own thinking, but don't seem to have gotten much attention yet. It also doesn't seem like Yudkowsky has related "Fun theory" very much to questions about friendly or less friendly AI.

    1. Thanks for the tip! Somehow I've managed to miss this part of Yudkowsky's writings.

  2. As for Yudkowsky's writings, isn't this explanation of Bayes' theorem rather confusing in places? For example, he claims the following about Bayesian priors: "Actually, priors are true or false just like the final answer - they reflect reality and can be judged by comparing them against reality. For example, if you think that 920 out of 10,000 women in a sample have breast cancer, and the actual number is 100 out of 10,000, then your priors are wrong." This sounds like a frequentist interpretation of probability, whereas Bayesian reasoning with priors is usually associated with subjectivist interpretations (even if the theorem itself is uncontroversial), and the distinction is never made clear in the article.

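For concreteness, here is the arithmetic behind the quoted example, as a minimal sketch in Python. The two priors are the ones the comment quotes; the 80% sensitivity and 9.6% false-positive rate of the test are assumed illustrative values, not figures given in the comment.

    # Bayes' theorem on the mammography example from the comment above.
    # Sensitivity and false-positive rate are assumed illustrative values.
    def posterior(prior, sensitivity=0.8, false_positive_rate=0.096):
        """P(cancer | positive test) via Bayes' theorem."""
        p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
        return sensitivity * prior / p_positive

    wrong_prior = 920 / 10_000  # the believed prevalence in the quote
    true_prior = 100 / 10_000   # the actual prevalence in the quote

    print(f"posterior with wrong prior: {posterior(wrong_prior):.3f}")  # ~0.458
    print(f"posterior with true prior:  {posterior(true_prior):.3f}")   # ~0.078

Whatever one's preferred interpretation of probability, the computation makes the quoted point concrete: feeding the mistaken prior into Bayes' theorem yields a posterior near 46%, while the correct prior yields roughly 8%, so a wrong prior propagates into a wrong final answer.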