Perspectives on intelligence explosion
This page collects links to criticism of what seems to be the prevailing view on Less Wrong regarding the possibility of hard takeoff and the importance of Friendly AI (FAI).
See this thread for a more up-to-date list of critiques.
Criticism
- lukeprog's summary: "Criticisms of intelligence explosion"
- Thoughts on the Singularity Institute (SI) (now MIRI) from Holden Karnofsky. Responses: 1, 2, 3, 4
- Is The City-ularity Near?
- The Betterness Explosion.
- The Singularity Institute's Scary Idea (and Why I Don't Buy It). The author describes Nick Bostrom as another skeptic regarding the "scary idea"; you can watch a question-and-answer session Nick gives on these topics here. (Tip for video lectures: you can download the video with a browser extension and watch it sped up in VLC.)
- SIA says AI is no big threat. You can also read everything the author has written about AI.
- Evaluating the feasibility of SI's plan.
- Alexander Kruel's writing on MIRI and Less Wrong is generally critical.
- Three arguments against the singularity. Reaction from Robin Hanson.
- Is an Intelligence Explosion a Disjunctive or Conjunctive Event?
- Why an Intelligence Explosion might be a Low-Priority Global Risk.
- Risks from AI and Charitable Giving.
- Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased). Responses: 1, 2. (You may also wish to view the author's Less Wrong user page, as his criticisms of SI feature heavily there.)
- Should we discount extraordinary implications?
Debate
- The Hanson-Yudkowsky AI-Foom Debate, a text debate that occurred in late 2008.
- 2011 Hanson-Yudkowsky live AI Foom debate at Jane Street Capital. Hanson's summary of the debate and his arguments.
- John Baez Interviews Eliezer Yudkowsky. (Note: You'll have to scroll to the bottom to read the first part.) Although John finds Eliezer's claims of AI dangers plausible, he isn't persuaded to give up environmentalism in favor of working on FAI.
- A panel discussion on AGI.
- Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds. Other Bloggingheads discussions that have appeared on Less Wrong.
- GiveWell.org interviews SIAI. (Some Singularity Institute folks say they were poorly represented in this interview.)
- GiveWell interview with major SIAI donor Jaan Tallinn.
- Criticisms of intelligence explosion.
- Several Arguments For and Against Superintelligence/"Singularity".