Louie Helm




Cause the Singularity (read all 21 entries…)
How to Build More Rationalists

Finished reading “The Craft and the Community” sequence on LessWrong.com
http://lesswrong.com/lw/cz/the_craft_and_the_community/

A bit meta but still lots of good info on how to do sanity checks that you’re not in an echo chamber or pursuing failed strategies.



Cause the Singularity (read all 21 entries…)
Separating money from our feelings about money is not as easy as it seems like it should be

Read some (more!) of “The Craft and the Community” sequence on LessWrong.com
http://lesswrong.com/lw/cz/the_craft_and_the_community/

  1. Church vs. Taskforce
  2. Helpless Individuals
  3. Money: The Unit of Caring
  4. Purchase Fuzzies and Utilons Separately

Summarizes the frustration of raising money to do real good. Need to balance getting contributions with not becoming cause snobs.

Read on the airplane ride over to San Francisco.



Cause the Singularity (read all 21 entries…)
Weak organizations aren't much stronger than their strongest people

Read some of “The Craft and the Community” sequence on LessWrong.com
http://lesswrong.com/lw/cz/the_craft_and_the_community/

x # Raising the Sanity Waterline
x # A Sense That More Is Possible
x # Epistemic Viciousness
x # 3 Levels of Rationality Verification
x # Why Our Kind Can’t Cooperate
x # Tolerate Tolerance
x # You’re Calling Who A Cult Leader?
x # On Things That Are Awesome
x # Your Price For Joining
x # Can Humanism Match Religion’s Output?

Summarizes some of the coordination problems that have dogged rationalist communities in the past.



Cause the Singularity (read all 21 entries…)
How To Learn Friendly AI

Read How To Learn Friendly AI
http://sl4.org/wiki/HowToLearnFriendlyAI

and read Singularity Writing Advice
http://sl4.org/wiki/SingularityWritingAdvice

Would you like positive encouragement on how to join the field of Friendly AI and contribute to the Singularity? Great! Just don’t read “How To Learn Friendly AI” or “Singularity Writing Advice”.

Kidding…

Actually both have lots of good advice for ways to hold yourself to a higher standard if you really want to start helping. Man up and read them!

Coherent Extrapolated Volition
http://www.singinst.org/upload/CEV.html

The design of CEV reminds me a lot of the Turing test.

Alan Turing didn’t know how to define intelligence so he tried to define it in terms of itself. Indeed, the Turing Test looks like it solves much more than it does… so much so that thousands of researchers have wasted their careers thinking they understood intelligence when they really didn’t.

CEV feels similar. It’s the best mind in the field punting on a core problem in such an elegant way that it feels really satisfying. Or at least that’s how it appears to me on my first read through. Hopefully I just missed a few inference chains and CEV really is the answer to all our dreams.

Read Peter Voss’ Re: “SIAI’s Guidelines for building ‘Friendly’ AI”
http://optimal.org/peter/siai_guidelines.htm

A terse dismissal of FAI theory circa 2001. I wonder if the author still finds FAI unnecessary and too much of a burden to integrate. He seems to simply look at his own AI architecture and point out whether he’s currently following the intent of different ideas from CFAI, rather than considering the possibility of modifying his code in any way to accommodate it. I wonder if the fact that his architecture cannot support CFAI points to this AI not being a serious enough AGI candidate to become a recursively self-improving unFriendly AI (making it therefore moot)?



Cause the Singularity (read all 21 entries…)
Read more mainline existential risk works by Nick Bostrom

HOW LONG BEFORE SUPERINTELLIGENCE?
http://www.nickbostrom.com/superintelligence.html

Rough estimation of the AI path to the Singularity. There are no apparent roadblocks in the next 20 years that would prevent some form of AI from becoming super-intelligent, and believing otherwise requires an “optimistic” amount of pessimism. Clearly this paper formed the core material for “The Singularity is Near” by Ray Kurzweil.

Sleeping Beauty and Self-Location: A Hybrid Model
http://www.anthropic-principle.com/preprints/beauty/synthesis.pdf

Decision theory work that helps remove some potential paradoxes from the Sleeping Beauty problem. Feels like a hack on the first reading but maybe there is more elegance than I realize here.

Where Are They? Why I hope the search for extraterrestrial life finds nothing
http://www.nickbostrom.com/extraterrestrial.pdf

An interesting corollary to the “Great Silence” and “Great Filter”. The next rover mission to Mars (2011) will be much more interesting for me now. Can’t believe I’m now excited by hope for no life.

Pascal’s Mugging
http://www.nickbostrom.com/papers/pascal.pdf

Interesting decision theory work. Suggests paradoxes that don’t require infinities yet still allow unbounded utility loss for agents that honestly compute using all known induction methods. Seems to require hacks or kludges to get around being mugged. Perhaps being mugged is correct? Feels similar to how 2-boxing on Newcomb problems is “correct”.
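
To make the mugging concrete for myself, here is a toy calculation in Python (all numbers invented by me, not taken from Bostrom’s paper) showing why a naive expected-utility maximizer pays the mugger, and how crudely capping utility, one of the kludges, blocks it:

    # Toy Pascal's Mugging calculation (illustrative numbers only, not from the paper).
    # A naive expected-utility maximizer pays the mugger whenever
    # p_claim * claimed_utility exceeds the cost, no matter how tiny p_claim is.

    def expected_value(p_claim, claimed_utility, cost):
        """Expected net utility of paying the mugger."""
        return p_claim * claimed_utility - cost

    def bounded(u, cap=1e6):
        """Crude bounded-utility kludge: clip all utilities at a cap."""
        return max(min(u, cap), -cap)

    p = 1e-12                  # credence that the mugger's threat is real
    claimed = 3 ** 30          # astronomically large promised payoff
    cost = 5                   # what the mugger asks for

    print(expected_value(p, claimed, cost))            # > 0: the naive agent pays
    print(expected_value(p, bounded(claimed), cost))   # < 0: the bounded agent refuses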

What is a Singleton?
http://www.nickbostrom.com/fut/singleton.html

Short note covering the definition and behavior of Singletons. There are reasons to believe near-term future technology makes these social structures more likely.



Cause the Singularity (read all 21 entries…)
How to cause the Extinction of the Human Race

Read “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards” by Nick Bostrom

http://www.nickbostrom.com/existential/risks.html

Mostly covered information I’ve read in other places already.

Started thinking more about the “Great Filter” arguments. Incredibly interesting stuff.

For instance, if we were to find life on another planet, it would be a terrible sign that we can expect to go extinct before reaching a transhuman state. That’s because it implies we haven’t gone through the “Great Filter” yet, meaning our evolution wasn’t as improbable as we had thought.
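
A toy Bayesian update (my own illustrative numbers, nothing from Bostrom) of why finding independent life nearby should shift credence toward the filter still lying ahead of us:

    # Toy model: two hypotheses about where the Great Filter sits.
    #   early: the hard step is abiogenesis/early evolution (behind us)
    #   late:  the hard step is still ahead of us (bad news for survival)
    # Finding independent life nearby is much more likely if life arises easily,
    # i.e. under the "late filter" hypothesis. Numbers below are purely illustrative.

    prior = {"early": 0.5, "late": 0.5}
    p_find_life = {"early": 0.01, "late": 0.5}   # P(observe life on Mars | hypothesis)

    evidence = sum(prior[h] * p_find_life[h] for h in prior)
    posterior = {h: prior[h] * p_find_life[h] / evidence for h in prior}

    print(posterior)  # credence in a late (still-ahead) filter jumps from 0.5 to ~0.98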



Cause the Singularity (read all 21 entries…)
How to define intelligence

“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”

Finished reading “Universal Intelligence: A Definition of Machine Intelligence” by Shane Legg.

It covers a lot of important ground in formally defining intelligence. It’s important to distill intelligence down instead of leaving it as a fuzzy idea with unsolved distinctions.

Although Universal Intelligence is not directly computable, the definition seems accurate and can be approximated probabilistically to arbitrary accuracy.
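
For my own reference, the definition as I understand it is Υ(π) = Σ_μ 2^(-K(μ)) · V(π, μ): the agent’s expected value in each computable environment μ, weighted by the environment’s simplicity (K is Kolmogorov complexity, which is what makes it uncomputable). Below is a crude Monte Carlo sketch of the approximation idea, entirely mine and not Legg’s algorithm, with made-up stand-in environments and description lengths:

    # Crude sketch of approximating a Legg/Hutter-style universal intelligence score
    # (my own toy version, not the paper's algorithm). The true measure sums an agent's
    # expected value over all computable environments, weighted by 2^-K(environment).
    # K is uncomputable, so here we stand in for it with an assumed description length
    # in bits and sample only a handful of hand-written environments.

    import random

    def simplicity_weight(program_length_bits):
        """Stand-in for 2^-K(mu): shorter environment programs get more weight."""
        return 2.0 ** (-program_length_bits)

    def run_episode(agent, env):
        """Placeholder episode loop; env is assumed to be a callable reward process."""
        return env(agent)

    def estimate_intelligence(agent, environments, episodes=200):
        """Weighted average of the agent's empirical return across sample environments."""
        score = 0.0
        for env, length_bits in environments:
            returns = [run_episode(agent, env) for _ in range(episodes)]
            score += simplicity_weight(length_bits) * (sum(returns) / episodes)
        return score

    # Example: two trivial "environments" with assumed description lengths in bits.
    envs = [(lambda agent: random.random(), 8),                  # noisy reward, simple env
            (lambda agent: 1.0 if agent == "greedy" else 0.0, 16)]
    print(estimate_intelligence("greedy", envs))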


Also finished reading through the Zombie Subsequence at http://wiki.lesswrong.com/wiki/Zombies_%28sequence%29

x * Zombies! Zombies?
x * Zombie Responses
x * The Generalized Anti-Zombie Principle
x * GAZP vs. GLUT
x * Belief in the Implied Invisible
x * Zombies: The Movie

Basically, P-Zombies == Magic. Cleverly disguised magic, but still magic.

Interestingly, the Giant Look Up Table (GLUT) concept came up in both the zombie sequence and the unrelated intelligence paper I was reading… which was nice. I understand now how just as the Chinese Room has full understanding as a system, the GLUT would be conscious only by virtue of using the consciousness generated by a human who made the GLUT. So it’s as conscious as a cellphone that disembodies and repeats a human’s responses.



Cause the Singularity (read all 21 entries…)
Reality and your *beliefs* about reality are 2 different things

The Simple Truth – A good story to demonstrate why the truth doesn’t have to be “unknowable”
http://yudkowsky.net/rational/the-simple-truth

Read the entire Reductionism Sequence (except the Zombie sub-section which I am reading now)
http://wiki.lesswrong.com/wiki/Reductionism_%28sequence%29

Very clarifying sequence. I love how these posts bring you along one inferential step at a time along the process of re-discovering that truth doesn’t have to be “unknowable” or “irrelevant” and that multi-level mental maps of reality don’t correspond to multi-level reality (it’s all quarks… there are no fundamental “airplanes” in reality).

Main sequence

x * Dissolving the Question
x * Wrong Questions
x * Righting a Wrong Question
x * Mind Projection Fallacy
x * Probability is in the Mind
x * The Quotation is not the Referent
x * Qualitatively Confused
x * Reductionism
x * Explaining vs. Explaining Away
x * Fake Reductionism
x * Savanna Poets
x * Joy in the Merely Real
x * Joy in Discovery
x * Bind Yourself to Reality
x * If You Demand Magic, Magic Won’t Help
x o Mundane Magic
x * The Beauty of Settled Science
x * Amazing Breakthrough Day: April 1st
x * Is Humanism a Religion-Substitute?
x * Scarcity
x * To Spread Science, Keep It Secret
x * Initiation Ceremony
x * Awww, a Zebra

Joy in the Merely Real (Sub-sequence)

x * Hand vs. Fingers
x * Angry Atoms
x * Heat vs. Motion
x * Brain Breakthrough! It’s Made of Neurons!
x * Reductive Reference
* The subsequence on Zombies
x * Excluding the Supernatural
x * Psychic Powers



Cause the Singularity (read all 21 entries…)
How to Create Friendly AI

Finished reading “Creating Friendly AI”

http://singinst.org/upload/CFAI/

Been reading it off and on for 3 months while I’ve read other AI and existential risk material.

There’s so much great stuff in this document.

It hammers away at all the anthropomorphic logical fallacies that can occur when reasoning about AI.

Makes a great case for structural vs content based thinking when designing an AI. If you want convergence towards desirable, Friendly AI, you need to build it with a flexible enough architecture that it can recover from several error classes reliably.

Also makes an excellent case for not using an adversarial mindset when creating an AI. Most proposed “rules of robotics” or other formulations of machine ethics have been completely anthropomorphic AND adversarial so the prospect of using them is more dangerous than helpful once it is thought through even a bit.

There are lots of subtle distinctions in the sections on the different moral reasoning semantics. I hope to re-read these sections to the point where I could make useful improvements to the general FAI theory presented here.



Cause the Singularity (read all 21 entries…)
Public Information about Nanotechnology in Australia

I found something weird the other day in Australia. There are these goofy postcards in public places you can use to request information on nanotech. Huh?

I was curious why this service even existed and what exactly they’re sending people. So I sent away for my packet! Turns out it’s these people:

http://technyou.edu.au/

And their literature states (repeatedly) that they provide “balanced and factual information [..] to help the public make informed choices about nanotechnology”.

In reality, it’s a bunch of shiny brochures about how MINDBLOWINGLY AMAZING NANOTECH is! (Oh yeah, put the word “risk” on this page once… there, BALANCED!)

It’s great. There are smiling women, cute pets, and talk about curing cancer… then in the risk sections (tucked on the back), important information like “all technologies have benefits and risks which need to be carefully considered by the public”. The few concrete risks stated, like nanoparticles being small enough to penetrate any human cell, are all massively weakened by random filler clauses like “Some people say nanoparticles are small enough to penetrate any human cell. Right now, it’s hard to know what to believe. Researchers are looking into it.”

So I guess the politicking on nanotech has already begun in earnest here in Australia. They aren’t gonna be caught off guard with a valuable product to sell and some consumer group opposing it. Or maybe they’re just terrible at their job of finding balanced material. It’s probably more interesting to look up all the good things nanotech can do. But by the time they finished their brochures, they were left with material that makes Ray Kurzweil sound like a Luddite.

Somewhere at TechNYou headquarters, there’s a guy spinning around in his swivel chair, proud that he’s de-biased all the retards who’ve read crazy websites that say “NANO is the devil!”. Of course, he’s really just creating new “NANO is our new GOD!” zealots to mindlessly oppose them.

I wonder if the TechNYou guys would read a few LessWrong sequences if I forwarded them?



Cause the Singularity (read all 21 entries…)
The map is not the territory, but you can't fold up the territory and put it in your glove compartment.

Read “Technical Explanation of Technical Explanation” http://yudkowsky.net/rational/technical

Similar to other info on lesswrong.com but still helpful

Half way through “Universal Intelligence: A Definition of Machine Intelligence” http://www.vetta.org/documents/UniversalIntelligence.pdf

  • Quite interesting

Also read some prelim posts on the Reductionism sequence http://wiki.lesswrong.com/wiki/Reductionism_%28sequence%29

Early posts out of sequence:

x * Universal Fire
x * Universal Law

Main sequence

x * Dissolving the Question



Cause the Singularity (read all 21 entries…)
It's not a crisis of faith unless it could have gone either way

Finished reading the “How To Actually Change Your Mind” Sequence at http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind

Overly Convenient Excuses

x * The Proper Use of Humility
x * The Third Alternative
x * Privileging the Hypothesis (and its requisites, like Locating the hypothesis)
x * But There’s Still A Chance, Right?
x o The Fallacy of Gray
x o Absolute Authority
x o How to Convince Me That 2 + 2 = 3
x o Infinite Certainty
x o 0 And 1 Are Not Probabilities

Letting Go

x * Feeling Rational
x * The Importance of Saying “Oops”
x o The Crackpot Offer
x o Just Lose Hope Already
x * The Proper Use of Doubt
x * You Can Face Reality
x * The Meditation on Curiosity
x * Something to Protect
x * No One Can Exempt You From Rationality’s Laws
x * Leave a Line of Retreat
x * Crisis of Faith
x o The Ritual (short story)

Eliezer has tons of great points in these posts. I found this sequence to be of general high quality. Some of the posts summarize thoughts I’ve believed in a fuzzy or haphazard way before so I appreciate reading them from a new, more precise point of view. Other posts are exposing me to great new ideas and ways of thinking which I am already grateful for. It feels good to be getting stronger.

I’m still skeptical of some ideas though. His blanket call for people to be less confident and instead be better Bayesian reasoners seems to ignore the actual cognitive effects of putting his advice into practice. Specifically,

1) Our monkey brains are biased towards internal self-doubt (even if they externally profess over-confidence to shore that up).

2) Our monkey brain can’t help but believe some of every wrong idea it’s presented with… even ideas with no evidence (anchoring)! So actively disbelieving all incoming information and only considering it after dismissing it is more or less required to stay sane in our world, given our brains.

3) Being correct is frequently a minority position. So the most correct people face the most social pressure to update their beliefs back toward incorrect social norms. Being less confident in general leaves correct (monkey brain) reasoners both disadvantaged and generally worse off.

4) The vast majority of people just don’t have experience doing anything more difficult than dealing with the fallout of a lifetime of their own failure/stupidity. So when you’re attempting to actually succeed at something exceptional in your own life, expect all external social feedback to be either incorrect or correct only by accident.

Continuing on to the Reductionism Sequence http://wiki.lesswrong.com/wiki/Reductionism_%28sequence%29

Main sequence
  • Dissolving the Question
  • Wrong Questions
  • Righting a Wrong Question
  • Mind Projection Fallacy
  • Probability is in the Mind
  • The Quotation is not the Referent
  • Qualitatively Confused
  • Reductionism
  • Explaining vs. Explaining Away
  • Fake Reductionism
  • Savanna Poets
  • The subsequence Joy in the Merely Real (main post here)
  • Hand vs. Fingers
  • Angry Atoms
  • Heat vs. Motion
  • Brain Breakthrough! It’s Made of Neurons!
  • Reductive Reference
  • The subsequence on Zombies
  • Excluding the Supernatural
  • Psychic Powers


Cause the Singularity (read all 21 entries…)
I guess wrong ideas are more interesting than no ideas

Finished reading and considering “The Nature of Self-Improving Artificial Intelligence” by Steve Omohundro. http://selfawaresystems.files.wordpress.com/.../nature_of_self_improving_ai.pdf

I’m sure this paper attracts lots of skepticism for all the wrong reasons. People are probably uncomfortable grappling with the conclusions. Still, I think some real skepticism is in order. The conjecture that there is an inherent drive for self-improving systems to become creative is suspect. It sounds preferable and it’s possibly true, but there is no support in the paper for it. The author merely presents a vague discussion about why he believes it would be nice for it to be correct.

I see this paper as a kindly provided anchor in thought-space from which to make adjustments about how AIs or other self-improving systems will actually behave. Something much better than “No one has any idea how an intelligent AI might behave!” but something less than good science. At least it’s an interesting, incorrect, intermediate idea that can inspire others to make something similar which is actually correct. I respect the author for at least having the balls to publish something so clearly lacking rather than sit on it for 15 years until it is “ready”. It’s good to help move the conversation along. And since the ideas in it are so narrowly upper-middle-class American, it’s good they are brought into the light so they can be publicly exposed and rebuked by a more robust theory.

Going through some of the papers referenced in Steve’s paper, I’ve decided to read Marcus Hutter’s “The Fastest and Shortest Algorithm for All Well-Defined Problems” http://www.hutter1.net/ai/pfastprg.htm

Still reading a lot of http://lesswrong.com as well…



Cause the Singularity (read all 21 entries…)
How to stop rationalizing

Read more of the LessWrong.com sequence “How To Actually Change Your Mind” http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind

Been enjoying reading the follow-up comments too.

Against Rationalization

x * Knowing About Biases Can Hurt People
x * Conservation of Expected Evidence
x * Update Yourself Incrementally
x * One Argument Against An Army
x * The Bottom Line
x * What Evidence Filtered Evidence?
x * Rationalization
x * A Rational Argument
x * Avoiding Your Belief’s Real Weak Points
x * Motivated Stopping and Motivated Continuation
x o A Case Study of Motivated Continuation
x * Fake Justification
x * Fake Optimization Criteria
x * Is That Your True Rejection?
x * Entangled Truths, Contagious Lies
x o Of Lies and Black Swan Blowups
x * Anti-Epistemology
x o The Sacred Mundane

Against Doublethink

x * Belief in Belief
x * Singlethink
x * Doublethink: Choosing to be Biased
x * No, Really, I’ve Deceived Myself
x * Belief in Self-Deception
* Moore’s Paradox
* Don’t Believe You’ll Self-Deceive

Overly Convenient Excuses

* The Proper Use of Humility
* The Third Alternative
* Privileging the Hypothesis (and its requisites, like Locating the hypothesis)
* But There’s Still A Chance, Right?
o The Fallacy of Gray
o Absolute Authority
o How to Convince Me That 2 + 2 = 3
o Infinite Certainty
o 0 And 1 Are Not Probabilities

Letting Go

* Feeling Rational
* The Importance of Saying “Oops”
o The Crackpot Offer
o Just Lose Hope Already
* The Proper Use of Doubt
* You Can Face Reality
x * The Meditation on Curiosity
* Something to Protect
* No One Can Exempt You From Rationality’s Laws
* Leave a Line of Retreat
* Crisis of Faith
o The Ritual (short story)

Also read sections 1 & 2 of “The Nature of Self-Improving Artificial Intelligence” http://selfawaresystems.files.wordpress.com/.../nature_of_self_improving_ai.pdf

Long list of conjectures presented as chains of sound reasoning. Was excited when I saw the section on “Time Discounting” only to be disappointed again when Omohundro didn’t actually address why an AI would feel compelled to act now in a world where it has effectively infinite time. Premature action seems like a huge mistake for almost any intelligence that is unbound by time/death. Perhaps only a small set of very perverse utility functions, such as the ones Omohundro imagines, would cause rash, harmful action… not all such systems.
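
To spell out my objection, a toy comparison (numbers invented by me, not from the paper) of how the discount factor alone decides whether acting immediately or waiting for better information looks better to an agent:

    # Toy illustration of why time discounting matters for whether an agent acts now
    # or waits to understand the problem better (my numbers, not Omohundro's).
    # "Act now" succeeds with probability 0.2; waiting t steps raises success to 0.9
    # but the payoff arrives later and is discounted by gamma**t.

    def value_act_now(payoff=100.0, p_success=0.2):
        return p_success * payoff

    def value_wait(gamma, t=50, payoff=100.0, p_success=0.9):
        return (gamma ** t) * p_success * payoff

    for gamma in (0.90, 0.99, 1.00):
        print(gamma, value_act_now(), value_wait(gamma))

    # With heavy discounting (0.90) acting now wins; with little or no discounting
    # (0.99, 1.00) the patient strategy dominates.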



Cause the Singularity (read all 21 entries…)
You Can't Not Believe Everything You Read

Today I read “The Basic AI Drives”. It turned out to be only a 10-page, non-technical summary of the paper I actually wanted to read by Steve Omohundro. Hopefully “The Nature of Self-Improving Artificial Intelligence” is better reasoned and more scientific. It seems bizarre to me that an AI should necessarily develop psychopathic economic drives, especially when it uses probabilistic reasoning plus meta-reasoning and has such drastically different longevity than we do. It seems just as plausible that a prolonged state of hesitation could precede any action while the AI gained enough cognitive capacity to evaluate and formulate dominant strategies, strategies that could achieve its goals even in planning branches where potentially every piece of information it knows turns out to be incorrect. In other words, the intelligence could just as easily turn into an overly contemplative nerd who is paralyzed by fear of doing anything wrong… so it does nothing… until hopefully it passes a threshold of deepened understanding. Given true probabilistic reasoning, this seems more likely than the proposed auto-sociopath explanation. But I’ll read Omohundro’s paper before judging its conclusions too harshly (or early). Perhaps he considered these premises already.

Also re-read “An Intuitive Explanation of Bayes’ Theorem” http://yudkowsky.net/rational/bayes

Felt like I gained more understanding this time, though, and I currently remember enough that I could derive Bayes again if I forgot it. Hope this understanding is long-lasting. It seems mundane again, which is great.
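
As a sanity check for my notes, Bayes’ theorem applied to the mammography example from the essay (numbers as I remember them, so treat them as approximate):

    # Bayes' theorem, checked against the mammography example from the essay
    # (as I recall: 1% prevalence, 80% true positive rate, 9.6% false positive rate;
    # the posterior should come out around 7.8%).

    def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
        """P(H | E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]"""
        numerator = p_evidence_given_h * prior
        return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

    print(posterior(0.01, 0.80, 0.096))  # ~0.078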

Read more of the LessWrong.com sequence “How To Actually Change Your Mind”

Death Spirals and the Cult Attractor

Most important posts:

x * The Affect Heuristic
x * The Halo Effect
x * Affective Death Spirals
x * Resist the Happy Death Spiral
x * Uncritical Supercriticality

Seeing with Fresh Eyes

Most important posts:

x * Anchoring and Adjustment
x * We Change Our Minds Less Often Than We Think
x * Hold Off On Proposing Solutions
x * Do We Believe Everything We’re Told?
x * Cached Thoughts
x * Asch’s Conformity Experiment
x * Lonely Dissent
x * The Genetic Fallacy

Noticing Confusion

(Heavy overlap with Mysterious Answers to Mysterious Questions.)

x * Your Strength as a Rationalist
x * Absence of Evidence Is Evidence of Absence
x * Hindsight Bias
x * Hindsight Devalues Science
x * Positive Bias: Look Into the Dark

Against Rationalization

x * Knowing About Biases Can Hurt People
x * Conservation of Expected Evidence
* Update Yourself Incrementally
* One Argument Against An Army
* The Bottom Line
* What Evidence Filtered Evidence?
* Rationalization
* A Rational Argument
* Avoiding Your Belief’s Real Weak Points
* Motivated Stopping and Motivated Continuation
o A Case Study of Motivated Continuation
* Fake Justification
* Fake Optimization Criteria
* Is That Your True Rejection?
* Entangled Truths, Contagious Lies
o Of Lies and Black Swan Blowups
* Anti-Epistemology
o The Sacred Mundane

Planning to read “The Nature of Self-Improving Artificial Intelligence” and “A Technical Explanation” next.



Cause the Singularity (read all 21 entries…)
How to systematically reduce your own bias

Continuing to read the post sequence “How To Actually Change Your Mind” http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind

Politics is the Mind-Killer

Most important posts:

x * A Fable of Science and Politics
x * Politics is the Mind-Killer
x * Policy Debates Should Not Appear One-Sided
x * The Scales of Justice, the Notebook of Rationality
x * Reversed Stupidity is Not Intelligence
x * Argument Screens Off Authority
x * Hug the Query

Death Spirals and the Cult Attractor

Most important posts:

x * The Affect Heuristic
x * The Halo Effect
x * Affective Death Spirals
x * Resist the Happy Death Spiral
* Uncritical Supercriticality



Cause the Singularity (read all 21 entries…)
How to stop worshiping your own ignorance

Finished reading “Mysterious Answers to Mysterious Questions” at http://wiki.lesswrong.com/wiki/Mysterious_Answers_to_Mysterious_Questions

x * 1.19 The Futility of Emergence
x * 1.20 Say Not “Complexity”
x * 1.21 Positive Bias: Look Into the Dark
x * 1.22 My Wild and Reckless Youth
x * 1.23 Failing to Learn from History
x * 1.24 Making History Available
x * 1.25 Explain/Worship/Ignore?
x * 1.26 “Science” as Curiosity-Stopper
x * 1.27 Applause Lights
x * 1.28 Chaotic Inversion

Also read the “37 Ways That Words Can Be Wrong” top-level summary and skimmed some interesting sub-topics http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/

Moving on to “How To Actually Change Your Mind” http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind

Politics is the Mind-Killer

Most important posts:

x * A Fable of Science and Politics
x * Politics is the Mind-Killer
x * Policy Debates Should Not Appear One-Sided
x * The Scales of Justice, the Notebook of Rationality
* Reversed Stupidity is Not Intelligence
* Argument Screens Off Authority
* Hug the Query

Death Spirals and the Cult Attractor

Most important posts:

  • The Affect Heuristic
  • The Halo Effect
  • Affective Death Spirals
  • Resist the Happy Death Spiral
  • Uncritical Supercriticality

Seeing with Fresh Eyes

Most important posts:

  • Anchoring and Adjustment
  • We Change Our Minds Less Often Than We Think
  • Hold Off On Proposing Solutions
  • Do We Believe Everything We’re Told?
  • Cached Thoughts
  • Asch’s Conformity Experiment
  • Lonely Dissent
  • The Genetic Fallacy

Against Rationalization

* Knowing About Biases Can Hurt People
x * Conservation of Expected Evidence
* Update Yourself Incrementally
* One Argument Against An Army
* The Bottom Line
* What Evidence Filtered Evidence?
* Rationalization
* A Rational Argument
* Avoiding Your Belief’s Real Weak Points
* Motivated Stopping and Motivated Continuation
o A Case Study of Motivated Continuation
* Fake Justification
* Fake Optimization Criteria
* Is That Your True Rejection?
* Entangled Truths, Contagious Lies
o Of Lies and Black Swan Blowups
* Anti-Epistemology
o The Sacred Mundane

Against Doublethink

x * Belief in Belief
* Singlethink
* Doublethink: Choosing to be Biased
* No, Really, I’ve Deceived Myself
* Belief in Self-Deception
* Moore’s Paradox
* Don’t Believe You’ll Self-Deceive

Overly Convenient Excuses

  • The Proper Use of Humility
  • The Third Alternative
  • Privileging the Hypothesis (and its requisites, like Locating the hypothesis)
  • But There’s Still A Chance, Right?
    o The Fallacy of Gray
    o Absolute Authority
    o How to Convince Me That 2 + 2 = 3
    o Infinite Certainty
    o 0 And 1 Are Not Probabilities

Letting Go

  • Feeling Rational
  • The Importance of Saying “Oops”
    o The Crackpot Offer
    o Just Lose Hope Already
  • The Proper Use of Doubt
  • You Can Face Reality
  • The Meditation on Curiosity
  • Something to Protect
  • No One Can Exempt You From Rationality’s Laws
  • Leave a Line of Retreat
  • Crisis of Faith
    o The Ritual (short story)


Cause the Singularity (read all 21 entries…)
Your beliefs need to predict things that *can't* happen

Read and took notes today on: “Artificial Intelligence as a Positive and Negative Factor in Global Risk” http://yudkowsky.net/singularity/ai-risk

Making good progress on the LW Sequence “Mysterious Answers to Mysterious Questions” at http://wiki.lesswrong.com/wiki/Mysterious_Answers_to_Mysterious_Questions

x * 1.1 Making Beliefs Pay Rent (in Anticipated Experiences)
x * 1.2 Belief in Belief
x * 1.3 Bayesian Judo
x * 1.4 Professing and Cheering
x * 1.5 Belief as Attire
x * 1.6 Focus Your Uncertainty
x * 1.7 The Virtue of Narrowness
x * 1.8 Your Strength As A Rationalist
x * 1.9 Absence of Evidence is Evidence of Absence
x * 1.10 Conservation of Expected Evidence
x * 1.11 Hindsight Bias
x * 1.12 Hindsight Devalues Science
x * 1.13 Fake Explanations
x * 1.14 Guessing the Teacher’s Password
x * 1.15 Science as Attire
x * 1.16 Fake Causality
x * 1.17 Semantic Stopsigns
x * 1.18 Mysterious Answers to Mysterious Questions
* 1.19 The Futility of Emergence
* 1.20 Say Not “Complexity”
* 1.21 Positive Bias: Look Into the Dark
* 1.22 My Wild and Reckless Youth
* 1.23 Failing to Learn from History
* 1.24 Making History Available
* 1.25 Explain/Worship/Ignore?
* 1.26 “Science” as Curiosity-Stopper
* 1.27 Applause Lights
* 1.28 Chaotic Inversion



Cause the Singularity (read all 21 entries…)
Existential Risk Reduction - A Foundational Reading List

I’m working to deepen my knowledge of existential risk reduction.

These are books and papers that were recommended to me by a researcher in the field. I’ve read the two popular books. I plan to read and reflect on the rest of the material during the next month.

x Cialdini’s book “Influence” [read in Lake Zurich in 1998(!)]
x Singularity Is Near: skim [read in Bali + Japan in 2009]
x “Cognitive Biases Potentially Affecting Judgment of Global Risks” http://yudkowsky.net/rational/cognitive-biases [read in Australia on March 30, 2010]
“Artificial Intelligence as a Positive and Negative Factor in Global Risk” http://yudkowsky.net/singularity/ai-risk
x An Intuitive Explanation http://yudkowsky.net/rational/bayes [read once—need to re-read]
x A Technical Explanation http://yudkowsky.net/rational/technical [read once—need to re-read]
x “Are you living in a computer simulation?” Nick Bostrom http://www.simulation-argument.com/simulation.html [read in Australia on March 31, 2010]
Nick Bostrom titles + abstracts http://www.nickbostrom.com/
Robin Hanson titles + abstracts http://hanson.gmu.edu/
Stephen Omohundro’s paper “The Basic AI Drives” http://portal.acm.org/citation.cfm?id=1566226
Russell and Norvig “AI: A Modern Approach” (3rd Edition) http://books.google.com/books?id=8jZBksh-bUMC
Eliezer’s post sequences http://wiki.lesswrong.com/wiki/Sequences



Give 1000 cheers (read all 6 entries…)
900 cheers

Only 100 cheers left to reach my goal of giving 1000 cheers. :)


