Saving the Steelman

 

Steelmanning is addressing the best form of the other person’s argument, even if it’s not the one they presented, but Ozy points out that in practice, it doesn’t work as well as intended. Perhaps Alice doesn’t understand Bob’s argument as well as she thinks she does, and ends up with a steelman that is, in fact, Bob’s original argument (I haven’t seen this myself). Or, and I have seen this, Bob comes up with the version of Alice’s argument that makes most sense to him, based on his premises and worldviews. But that’s still pretty valuable! It’s the skill of translating an argument from one basis to another, one worldview to another. Of course, not everything will translate, but it’s great if people push themselves to see if their premises allow them to accept an argument instead of just rejecting any argument built on different assumptions.

From Ozy’s comment section:

People don’t have to be stupid to be wrong, nor (and this is the heart of steelmanning) do they have to start with the same premises to come up with a worthwhile argument, even if it’s not great as presented.

While that’s a good personal habit, it might not be particularly useful in conversation, and neither is saying “I hear your argument. Here’s a better one.” Both carry a significant risk of coming across as condescending.

Perhaps “real steelmanning is being able to put other people’s viewpoints in words they themselves find more compelling than their own arguments”, and that certainly sounds great. It’s a restatement of Rapoport’s first rule:

You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

As Ozy says, that’s hard and rare in conversation. And where Luke Muehlhauser sees it is in papers written not from one thinker to another, but by each to a general audience. So I think we’re eliding a set of important differences.

As always, things depend on context and on your goals.

  • Are you interested primarily in truth-seeking or a compassionate and full understanding of your interlocutor’s position?
  • Do you want to improve your model of the world or have access to new ones?
  • Do you want to improve your hedgehog skills or your fox skills?
  • Are you in a conversation with the person you’re steelmanning, thinking about something you’ve read or heard, or explaining something you’ve read or heard to a third party?
  • Are you interested in the best argument for a position from *your* perspective or *their* perspective?

There’s a flowchart waiting to be made.

IF you want to understand what an argument feels like from the inside, and appreciate the beauty and special-ness of someone’s position, and want to be able to engage really compassionately – whether in active conversation or in explaining a view to someone else – the Ideological Turing Test is for you. Do you really know what it’s like to believe that fetuses are morally equivalent to people? To believe that AI Risk is existentially important? To want to vote for Donald Trump? To really like Hillary Clinton as a candidate, and not be voting for her as a lesser evil?

I agree with Jonathan Nathan that anyone explaining a philosophical or religious position to someone for the first time, or who is in the position of a teacher, ought to present those positions as genuinely compelling, and the ITT can help. (Though it’s worth noting that in conveying that a position is actually plausible, affect and pathos may matter as much as or more than content.) (Also, you can absolutely convey the wonder of a belief from the outside, with lots of appreciative language – “The ritual observances of Orthodox Judaism have a beauty stemming from their long history” – but that may not make it sound plausible.)

For your own thinking, ITT gives the chance to expand your thinking, have access to more models and generate new hypotheses, but it’s probably more important for your compassion, and the way it gives you a sense of what it’s like to think like someone else. It is a very good thing to understand where others are coming from, but it is also a good thing to not assume that the most understanding view is the correct one. ITT is less truth-seeking, more understanding-seeking. It’s about the value of other people’s beliefs and thought patterns, even if they’re not correct or true.

IF you hear an argument you think is wrong, but you don’t want to discount the possibility of the position being true, or there being value somewhere in the argumentation, steelmanning is your choice.

From Eliezer Yudkowsky’s facebook:

“Let me try to imagine a smarter version of this stupid position” is when you’ve been exposed to the Deepak Chopra version of quantum mechanics, and you don’t know if it’s the real version, or what a smart person might really think of the issue. It’s what you do when you don’t want to be that easily manipulated sucker who can be pushed into believing X by the manipulator making up a flawed argument for not-X that they can congratulate themselves on skeptically being smarter than. It’s not what you do in a respectful conversation.

From Ozy’s comment section:

tl;dr: IMHO, “steelmanning” is not great if you’re interested in why a particular person believes something. However, it is actually pretty great to test one’s own preconceptions, and to collect strong arguments when you’re interested in the underlying question.

Worth noting that in this case, you can work on creating or constructing better arguments yourself, either from your own position or from someone else’s (so closer to ITT), OR you can simply be charitable (I’ve often wondered how charity and steelmanning intersect) and assume better arguments exist, and then go find them. As Ozy says, “You don’t have to make up what your opponents believe! As it happens, you have many smart opponents!” Both are valuable. The former pushes you to think in new ways, to understand different hypotheses and think critically about the causal and logical consequences of premises. If you are very good at this, you might come up with an argument you wouldn’t have encountered otherwise. The latter inculcates more respect for the people who disagree with you and the body of knowledge and thought they’ve already created, and is likely to lead to a more developed understanding of that corpus, which will probably include arguments you would never have thought of. Both protect you from the inoculation effect.

More importantly, both push you to be a better and deeper thinker. Charity gives you an understanding of others’ thoughts and a respect and appreciation for them, but the bulk of the value is for yourself, and your own truth-seeking as you sort through countless arguments and ideas. If you start with different premises, you might make other people’s arguments better, but mostly this is about what makes the most sense to you, and discovering the most truthful and valuable insights in the midst of noise.

IF you thought, as I claimed originally, that this was all a way to have better conversations and you’re wondering where it’s all gone wrong, perhaps you are seeking collaborative conversations. If you’re finding that your conversations are mostly arguments rather than discussions, all the charity and steelmanning and ITT-ing in the world might not help you (though I’ve found that being really nice and reasonable sometimes seriously de-escalates a situation). It depends also on how willing your interlocutor is to do the same kind of things, and if the two (or more) of you are searching for truth and understanding together, many magical things can happen. You can explain your best understanding of their position from both your and their perspective, and they can update or correct you. They can supply evidence that you didn’t know that helps your argument. You can “double-crux”, a thing I just learned about at EA Global that CFAR is teaching. You can be honest about what you’re not sure about, and trust that no one will take it as an opportunity to gloat for points. You can point out places you agree and together figure out the most productive avenues of discourse. You can ask what people know and why they think they know it. This is probably the best way to get yourself to a point where you can steelman even within conversations. It’s both truth-seeking and understanding-seeking, fox-ish and hedgehog-ish, and if I’m making it sound like the best thing ever, that’s because I think it is.

There are many reasons to have less fun and less compassionate and less productive and less truth-finding conversations than these, because we live in an imperfect world. But if you can surround yourself with people who will do this with you, hold on tight.

 


You Want a Space for Political Incorrectness? You Got It

Last Sunday, I laid out what I thought a proper space for “politically incorrect” questions and opinions would look like, because such a space can go drastically, cruelly, wrong. Now, I’ve decided to make one. I’m making a subreddit where those questions and opinions can get answers.

There are many reasons people might have a question about race, sex, disability, or related issues they’re afraid to ask their friends, family or teachers. They may not know how to phrase it respectfully. They may have a question that they know will offend but that they’re desperate to know the answer to. They may actually be bigots who are looking to make people mad. For whatever reason, I think there should be a space where, if they abide by principles of respect, civility and good faith, they should get their questions answered. The subreddit I intend to create will be an educational and discussion-based place. Questions will be answered without judgement. Answers will explain how and why some actions or words are appropriate or not, and place questions of bigotry or prejudice in their proper academic, sociological, political, economic and historical context. They will inform and educate while minimizing harm to the relevant marginalized groups. They will include concrete tips, approaches and scripts, so as to really help people move forward in the world. They will be respectful, civil and charitable, perhaps far more charitable than what is deserved. After all, charity can be totally badass activism.

This will be its own space, with its own rules. I do not think these rules make sense elsewhere, nor should people have to abide by them elsewhere. But I like the idea of a place where everyone agrees to be just ridiculously civil and respectful, to use their emotional energy or their privilege or their desire to educate to great effect. This is not the only form of education and activism. There are many others, which are crucial and vital and must exist as well. But this is a form that I think there isn’t enough of. Tumblr upon tumblr will tell people that it is their job to educate themselves about social justice issues. That may be right. So this is one place they can do it.

Some of the rules:

  • No slurs unless you’re asking about them
  • Disrespectful/cruel/obnoxious questions and comments get deleted
  • Unhelpful/uncharitable/not-intended-to-educate responses get deleted, even if they’re completely correct
  • The mods enforce these rules and give users suggestions on how to be more respectful or helpful.

You can find more of the rules here and at the actual subreddit when it goes live.

If you think this is important and useful, if you agree largely with what I’ve written here, and you want to get involved, look out for the link when the subreddit goes live! And if you want to be even more involved, I want you to be a moderator for the subreddit. Just answer a few questions here, and if you have the same vision I do, you’re in!

I think this could do some real good. Here’s hoping!

———————————————————————————

P.S. If anyone is wondering why I think this is so important, here’s something I wrote in a blog post about Social Justice education some time ago:

I do not deny for a second that it can seem like a waste of time, that it can be painful, and that rather more often than we might hope, the people we’re arguing with are not arguing in good faith. That is why we leave it to individuals to decide whether it is worth their time and effort. But those not willing to do this kind of work should not stand in its way. They should not base their arguments on assumptions others do not share and be surprised when they are not understood. They should not make it more difficult for others to do the challenging work by interrupting ongoing conversations with jeering and mockery. And most of all, while there are perfectly good reasons to stop being able to have a conversation or to not enter one in the first place, no one should engage in arguments with people who might be persuaded if they have no intention of taking the process seriously. Ideas rise and fall every day in the public sphere, and there’s no reason to lose arguments or adherents because some don’t think the work of public reason is worth doing properly.

If you want to know more about my take on activism, social justice, better arguing and charity, check out these links:

The Dark Arts, or How to get more rational by taking online quizzes

Remember that quiz you all took? This one?

Let’s talk about it.

1. How long will this quiz take you?
I realize that it’s hard to estimate when you have no idea what the quiz is about, and I’m also aware that I may have skewed this by promising it would take under 10 minutes, but this question was meant to illustrate the planning fallacy. (A paper about it here and more information here). People tend to underestimate how much time something will take them, even if they have experience of going over time. This applies to a wide range of activities, from carpentry to origami, and does not apply to disinterested observers guessing about how long something will take someone else.




When we were discussing it at our club meeting, one of the club officers, Mike Mei, pointed out that this might also be an example of the Dunning-Kruger effect, in which unskilled (in a given area) people overrate their abilities in that area, essentially because they lack the knowledge to see where they have failed. There is a corollary effect, in which skilled people underrate their abilities, because they spend time with people even more skilled than they are and have a better understanding of their own limitations.

The answers I got from SA were, in minutes: 10, 2, 10, 2, 5, 3, 2, 1 (possibly a 10), and blank. They all took between 3 and 7 minutes, so it’s possible the Dunning-Kruger effect was stronger than the planning fallacy here.

(1.b.) On the original quiz, which I gave to the SA, the first question also had: How many questions do you expect to get right? which was meant to illustrate much the same points. I took this off since not all the questions have right and wrong answers.

2. Samantha was part of the Intervarsity Christian Fellowship in college and was abstinent until marriage. She has four children and does not use birth control. Is it more likely that she is a teacher, or a Christian and a teacher?

This is a spin on Linda being a feminist and a bank teller, a classic thought experiment/trick question in psychology. One example in a psychology presentation can be found here. It is a demonstration of the representativeness heuristic, in which people estimate the probability of an event by how closely it resembles the stereotype they have in mind, rather than by the base rates involved. In this case, people focus on the information I gave, which points strongly to Samantha being a Christian. This gives us a deviation from a Bayesian calculation, in some cases because we neglect the base rates of an event (this would be a prior), but in this case because of the conjunction fallacy. This fallacy occurs when we assume that a more restricted situation is more likely than a more general one. In particular, if we say that Samantha is more likely to be a Christian and a teacher, then we are claiming that the probability that Samantha is a Christian AND a teacher is greater than the probability that Samantha is a teacher of any kind, which is impossible: a conjunction can never be more probable than either of its parts. The wikipedia article has the math, but what you really need to see is this:

If A is the set of possibilities in which Samantha is a teacher of any kind and B is the set in which she is a Christian, the overlap (C), in which she is both, can’t be larger than either A or B.

To my disappointment, SA answered with 2 saying teacher, 5 saying Christian and teacher, and two people rebelling against the two options I gave them to say:

“Just a Christian. Women must be oppressed and pregnant. Quote the Bible”
And
“The probabilities seem similar, though one should never take professed faith at face value.”

To be fair, these answers actually have a lot of merit. Even though it wasn’t the point of the question, it probably is much more likely that Samantha is a Christian than that she is either a teacher or both. Given that, it’s probably also true that the probabilities of her being a teacher and both being a teacher and a Christian are similar (if the probability of her being a Christian is high enough). Don’t believe me? Pick some probabilities at random and do the math! It’s just multiplication, I promise.
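If you want to see it concretely, here’s a quick sketch with made-up numbers (the specific probabilities are just illustrative, and I’m assuming independence for simplicity):

```python
# Made-up probabilities, just to illustrate the conjunction rule.
p_teacher = 0.05       # P(Samantha is a teacher)
p_christian = 0.95     # P(Samantha is a Christian), given the description
p_both = p_teacher * p_christian   # 0.0475, assuming independence

# A conjunction can never be more probable than either of its parts:
assert p_both <= p_teacher and p_both <= p_christian

print(p_teacher, p_both)   # 0.05 vs 0.0475 -- close, but "teacher" always wins
```

With P(Christian) that high, the conjunction comes out nearly as large as P(teacher) alone, which is the commenter’s point, but it can never actually exceed it.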

I find the last bit particularly intriguing. Perhaps this intrepid secularite is referring to the phenomenon of belief in belief?

By the way, if you were confused about all that Bayes talk, here’s a fairly simple explanation of Bayesian probability.


3. Do you think the percentage of countries in the UN that are African countries is higher or lower than 65%/10%? What is the percentage of countries in the UN that are African countries?

If you’ve already checked out both instantiations of the quiz, you probably realized this is one of the places they deviated. This is supposed to illustrate the anchoring effect, in which our analysis of what answer is reasonable is heavily affected by the information we’re given to start with. Sometimes this is because we adjust from that number, and sometimes because our brains remember information consistent with the number we start with. This can occur in context, as in this question or a starting bid for a salary, or out of context, as in spinning a Wheel of Fortune before answering the question (this link goes to a generally great paper). Crazy, isn’t it? But it’s true. It’s also worth pointing out that, despite the claim of some SA members that science people might be less prone to the fallacy than humanities people, even those who are reminded of the anchoring effect and told to avoid it are subject to it, at least when the anchor comes externally (as in this quiz). However, with internally created anchors (if I hadn’t given the first part of the question), warnings and high Need for Cognition do lower the extent of the effect.

SA Answers:
65%: 10%, 10%, 30%, 30%, 20% – Mean: 20%
10%: 18%, 52%, 8%, 13% – Mean: 22.75%

Oddly, the SA at UofC appears to be immune to the anchoring effect. Or something.

4. (5 seconds) Guess the value of the following arithmetical expression:  8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = ? OR 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 = ?

This is the same issue. People tend to look at the first few numbers, multiply, and then adjust, since they’re asked to do it in 5 seconds.
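To see how far the first few partial products anchor you below the real answer, here’s a quick check (my own illustration, not part of the original quiz):

```python
from math import prod

print(prod(range(1, 9)))   # 40320 -- the true value of 8!
print(8 * 7 * 6)           # 336   -- where the descending group starts adjusting from
print(1 * 2 * 3)           # 6     -- where the ascending group starts adjusting from
```

In the classic result, the group that starts from 8 x 7 x 6 anchors higher than the group that starts from 1 x 2 x 3, and both fall well short of 40,320.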

Answers from SA:
Descending order: 400,000; 1,000; 1,024; 500 – Average: 100,631 (obviously not particularly useful given the high variance). Without the outlier, the average is 841.33.
Ascending order: 1,000; 900; 16,320 (calculated, not guessed); YAY MATH!; 4,000 – Average: 5,555

Again, not entirely expected, but that’s ok.

Source: I got both of these questions straight from here: http://lesswrong.com/lw/j7/anchoring_and_adjustment/

5. 1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she has breast cancer?

6. 1500 out of every 10,000 men at age forty who participate in routine screening have prostate cancer. 1300 of these 1500 men will get positive screening tests. 8,000 of the men who do not have prostate cancer will also get positive screening tests. A man in this age group had a positive test in a routine screening. What is the probability that he has prostate cancer?

You probably noticed that these are the same question, one with percentages, one with numbers. Apologies for the typo in the second question, by the way; that’s been fixed. These questions ask for a Bayesian calculation of probability. As someone has pointed out in the comments, it might seem like the test is asking for mathematical proficiency rather than rational abilities. I take the criticism willingly, but nothing on this quiz requires more than basic multiplication. Knowing how to set up a Bayesian calculation may be mathematical in some sense, but I would argue that it’s also simply something a rational person should know how to do, in the same way that calculating iterated probabilities of coin flips requires multiplication but if you think that the probability of getting heads at least once when you flip a coin twice is .5 + .5 = 1, there’s a problem that goes beyond arithmetic. This will become even more clear when I demonstrate the answer to this problem.


The way this works is as follows. We know that some people have cancer and some don’t, and some people get positive tests and some don’t. So we set up a table. The answers will be put in as (breast cancer problem numbers, prostate cancer problem numbers).

              | Has Cancer | Doesn’t Have Cancer
Positive Test |            |
Negative Test |            |


So for prostate cancer, the numbers are all given:

              | Has Cancer | Doesn’t Have Cancer
Positive Test | ( , 1300)  | ( , 8000)
Negative Test | ( , 200)   | ( , 500)

For breast cancer, we have to do some calculations. For ease’s sake, let’s pick 10,000 as our total number of people (though it doesn’t matter). So of 10,000 women, 1%, or 100, have breast cancer, so our left column must add up to 100. 80% of these women will get a positive test, so 80 of them will and 20 won’t. Now we’re considering 9900 women who don’t have breast cancer, 9.6% of whom (or about 950) will get a positive test anyway, leaving 8950 for the final quadrant.


              | Has Cancer | Doesn’t Have Cancer
Positive Test | (80, 1300) | (950, 8000)
Negative Test | (20, 200)  | (8950, 500)


So if you get a positive test, you know you’re in the top row. If you’re a woman who got a positive mammography, you have an 80/(80+950) ≈ 7.8% chance of having cancer, and if you’re a man who got a positive test for prostate cancer, you have a 1300/(1300+8000) ≈ 14% chance of having cancer.
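If you’d rather let a few lines of arithmetic fill in the table, here’s a minimal sketch of the same calculation (the function name and the choice of a 10,000-person population are mine):

```python
def positive_posterior(base_rate, hit_rate, false_positive_rate, population=10_000):
    """P(cancer | positive test), built from the same 2x2 table of counts."""
    sick = population * base_rate
    healthy = population - sick
    true_positives = sick * hit_rate                  # sick people who test positive
    false_positives = healthy * false_positive_rate   # healthy people who test positive
    return true_positives / (true_positives + false_positives)

# Breast cancer problem: 1% base rate, 80% detection, 9.6% false positive rate
print(positive_posterior(0.01, 0.80, 0.096))              # ~0.078, i.e. about 7.8%

# Prostate cancer problem: 1500/10000 base rate, 1300 of 1500 detected, 8000 of 8500 false positives
print(positive_posterior(0.15, 1300 / 1500, 8000 / 8500))  # ~0.14, i.e. about 14%
```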

From SA, who only got the breast cancer problem: 8/9 = 88%, 9.8%, 70.4%, 9.5%, 90.4%, 10%, 90.4%

Not to engage in scare rationalism here, but this is a problem. This means that women who get positive mammographies might be overestimating their probability of having cancer by an order of magnitude, and therefore undergoing possibly unnecessary biopsies, tests, chemotherapy, radiation, and hospital visits, with the fear, stress and bills that come along with them. Not good, people, not good.

And look what just came out: NYTimes: Considering When It Might Be Best Not to Know About Cancer


7. There are four cards on a table, showing a 2, a 9, a white face, and a black face. Every card has one side which is white or black and one side with a number on it. The Rule: Every card with a white side must have an even number on the other side. How many cards (and which ones) must you flip in order to check if all four cards follow this rule?

8. You are an employee at an all-ages party venue, and people are allowed to come in with drinks. You see a group of four guys coming in, all carrying red Solo cups. One has an ID which says he’s 19, one is drinking orange juice, one is drinking beer, one has an ID which says he’s 24. Assuming you are accurate in your assessment of the drinks and all the IDs are real, whose IDs/drinks do you check in addition to the information you already have to make sure no one is drinking illegally?




Congratulations to whoever realized that these are the same problem. In both cases you have four instantiations of an element of the problem with two pieces of information associated with it, only one of which you currently know. (4 cards, each has a number and a color; 4 people, each has a drink and an age). You are given a rule: If x, then y. If white card, then even number on the opposing side. If drinking alcohol, then at least 21. Now, if-then statements with x and y can be written four ways.
1. If x, then y is the original statement.
2. If y, then x is the converse.
3. If not x, then not y is the inverse.
4. If not y, then not x is the contrapositive.

What you’ll notice if you’ve taken logic is that 1 and 4 are equivalent, and 2 and 3 are equivalent. We have a rule, so we have to check both it and its contrapositive. In these cases, “if white, then even” must be checked (so check the white card) and “if drinking alcohol, then at least 21” must be checked (so check the guy with beer), and also the contrapositives: “if odd (not even), then black (not white),” so you check the ‘9’ card, and “if under 21 (like the 19-year-old), then not alcohol,” so you check the 19-year-old’s drink. The other ones don’t matter! So what if the even number has a black face on the other side? That’s like saying that if you’re 21 and above you must drink alcohol!

If you totally didn’t follow this, check out this link.
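For the programmatically inclined, here’s a tiny sketch of the card version (the representation is my own): a card only needs flipping if something on its hidden side could falsify the rule.

```python
# Visible faces: two colors and two numbers; the other side of each card is hidden.
cards = ["white", "black", 2, 9]

def must_flip(visible_face):
    """Flip a card only if its hidden side could break the rule
    'every white card has an even number on the other side'."""
    if visible_face == "white":
        return True    # the hidden number might be odd
    if isinstance(visible_face, int) and visible_face % 2 == 1:
        return True    # the hidden color might be white
    return False       # black cards and even numbers can't falsify the rule

print([card for card in cards if must_flip(card)])   # ['white', 9]
```

The drinking-age version is the same check with “white” swapped for “drinking beer” and “odd” swapped for “under 21”.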

The cool thing about this question, called the Wason Selection Task, is that people are universally pretty bad at the card example and pretty good at the people example. The explanation given is that people are better at thinking about people and cheating (people possibly breaking rules) than abstract logical concepts. Maybe you agree?

SA Answers: Check the ‘2’ card, check the beer & juice; Check the ‘2’ and white cards, check the 19 year old’s drink and beer drinker’s ID; Check all the cards, check all the people except the 24 year old; Check the black card, and the one drinking orange juice and the one drinking beer; Check the ‘2’ card, check all of the people; Check the ‘2’ card, check the one drinking orange juice and the one drinking beer; Check all the cards, check the one drinking orange juice and the one drinking beer; Check the ‘2’ card, the ‘9’ card and the white card; check everyone’s drink.

Caveat: I phrased the question differently to them, and many thought that part of your job was to get the under-18s out as well as check drinking legality, so you can’t draw that much from this sample. I would like to point out, though, that on the original Wason selection test part (the cards), no one got it right. Interesting…

9. A fair coin is tossed repeatedly until a tail appears, ending the game. The pot starts at 1 dollar and is doubled every time a head appears. You win whatever is in the pot after the game ends. Thus you win 1 dollar if a tail appears on the first toss, 2 dollars if a head appears on the first toss and a tail on the second, 4 dollars if a head appears on the first two tosses and a tail on the third, 8 dollars if a head appears on the first three tosses and a tail on the fourth, etc. This game can be played as many times as you wish (with a fixed fee paid every time). How much would you pay to enter this game?

I won’t lie, this one’s pretty math-y. Basically, when you’re deciding whether to take a bet, you should calculate something called expected value, that is, what do you expect to win? If you have a 50% chance of winning $2 and a 50% chance of winning nothing, then your expected value is .5*2 +.5*0 = 1, so you should be willing to pay a dollar or less (probably less, since people are loss averse).

Same thing applies. You have a 50% chance of winning $1 (if it’s tails the first time), then a 25% chance of winning $2 (heads then tails), a 12.5% chance of $4, etc. The thing is that when you add up .5*1 + .25*2 + .125*4 + … you get .5 + .5 + .5 + .5 into infinity, and that adds up to infinity, which means the expected value from this bet is…infinity. Obviously there aren’t infinity dollars, and loss aversion plays a role here, but seriously, you should be willing to pay a lot of money to play this game. Crazy, huh?
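Here’s a minimal sketch of that divergent sum, plus a quick simulation (the 30-term cutoff and the 100,000 trials are arbitrary choices of mine):

```python
import random

# Each term of the expected value is (1/2)**k * 2**(k - 1) = 1/2, so the partial sums grow without bound.
print(sum((0.5 ** k) * (2 ** (k - 1)) for k in range(1, 31)))   # 15.0 after 30 terms; add more terms, get more

def play_once():
    payout = 1
    while random.random() < 0.5:   # heads: double the pot and flip again
        payout *= 2
    return payout                  # tails: game over, take the pot

trials = 100_000
print(sum(play_once() for _ in range(trials)) / trials)   # sample mean; noisy, and it creeps up with more trials
```

The simulated average is always finite, of course, but it keeps creeping upward as you add trials, which is the paradox in action.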

SA Answers: $1, $1, $0.50, $3, 5 euro, $0, $2, $10, $2.


10. A magazine you’re interested in has three/two subscription options: Which do you choose?

Last one, promise. This one’s pretty simple. It’s all here, really: http://tomyumthinktank.blogspot.com/2008/03/economics-of-irrationality-relativism.html. The basic idea is, we see that print/online sells for the same price as just print, so we think it’s a better deal, so we’re way more likely to pick it than if that second option weren’t there to favorably compare the third one to. Think about it next time you go to a restaurant! Cool, huh?

In SA, of the group that got three options, 1/5 chose the cheaper one, and of the group that got two options, 1/4 chose the cheaper one.

(Thanks to Mike Mei for pointing me to this question)

Thanks for sticking with me through all that. This was my first venture into quiz making. I welcome comments and criticisms in the comments. Please also tell me if you’d heard of the fallacies/biases before taking the test! If you’re interested in this stuff and want to try to become more rational, I recommend Less Wrong and this Wikipedia Page. There’s a whole amazing world out there!