Saving the Steelman

 

Steelmanning is addressing the best form of the other person’s argument, even if it’s not the one they presented, but Ozy points out that in practice, it doesn’t work as well as intended. Perhaps Alice doesn’t understand Bob’s argument as well as she thinks she does, and ends up with a “steelman” that is, in fact, weaker than Bob’s original argument (I haven’t seen this myself). Or, and I have seen this, Bob comes up with the version of Alice’s argument that makes most sense to him, based on his premises and worldviews. But that’s still pretty valuable! It’s the skill of translating an argument from one basis to another, one worldview to another. Of course, not everything will translate, but it’s great if people push themselves to see whether their premises allow them to accept an argument instead of just rejecting any argument built on different assumptions.

From Ozy’s comment section:

People don’t have to be stupid to be wrong, nor (and this is the heart of steelmanning) do they have to start with the same premises to come up with a worthwhile argument, even if it’s not great as presented.

While that’s a good personal habit, it might not be particularly useful in conversation, and neither is saying “I hear your argument. Here’s a better one.” Both carry a significant risk of conveying condescension.

Perhaps “real steelmanning is being able to put other people’s viewpoints in words they themselves find more compelling than their own arguments”, and that certainly sounds great. It’s a restatement of Rapoport’s first rule:

You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

As Ozy says, that’s hard and rare in conversation. And where Luke Muehlhauser sees it happening is in papers written not from one thinker to another, but by each for a general audience. So I think we’re eliding a set of important differences.

As always, things depend on context and on your goals.

  • Are you interested primarily in truth-seeking or a compassionate and full understanding of your interlocutor’s position?
  • Do you want to improve your model of the world or have access to new ones?
  • Do you want to improve your hedgehog skills or your fox skills?
  • Are you in a conversation with the person you’re steelmanning, thinking about something you’ve read or heard, or explaining something you’ve read or heard to a third party?
  • Are you interested in the best argument for a position from *your* perspective or *their* perspective?

There’s a flowchart waiting to be made.

IF you want to understand what an argument feels like from the inside, appreciate the beauty and specialness of someone’s position, and be able to engage really compassionately – whether in active conversation or in explaining a view to someone else – the Ideological Turing Test is for you. Do you really know what it’s like to believe that fetuses are morally equivalent to people? To believe that AI Risk is existentially important? To want to vote for Donald Trump? To really like Hillary Clinton as a candidate, and not be voting for her as a lesser evil?

I agree with Jonathan Nathan that anyone explaining a philosophical or religious position to someone for the first time, or who is in the position of a teacher, ought to present those positions as genuinely compelling, and the ITT can help. (Though it’s worth noting that in conveying that a position is actually plausible, affect and pathos may matter as much as or more than content.) (Also, you can absolutely convey the wonder of a belief from the outside, with lots of appreciative language – “The ritual observances of Orthodox Judaism have a beauty stemming from their long history” – but that may not make it sound plausible.)

For your own thinking, the ITT gives you the chance to expand your horizons, gain access to more models, and generate new hypotheses, but it’s probably more important for your compassion, and for the sense it gives you of what it’s like to think like someone else. It is a very good thing to understand where others are coming from, but it is also a good thing not to assume that the most understanding view is the correct one. The ITT is less truth-seeking, more understanding-seeking. It’s about the value of other people’s beliefs and thought patterns, even if they’re not correct or true.

IF you hear an argument you think is wrong, but you don’t want to discount the possibility of the position being true, or there being value somewhere in the argumentation, steelmanning is your choice.

From Eliezer Yudkowsky’s Facebook:

“Let me try to imagine a smarter version of this stupid position” is when you’ve been exposed to the Deepak Chopra version of quantum mechanics, and you don’t know if it’s the real version, or what a smart person might really think of the issue. It’s what you do when you don’t want to be that easily manipulated sucker who can be pushed into believing X by the manipulator making up a flawed argument for not-X that they can congratulate themselves on skeptically being smarter than. It’s not what you do in a respectful conversation.

From Ozy’s comment section:

tl;dr: IMHO, “steelmanning” is not great if you’re interested in why a particular person believes something. However, it is actually pretty great to test one’s own preconceptions, and to collect strong arguments when you’re interested in the underlying question.

Worth noting that in this case, you can work on constructing better arguments yourself, either from your own position or from someone else’s (so closer to the ITT), OR you can simply be charitable (I’ve often wondered how charity and steelmanning intersect) and assume better arguments exist, and then go find them. As Ozy says, “You don’t have to make up what your opponents believe! As it happens, you have many smart opponents!” Both are valuable. The former pushes you to think in new ways, to understand different hypotheses and think critically about the causal and logical consequences of premises. If you are very good at this, you might come up with an argument you wouldn’t have encountered otherwise. The latter inculcates more respect for the people who disagree with you and for the body of knowledge and thought they’ve already created, and is likely to lead to a more developed understanding of that corpus, which will probably include arguments you would never have thought of. Both protect you from the inoculation effect.

More importantly, both push you to be a better and deeper thinker. Charity gives you an understanding of others’ thoughts and a respect and appreciation for them, but the bulk of the value is for yourself, and your own truth-seeking as you sort through countless arguments and ideas. If you start with different premises, you might make other people’s arguments better, but mostly this is about what makes the most sense to you, and discovering the most truthful and valuable insights in the midst of noise.

IF you thought, as I claimed originally, that this was all a way to have better conversations and you’re wondering where it’s all gone wrong, perhaps you are seeking collaborative conversations. If you’re finding that your conversations are mostly arguments rather than discussions, all the charity and steelmanning and ITT-ing in the world might not help you (though I’ve found that being really nice and reasonable sometimes seriously de-escalates a situation). It depends also on how willing your interlocutor is to do the same kind of things, and if the two (or more) of you are searching for truth and understanding together, many magical things can happen. You can explain your best understanding of their position from both your and their perspective, and they can update or correct you. They can supply evidence that you didn’t know that helps your argument. You can “double-crux”, a thing I just learned about at EA Global that CFAR is teaching. You can be honest about what you’re not sure about, and trust that no one will take it as an opportunity to gloat for points. You can point out places you agree and together figure out the most productive avenues of discourse. You can ask what people know and why they think they know it. This is probably the best way to get yourself to a point where you can steelman even within conversations. It’s both truth-seeking and understanding-seeking, fox-ish and hedgehog-ish, and if I’m making it sound like the best thing ever, that’s because I think it is.

There are many reasons to have less fun and less compassionate and less productive and less truth-finding conversations than these, because we live in an imperfect world. But if you can surround yourself with people who will do this with you, hold on tight.

 
