
How do you think rhetoric works?


A recent article by Elizabeth Kolbert in The New Yorker seeks to explain “Why Facts Don’t Change Our Minds.” The article is in reference to several new books written by cognitive scientists. The first, by Hugo Mercier and Dan Sperber, called The Enigma of Reason, recounts numerous psychological studies examining the various ways in which people hold on to their views even when presented with evidence that those views are totally incorrect. This includes familiar problems like confirmation bias and underlies well-worn advice such as the importance of making a good first impression. Mercier and Sperber’s contribution to this topic is to provide a kind of evolutionary explanation for why human minds work this way.

Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

(I have no idea what the “ö” is about.) And what are those problems? Essentially “to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.”

I’ll get back to that in a second.

The article then turns to another book, by Steven Sloman and Philip Fernbach, called The Knowledge Illusion: Why We Never Think Alone, and specifically to a concept they term “the illusion of explanatory depth.” Their first example is the toilet. Most people imagine they know how a toilet works, but it turns out to be quite complex. As I would put it, this is how technologies, discourses, and institutions are meant to function. They expand our capacities for thought and agency by embedding those capacities into networks. I do not need to know how to build a computer or a network in order to write a blog post. Contrast that with how knowledge works among bees: a bee finds a flower and can instruct another bee, through a dance, where to find that flower. But the bee that learns that dance can’t teach it to another bee. (I’m channeling Kittler here, I think.) For us, information works differently. Technologies work differently. I’m not exactly sure what the illusion is, however. Do people really think they know how the technologies around them work? Kolbert goes on to bring this to a kairotic moment:

If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

One more step. Kolbert turns to a third book, by Jack and Sara Gorman, Denying to the Grave: Why We Ignore the Facts That Will Save Us, which tries to figure out how to overcome problems like confirmation bias and what they see as its physiological foundations. It turns out not to be that easy. As Kolbert concludes, “Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science.”

Gee, that is a poser. But maybe we can start with some of the built-in confirmation biases at work here.

  1. Reason doesn’t work the way it is imagined to work here.
  2. Because reason doesn’t work that way, science doesn’t work as it is imagined here either.
  3. If you have a poor model of science and reason, that model isn’t going to be very effective in addressing the concern here: how people become convinced of views and then hold on to them in the face of compelling evidence to the contrary.

Let’s return to Kolbert’s ACA example and insert the most inane version of it. Let’s say I am opposed to “Obamacare” (because I hate anything with Obama’s name attached to it) but have no idea that Obamacare and the ACA are the same thing. I rely on the ACA and I’m happy with it, but I hate Obamacare and want it done away with. Can you get any stupider than that? I don’t know. Are there warnings on gas pumps not to drink the gasoline? Even so, this imagined person’s position is not “baseless.” There is reasoning. It’s a straightforward syllogism (sketched more formally after the list):

  1. I hate all things related to Obama.
  2. Obamacare is related to Obama.
  3. Socrates needed better health care.
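
Set the joke in the last line aside: the intended inference is a standard categorical syllogism, and its form is valid. Here is a minimal sketch of that form in predicate-logic notation; it is my own reconstruction, not anything taken from Kolbert’s article.

```latex
% A reconstruction (mine, not Kolbert's) of the inference the imagined voter runs.
% The logical form is valid; the trouble lies in the premises, not the deduction.
\begin{align*}
  &\forall x\,\bigl(\mathrm{RelatedToObama}(x) \rightarrow \mathrm{Hated}(x)\bigr)
     && \text{major premise: I hate all things related to Obama}\\
  &\mathrm{RelatedToObama}(\mathrm{Obamacare})
     && \text{minor premise}\\
  &\therefore\ \mathrm{Hated}(\mathrm{Obamacare})
     && \text{conclusion: I hate Obamacare}
\end{align*}
```

What fails here is not the deduction but a premise and a missing piece of information (that the ACA is Obamacare), which is the sense in which the position is reasoned without being well grounded.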

Maybe when this person figures out that the ACA and Obamacare are the same thing, that opinion shifts, but perhaps not as far as you’d think. This is the underlying issue with all of the major areas of political disagreement: education, health care, human rights, climate change, economic regulation, foreign policy, etc.

Effectively, the modern state insists that citizens accept that their world operates in ways that they cannot directly experience and can never fully understand. Even the most educated person in the world can only have understanding of a very narrow slice of the world, and only then through ongoing participation in a complex and extended system of human and nonhuman partners. Even with this, the knowledge we produce is never fully “true”; it is only the best construction we can manage. It’s a construction over which experts disagree and which is continually revised and refined. This comparatively fragile and carefully wrought expert knowledge then butts up against the felt, but also reasoned, sense of reality as it is directly experienced by citizens, both individually and in small communities (families, friends, co-workers, etc.).

So on the one hand you have dozens of people from a variety of intelligence agencies reviewing hundreds of reports and thousands of data points to determine the likelihood that an immigration ban based on nationality would be an effective deterrent against terrorism in the US. You end up with lots of conversations and data, and conclusions that are carefully parsed and reasoned. But even though the conclusion may be straightforward (i.e., this won’t work), working through the reasoning is hard if not impossible if you aren’t an expert. On the other hand, you have citizens and their friends who feel threatened, whose direct experience with Arabs is quite limited if not non-existent, and who have a logical argument (albeit one whose premises rest on misinformation). To compare it to some technological arguments: people might feel that seat belts in cars or motorcycle helmets are unnecessary, or that owning a gun makes them safer. These are equally examples of the illusion of explanatory depth: people believe they know how these mundane technologies function (and what their dangers are) when they do not.

None of that answers the question of how to change people’s minds. Obviously it isn’t easy. But if you realize that people gain confidence in their worldviews through networks of humans and nonhumans, then shifting that confidence probably means altering those networks and their strength. One might say that the Trump administration is seeking to weaken some of the networks supported by the mainstream media. Of course that’s not very subtle, and it probably only serves to strengthen the faith of his opponents in those networks. Different exertions of political power might work. If you’re not the president, however, you will need a different strategy.

 

