Don't Eat With Your Mouth Full

Where can we live but days?

steepholm

Your Passengers Must Die
The trolley problem is not just a thought experiment - it's a practical issue, at least for the AI programmers charged with teaching self-driving cars whom to spare and whom to kill in ticklish traffic conditions. That, at least, is the premise of MIT's Moral Machine project.

The programmers are of course aware that there is more than one conception of what constitutes a moral decision, so they're crowd-sourcing their morality in the hope of creating different driving strategies according to the cultural priorities of different countries. If you click on the link above, you can add to their database.

Anyway, I was very interested to read this article, which crunches some of their data to show how priorities differ in different countries. For example, should the self-driving car choose to run over young people or old people?
[Chart: preference for sparing the young over the old, by country]

Far Eastern countries, perhaps under Confucian influence, are much more careful of the lives of the elderly, whereas the West is in general keener to preserve the lives of the young - perhaps on the individualistic principle that older people have already "had their turn". The data doesn't give us explanations, but such graphs are of course an open invitation to draw on national stereotypes.

What about the importance of sparing more lives rather than fewer?

[Chart: preference for sparing more lives over fewer, by country]

Again, there's largely an East-West split, with Westerners perhaps performing a kind of utilitarian calculus whereby three lives are worth three times as much as one. This doesn't mean of course that Japanese drivers will recklessly swerve into crowds, merely that they place less emphasis on numbers.

The one that interested me most was this one, concerned with whether one should spare pedestrians or passengers:

[Chart: preference for sparing pedestrians over passengers, by country]

Here, suddenly, China and Japan are at opposite extremes, and how! Chinese drivers see random pedestrians as far more expendable than the friends, family or colleagues who are presumably their passengers. (Note to self: look both ways on the streets of Shanghai.) Japan, by contrast, sacrifices passengers to pedestrians to a very marked extent.

The obvious explanation, it seems to me, lies in the uchi-soto (inside-outside) principle, which demands that outsiders be treated with preferential politeness and consideration. The nature of the in-group depends on context: it could be one's family, one's school, one's company. When speaking to an outsider about members of your own in-group, you use humble language; when referring to outsiders, you use polite language. For example, if I want to mention my son, I say "musuko", but your son would be "musuko-san". If I were the humblest employee at Sony, I would refer to my CEO as Yoshida, without any honorific, when speaking to people outside the company. (Inside the company, it would be a very different matter.)

Perhaps, for the purpose of the MIT experiment, passengers are regarded as "uchi", and pedestrians as "soto"? That's just a top-of-the-head theory, but I find it plausible.

I find the trolley problem to be such a completely artificial setup that I can't even contemplate the question. It goes in the rubbish bin along with the ticking-bomb terrorist problem and all such other nonsense.

The trolley problem is interesting as an exercise in narrative presentation. Why is it okay (as it is for most Americans, anyhow) to change the points so that the trolley heads towards one person, saving six, but not okay (for most Americans, anyhow) to shoot someone so that he falls onto the lever which will change the points and send the trolley down a track harmlessly, saving the six it would otherwise hit? Same question under the hood, but vastly different intuitions as to what to do.

I don't care. The whole setup is so artificial that I can't answer any of those questions. Insert long rant on the subject here.

It's certainly artificial in the sense of being invented, and perhaps contrived, but isn't that par for the course in philosophical thought experiments? And not uncommonly in film and fiction too, where I've several times watched characters forced to choose between a loved one and their country, for example. I'm also wondering whether you have less objection to thought experiments, however unlikely, in other domains - such as the special relativity one about one twin taking a long space voyage while the other stays home and ages more quickly.

I suppose what I'm driving at is that, if you have time, I'd like to read your rant!

If that's par for the course in philosophical thought experiments, then that's exactly why almost all such questions I've seen strike me as completely artificial.

Then you say, don't people face difficult dilemmas all the time? Sure they do. What makes this artificial is not the idea of a dilemma, but the artificial setup intended to force the dilemma. The parameters created for the experiment are completely alien to the decisions you'd have to make in real life, mostly to close off more sensible options. The result is that you can't apply the common sense you'd use in a real situation.

This has nothing to do with the concept of thought experiments per se. It's about artificial setups intended to force artificial decisions to solve artificial dilemmas. In the special relativity one, for instance, nobody's being asked to solve an impossible dilemma. If the purpose of the special relativity one were to ask "which twin would you rather be?", then it would be a stupid artificial setup.

Okay, suppose the person you have to shoot is someone you recognize as a philosophy professor who teaches these phony situations as "moral dilemmas" rather than straw-man arguments specifically designed to close off imagination....

In my view the justification for this kind of thought experiment is really that it forces you to consider the bases of your real-life judgements more carefully. There's a high degree of artifice, certainly, but it's not offering itself as a realistic scenario or even an engaging story, just trying to isolate certain aspects of a situation for inspection. That said, I can see how that sort of method would be anathema to someone for whom moral situations have to be treated holistically or not at all.

Of course, as I said at the top of the post, for the MIT AI engineers it's not a thought experiment at all, but a practical coding problem, and one with potential legal consequences. If a court holds a car's driving algorithm responsible for its actions, then the company that produced the code will need to show that it acted responsibly, indeed morally.
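
To give a flavour of what I mean by a practical coding problem, here's a very rough sketch in Python. Everything in it - the weights, the names, the scoring - is my own invention for illustration, not anything MIT or a car maker has published:

```python
# Purely illustrative sketch: crowd-sourced, country-specific preferences
# encoded as weights and used to choose between two bad outcomes.
# All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Outcome:
    lives_lost: int      # how many people this option kills
    avg_age: float       # their average age
    pedestrians: bool    # True if the victims are pedestrians, False if passengers

# Hypothetical per-country weights, loosely in the spirit of the charts above
# (higher = that consideration matters more).
COUNTRY_WEIGHTS = {
    "JP": {"spare_young": 0.2, "spare_many": 0.3, "spare_pedestrians": 0.9},
    "CN": {"spare_young": 0.2, "spare_many": 0.3, "spare_pedestrians": 0.1},
    "US": {"spare_young": 0.7, "spare_many": 0.8, "spare_pedestrians": 0.5},
}

def harm_score(outcome: Outcome, weights: dict) -> float:
    """Lower means 'less bad' under this invented scoring scheme."""
    score = outcome.lives_lost * weights["spare_many"]
    # Killing younger people is penalised more heavily where spare_young is high.
    score += (100 - outcome.avg_age) / 100 * weights["spare_young"]
    # Killing pedestrians is penalised where spare_pedestrians is high.
    if outcome.pedestrians:
        score += weights["spare_pedestrians"]
    return score

def choose(option_a: Outcome, option_b: Outcome, country: str) -> Outcome:
    """Pick whichever outcome scores as less bad under that country's weights."""
    w = COUNTRY_WEIGHTS[country]
    return min((option_a, option_b), key=lambda o: harm_score(o, w))
```

Even a toy version like this makes the legal point: every number in that table is a policy decision that somebody would eventually have to defend in court.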

I emphatically disagree, because all problems of this sort are artificially designed to remove factors essential to the decision-making process. Or, in most examples, merely to refuse to inform the person taking the test what those factors are.

The most important of these factors is the degree of certainty one has in the data received. And that is what makes the AI coding problem just as artificial as the thought-experiments. How are you going to code the AI to act in specifically narrow and unlikely circumstances? How will you be sure its sensors will read the data correctly? Can it distinguish between people who can and can't get off the track? In the case of choosing to run over young vs. older people, how is it going to tell them apart, and can it do so reliably? Etc etc bloody etc.

Fair enough. As I said, the rationale I offered above won't apply if you take a holistic approach. That approach incorporates the series of questions in your second paragraph as part of the moral dilemma.

But it's equally possible to see them as non-moral questions in themselves, as an information-gathering exercise necessary to, but not part of, moral reasoning. The car maker will presumably have other people working on those functions - sensors, face-recognition software, etc. - which will then send information (probably with a confidence score attached - no system is perfect) to the algorithm making the "moral" decision.
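
Something like the following division of labour is what I'm imagining - a guess at the architecture rather than a description of any real system, with all the names and thresholds made up:

```python
# Guessed-at division of labour: perception modules report what they think they
# see, each report carrying a confidence score, and the "moral" layer only acts
# on reports it can reasonably trust. Entirely hypothetical names and values.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "passenger", "child"
    confidence: float   # 0.0 - 1.0, supplied by the perception module

CONFIDENCE_THRESHOLD = 0.8

def trusted(detections: list[Detection]) -> list[Detection]:
    """Keep only the detections the perception layer is reasonably sure of."""
    return [d for d in detections if d.confidence >= CONFIDENCE_THRESHOLD]

def moral_layer(detections: list[Detection]) -> str:
    """The decision layer never sees raw sensor data, only labelled reports."""
    usable = trusted(detections)
    if not usable:
        # If we can't trust what we're seeing, fall back to a default
        # behaviour (brake) rather than attempt any "moral" calculation.
        return "brake"
    # A country-specific weighting of the kind sketched above would go here.
    return "brake" if any(d.label == "pedestrian" for d in usable) else "proceed"
```

The point of the split is that the "moral" code only ever deals in labelled, confidence-scored reports, never raw sensor data.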

I don't think it's useful or meaningful information when collected outside of a broader context. People who like to collect data tend to value it as useful in itself, without considering what it's being used for (your presumptions about the algorithm strike me as shaky) or whether the information itself has any actual value or is just numerical garbage.

I'm struck by this every time I'm asked to rate something or take a political poll. My answer just doesn't fit within the rigid, simplistic categories offered, and as a result any answer I give that they'll accept will be garbage. If, in consequence, the entire poll is garbage, that could explain, even more than sampling problems and the like, why polls in practice are so bad at explaining what's actually going on.

Just fascinating.

I see this as one more reason for having all my stuff delivered to the house so I don't have to go outside.

Hmm.

Is Amazon funding this study?

On the other hand, self-driving cars stay well below the alcohol limit, so maybe it evens out.

I dunno about you, but where I live they put alcohol in the gasoline....

That's fascinating! Thank you.
Re: the people who are at the looking-after-passengers end of things (as opposed to looking-after-strangers) - I'm thinking it's maybe not so much looking after your own (family, friends) as the imperative duty of care towards the guest (i.e. passenger = guest in your car)?

The part about not needing an honorific for your own company's head is pretty interesting, too. Is it only companies that work as quasi-families like this? How would other quasi-families go - say, football teams and their captains or coaches, or criminal gangs?

That's a good thought about passengers being in the position of guests! It would certainly make sense in the context of many cultures, although I don't know enough about China to say if theirs is one of them.

All the examples you mention would count as "uchi" groups, I'm pretty sure. To use an honorific in connection with a group with which you are yourself associated is seen as self-praise, and hence no go.