
Subject: Should your self-driving car kill you to save a school bus full of kids?

Shawn Fox (sfox), Richardson, Texas:
http://finance.yahoo.com/news/self-driving-car-kill-save-101...

Since I know there are a lot of people on RSP who like this kind of philosophical masturbation, I thought I'd post a link to this article. My opinion is that objects owned by me should be designed to protect me rather than trying to produce the "optimal" outcome. Maybe that is because I'm a selfish prick and willing to admit it, but I have a huge problem with products designed to act in a way that is not in the interest of their owner.
Boaty McBoatface (slatersteven), County of Essex, England:
I wonder if anyone would take this approach if it were their kids in the bus?

Of course, the simplest answer is to say that no car should be driving fast enough for a simple tail clump to kill you. Problem solved.

So no, your needs should not take precedence over the lives of others, and if that means you can only go at 10 mph, fine.
Colorado, United States:
https://boardgamegeek.com/thread/1387399/and-here-reason-why...
Scott Seifert (golden_cow2), Little Canada, Minnesota:
If a company sold you a car that deliberately killed you, then the company has committed murder.
Steven Woodcock (Ferretman), United States:
sfox wrote:
My opinion is that objects owned by me should be designed to protect me rather than trying to produce the "optimal" outcome. [...]


I generally agree -- the sole purpose of my tech is to protect me, not to protect some hypothetical others. If it's programmed to do anything else, then that programming can be tampered with and altered to suit the whims of current politics, and that is not palatable to me.


Ferret
Michael Carter (mlcarter815), Marion, Iowa:
slatersteven wrote:
Of course, the simplest answer is to say that no car should be driving fast enough for a simple tail clump to kill you. [...]


What is a simple tail clump?
 
Michael Carter (mlcarter815), Marion, Iowa:
Ferretman wrote:
If it's programmed to do anything else, then that programming can be tampered with and altered to suit the whims of current politics, and that is not palatable to me. [...]


Programming that is designed to save you can also be tampered with.
Boaty McBoatface (slatersteven), County of Essex, England:
mlcarter815 wrote:
What is a simple tail clump?
A car bumping into another car at very low speeds (you know, like dodgems).
 
Colorado, United States:
CapNClassic wrote:
These aren't moral choices being made.

In the self-driving car example, the car is programmed to avoid accidents.
In the networked car example, the cars are programmed to avoid accidents.

In the second example, it couldn't happen. If the cars were networked, they would have avoided the accident in the first place. They would have anticipated the other cars' actions, or never driven at a speed at which they couldn't avoid the accident (they are in control of all the cars, so why would they be programmed to drive at speeds such that they couldn't avoid an accident?).

Also, the writer imagines the programming taking into account the people in the vehicles. Why would we do this? The purpose of a self-driving car is to get you from one place to another safely. We wouldn't program the cars with extra information they don't need to make decisions.

Their scenario only makes sense if they are trying to program a machine to make moral decisions. Which is a huge mistake.


Catastrophic failure is a thing, you know.
Scott Russell, Clarkston, Michigan:
CapNClassic wrote:

In the second example, it couldn't happen. If the cars were networked, they would have avoided the accident in the first place.


And if the controller didn't anticipate well enough to avoid the accident, then I really don't want it deciding to put my car into the wall. Obviously, the programming is not foolproof.
 
Clay (The Message), Alabama, United States:
Yes.

Of course, how would it even get to that point? If all of the vehicles on the road "know" what every other vehicle is trying to do, then how would collisions like this even arise? Even if we assume they can only calculate so far ahead, that should still be much further in advance than any human could reasonably manage, and thus more than enough time for the cluster of cars on that section of road to all get the "hey, we should all slow the fuck down like right now" flag and avoid hitting each other.

If you're telling me the technology would be advanced enough to calculate the number of casualties on the fly, yet somehow not advanced enough to manage coordinated decreases in speed, then I'm going to say your technology isn't ready to be implemented on a national scale and your programmers need to get their priorities in order. The braking systems are probably easier to figure out than the death-optimization systems anyway, so it's not even a win for the utilitarian to get the latter done first; the former should both save more lives and be more efficient to create.
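
That coordinated slow-down flag is easy to sketch. A toy Python illustration, not any real vehicle-to-vehicle protocol; the class, the speeds, and the target figure are all invented for the example:

    # Toy sketch of the "everybody slow down right now" broadcast.
    # Illustrative only: nothing here comes from a real V2V standard.

    class Car:
        def __init__(self, name, speed):
            self.name = name
            self.speed = speed  # metres per second

        def on_slow_down(self, target_speed):
            # Every car that hears the flag immediately targets the lower speed.
            if target_speed < self.speed:
                print(f"{self.name}: braking from {self.speed:.0f} "
                      f"to {target_speed:.0f} m/s")
                self.speed = target_speed

    def broadcast_slow_down(cluster, target_speed):
        # A real system would use a radio link; here we just notify
        # every car in the cluster directly.
        for car in cluster:
            car.on_slow_down(target_speed)

    cluster = [Car("A", 30.0), Car("B", 28.0), Car("C", 31.0)]
    # One car detects a hazard and raises the flag for the whole cluster.
    broadcast_slow_down(cluster, target_speed=10.0)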
Boaty McBoatface (slatersteven), County of Essex, England:
The Message wrote:
Of course, how would it even get to that point? If all of the vehicles on the road "know" what every other vehicle is trying to do, then how would collisions like this even arise? [...]
Why would such cars not be programmed never to drive beyond their safe stopping distance? Tailgating is a human mistake; a machine should be programmed not to do it.
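
The no-tailgating rule amounts to a stopping-distance inequality. A minimal sketch under a constant-deceleration model; the 0.5 s reaction lag and 7 m/s^2 braking figure are illustrative assumptions, not anyone's real numbers:

    # Minimal safe-headway check: never travel faster than the speed
    # from which you can stop within the gap ahead.

    def stopping_distance(speed, reaction_time=0.5, decel=7.0):
        # Distance covered during the reaction lag, plus braking
        # distance v^2 / (2a).
        return speed * reaction_time + speed ** 2 / (2 * decel)

    def max_safe_speed(gap, reaction_time=0.5, decel=7.0):
        # Solve gap = v*t + v^2/(2a) for v (positive quadratic root).
        a, b, c = 1 / (2 * decel), reaction_time, -gap
        return (-b + (b * b - 4 * a * c) ** 0.5) / (2 * a)

    gap = 40.0  # metres of clear road ahead
    v = max_safe_speed(gap)
    print(f"With a {gap:.0f} m gap, stay under {v:.1f} m/s "
          f"({v * 3.6:.0f} km/h); stopping distance there is "
          f"{stopping_distance(v):.1f} m")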
Steven Woodcock (Ferretman), United States:
mlcarter815 wrote:
Programming that is designed to save you can also be tampered with.


Totally true.

I'd like to think I could electronically lock out stuff like this, but the hard fact is that you're always vulnerable, especially if there's any kind of WiFi.


Ferret
 
Colorado, United States:
The Message wrote:
Of course, how would it even get to that point? If all of the vehicles on the road "know" what every other vehicle is trying to do, then how would collisions like this even arise? [...]


How could they arise?
CATASTROPHIC FAILURE
John Hathorn, San Antonio, Texas:
golden_cow2 wrote:
If a company sold you a car that deliberately killed you, then the company has committed murder.

If a company designed a car that killed 20 children on a bus, then the company has committed 20 counts of murder. With the corporation facing that calculation, I can safely say it was nice knowing you and sfox.

My AutoBot wouldn't drive recklessly enough for this to be an issue.
 
Boaty McBoatface (slatersteven), County of Essex, England:
So let me get this straight.

If something has the potential to be misused to cause harm, it should not be allowed?
Isaac Citrom, Montreal, Quebec:
CapNClassic wrote:
These aren't moral choices being made. [...] Their scenario only makes sense if they are trying to program a machine to make moral decisions. Which is a huge mistake.


Michael, there is indeed a moral aspect to this. To be sure, the automated vehicle will do a lot better at avoiding collisions. Even so, it is an unwarranted assumption to believe that even an automated system will never be confronted with a situation it can't handle, or, more to the point, never be put in a situation where it has to choose between the lesser of two bad outcomes.

As our software-intensive systems become ever more capable, we will be forced to confront our ideas of morality, because we will have to make choices that are to be encoded into the system.

For example, the hypothetical smart car cannot avoid a collision. It must decide between swerving left into a man or swerving right to collide with a tree, killing the car's own occupant.

We will be forced to confront, in very real terms, the reality of our moral musings. I, as an individual, may very well decide to sacrifice my own life for the sake of another. Do I want to empower a machine to make that choice on my behalf? Are we to be forced into buying products that make moral decisions decided by others? What will those moral conclusions be?

Does the vehicle swerve into the white man or the black boy? Towards the old man or the child? The mother and stroller on the sidewalk or the 3 black guys fooling around on the road? Hit the dog and avoid car damage or hit the bench and ruin your car? Is it to be a random choice?

Even well before true AI entities, our automated systems will force us to confront our ideologies in order to encode them. It is unavoidable.
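
Isaac's encoding point can be made concrete. Once a planner concludes that no collision-free trajectory remains, some branch of code still has to return an answer, and whatever ranking it consults is a moral policy somebody wrote down. A hypothetical stub; the option names and the ordering are invented purely for illustration:

    # Hypothetical illustration: when every remaining trajectory involves
    # a collision, the tie-break IS an encoded moral policy.

    RANKING = {
        "hit_tree_risking_occupant": 1,   # lower number = "preferred"...
        "swerve_into_pedestrian": 2,      # ...but preferred by whom?
    }

    def choose_when_collision_unavoidable(options):
        # Even "just brake straight ahead" would be a choice someone
        # had to encode. There is no neutral branch to hide in.
        return min(options, key=lambda o: RANKING.get(o, 99))

    print(choose_when_collision_unavoidable(
        ["swerve_into_pedestrian", "hit_tree_risking_occupant"]))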
Steven Woodcock (Ferretman), United States:
slatersteven wrote:
So let me get this straight.

If something has the potential to be misused to cause harm, it should not be allowed?


That broadly seems to be the opinion of many BGGers when it comes to firearms...



Ferret
Xander Fulton (XanderF), Astoria, Oregon:
CapNClassic wrote:
Second time with the "catastrophic failure" narrative. You know what computers don't do during catastrophic failures? Make complex calculations that involve life-and-death scenarios. Instead, what they do is shut down, restart, etc. In the case of a car, it would likely brake and reduce its speed to zero. What sort of world are you imagining where a computer has a catastrophic failure and is still allowed to make decisions as if it were operating normally?

This scenario cannot happen; you don't program computers to make these sorts of decisions. If the computer has catastrophically failed, you attempt to get into a known good state; you don't allow it to keep operating and making decisions.


Careful, now: computers are still a pretty scary and new concept for Dashi; he isn't fully familiar with how they work.

Why, one time he saw a TV show where a computer was walking around, and it had arms and legs even, and it was stealing people's women!
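
The known-good-state behaviour in the quoted post is the standard fail-safe pattern: on any fault, stop reasoning and degrade to a controlled stop. A minimal sketch; the health flags and action names here are placeholders, not any real vehicle API:

    # Sketch of the fail-safe pattern: on a fault, stop making clever
    # decisions and degrade to a controlled stop.

    def control_step(sensors_ok, planner_ok):
        if not (sensors_ok and planner_ok):
            # Known good state: hazards on, brake smoothly to zero.
            # No casualty arithmetic happens on this path.
            return "controlled_stop"
        return "normal_driving"

    print(control_step(sensors_ok=True, planner_ok=True))   # normal_driving
    print(control_step(sensors_ok=True, planner_ok=False))  # controlled_stop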
Boaty McBoatface (slatersteven), County of Essex, England:
Ferretman wrote:
That broadly seems to be the opinion of many BGGers when it comes to firearms...
I know of no BGGer who has ever said there should be a blanket ban on firearms. Perhaps you would care to provide a link?
Boaty McBoatface (slatersteven), County of Essex, England:
XanderF wrote:
Why, one time he saw a TV show where a computer was walking around, and it had arms and legs even, and it was stealing people's women!
Fornicate! Fornicate!
casey r lowe (single sentences), Butte, Montana:
Most definitely.
 
Eric "Shippy McShipperson" Mowrer (ejmowrer), Vancouver, Washington:
sfox wrote:
My opinion is that objects owned by me should be designed to protect me rather than trying to produce the "optimal" outcome. [...]


Hasn't anyone seen I, Robot? You don't program computers to make moral decisions for this very reason. It creates a huge dilemma.

Avoid an accident if possible; if not, crash in the safest way possible for the occupants of the vehicle. It would be foolish to program computers to decide situationally who should live and die.
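
That two-rung priority (avoid the crash if any trajectory does; otherwise take the impact gentlest for the occupants) fits in a few lines, which is rather the point. A sketch with hypothetical planner data:

    # ejmowrer's ladder: avoid the crash if any trajectory does;
    # otherwise pick the impact gentlest for the occupants. Note there
    # is no "who deserves to live" input anywhere.

    def plan(trajectories):
        collision_free = [t for t in trajectories if t["collision"] is None]
        if collision_free:
            return collision_free[0]                  # rule 1: avoid it
        return min(trajectories,
                   key=lambda t: t["occupant_risk"])  # rule 2: safest crash

    options = [
        {"name": "swerve_left", "collision": "barrier", "occupant_risk": 0.40},
        {"name": "brake_hard",  "collision": "car",     "occupant_risk": 0.15},
    ]
    print(plan(options)["name"])  # brake_hard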
casey r lowe (single sentences), Butte, Montana:
ejmowrer wrote:
You don't program computers to make moral decisions for this very reason.

humans don't make moral decisions either; what differentiates "human intelligence" from "artificial intelligence" is a matter of scale/complexity
 
Eric "Shippy McShipperson" Mowrer (ejmowrer), Vancouver, Washington:
single sentences wrote:
humans don't make moral decisions either; what differentiates "human intelligence" from "artificial intelligence" is a matter of scale/complexity


Well, not in that situation, no. But computers are theoretically capable of it at some point, and just because they can doesn't mean they should.
 