Law in Contemporary Society

TheFirstLawOfRobotics

 Today I came across a Forbes article based on a brief phone interview with Eben. The focus was on internet security, specifically in the context of mobile technology. As a huge fan of Asimov, I found it particularly interesting because of Eben’s reference to the First Law of Robotics, and how science fiction has generally predicted the interaction between humans and robots.

The First Law of Robotics states that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” According to Eben, what our modern-day “robots” – our smartphones – do to us on a daily basis is exactly the opposite, and he lists a variety of ways in which this is done.

Since interpretation of the law is fact-sensitive and relies on human emotion and experience, I can't see how the laws of robotics could ever be programmed into a robot, or any device for that matter. At least until we build our devices to exhibit judgment and emotion... but that's probably not going to happen for the foreseeable future.

-- HarryKhanna - 28 Jun 2012

You're certainly right on one level, but it seems to me that you're pointing out a somewhat illusory problem. Even if we can't program our smartphones to use their judgment to not harm us, we could certainly work on programming communications and networking platforms that don't share our information with anyone who has the money to buy it.

-- MarcLegrand - 28 Jun 2012

I agree with your suggestion. But it has nothing to do with the Laws of Robotics. It is a freestanding policy choice, divorced from Asimov's laws.

-- HarryKhanna - 28 Jun 2012

I think we're in agreement on all substantive levels, but I don't agree that this has "nothing to do with" the Laws. The First Law can still be a guiding principle that developers keep in mind when they design devices and software, independently of whether or not it can be directly programmed into those things the way it is in Asimov's stories.

-- MarcLegrand - 28 Jun 2012

I think the most interesting part of Moglen's reference to the Three Laws is exactly that it serves to point out the disconnect between the robots in Asimov's stories and our cell phones. I agree with Harry that the Laws as Asimov conceived them can't really be programmed into our cell phones. And of course, as both Harry and Marc agree, the underlying message is the freestanding policy question of how to use technology most responsibly (a theme that clearly underlies the Three Laws but, given the scope we're discussing here, is probably most similar to the Zeroth Law introduced later). But I think Moglen's reference highlights a perverse difference between the problem we're facing and the problems that implicate the Laws in Asimov's stories, namely our own complicity in creating the problem.

Our cell phones are nowhere close to Asimov's self-sustaining, mobile, and, importantly, autonomously thinking robots. As a result, the root of possible injuries to human beings is completely different in our world than in Asimov's. In the Asimov universe, because robots are autonomous, the concern is always a robot choosing to harm a human being; that is why the Three Laws were created. In only one story, "Cal," does a robot actually choose to kill a human being in violation of the First Law. Other injuries to humans are due to damage to the positronic brain ("Robot Dreams") and possible ambiguities about who counts as a human being. Importantly, though, even though Asimov contemplates people using robots to injure other people (implied in the Second Law), all violations of the Laws that could injure humans are accomplished through the robot itself. There is never a story where a person engineers a robot to harm another person. Asimov's robot universe is entirely about robot vs. human and the concomitant complexities. It is not about human vs. human with the use of robots. But this is precisely the situation that we potentially have.

Our cell phones can't move, can't talk, and can't make decisions by themselves. Thus any danger from them must be generated by ourselves, unlike the concern in the Asimov universe. Even more concerning, unlike the situation where one person purposely programs a robot to harm another, a man vs. man situation, no one has programmed our cell phones to intentionally harm us. Rather, we allow them to harm us through our own inaction. We are ignorantly harming ourselves, a step removed from another person harming us through a robot (which is never really discussed by Asimov) and a second step removed from robots autonomously harming us. So no, I agree with Harry that Eben's mention of the Three Laws isn't really relevant to our situation as they are treated in Asimov's stories. But what resonated with me was precisely this disconnect and why it exists. Unlike in the Asimov universe, we don't need the Three Laws to protect us from robots; rather, we need them to protect us from ourselves.

-- AlexWang - 09 Jul 2012


What about the way they have changed our brains? Eben's article didn't mention this, but we discussed it in class a little bit. I love my iPhone, but since I've had it I've noticed that my attention span has decreased significantly. When I'm waiting somewhere, instead of pulling out a book to read, I pull out my phone, check my email, play Words with Friends, check Facebook, and check the news. I check my email ALL THE TIME but I have no reason to do that. I wish CLS would install cellphone signal blockers in the classrooms, because if I have my phone, I'm going to check it. I recognize that this kind of behavior is bad for me, but I can't stop.

-- KatherineMackey - 08 Jul 2012