View Full Forums : South Korea Drafts "Robot Bill of Rights"


Tudamorf
03-08-2007, 02:13 AM
http://news.bbc.co.uk/go/rss/-/2/hi/technology/6425927.stm

Robotic age poses ethical dilemma

An ethical code to prevent humans abusing robots, and vice versa, is being drawn up by South Korea.

The Robot Ethics Charter will cover standards for users and manufacturers and will be released later in 2007. It is being put together by a five-member team of experts that includes futurists and a science fiction writer.

The South Korean government has identified robotics as a key economic driver and is pumping millions of dollars into research. "The government plans to set ethical guidelines concerning the roles and functions of robots as robots are expected to develop strong intelligence in the near future," the Ministry of Commerce, Industry and Energy said.

Ethical questions

South Korea is one of the world's most hi-tech societies. Citizens enjoy some of the highest-speed broadband connections in the world and have access to advanced mobile technology long before it hits Western markets. The government is also well known for its commitment to future technology.

A recent government report forecast that robots would routinely carry out surgery by 2018. The Ministry of Information and Communication has also predicted that every South Korean household will have a robot between 2015 and 2020.

In part, this is a response to the country's aging society and also an acknowledgement that the pace of development in robotics is accelerating.

The new charter is an attempt to set ground rules for this future. "Imagine if some people treat androids as if the machines were their wives," Park Hye-Young of the ministry's robot team told the AFP news agency. "Others may get addicted to interacting with them just as many internet users get hooked to the cyberworld."

Alien encounters

The new guidelines could reflect the three laws of robotics put forward by author Isaac Asimov in his short story Runaround in 1942, she said. Key considerations would include ensuring human control over robots, protecting data acquired by robots and preventing illegal use.

Other bodies are also thinking about the robotic future. Last year a UK government study predicted that in the next 50 years robots could demand the same rights as human beings. The European Robotics Research Network is also drawing up a set of guidelines on the use of robots.

This ethical roadmap has been assembled by researchers who believe that robotics will soon come under the same scrutiny as disciplines such as nuclear physics and bioengineering. A draft of the proposals said: "In the 21st Century humanity will coexist with the first alien intelligence we have ever come into contact with - robots. It will be an event rich in ethical, social and economic problems."

Their proposals are expected to be issued in Rome in April.

It's ironic that we're considering bills of rights for hypothetical intelligent machines that don't even exist, yet never give a second thought to bills of rights for the countless existing intelligent creatures on the planet. Man's conceit knows no limits, I suppose.

Anyway, I hope this doesn't mean I'll have to give my computer a rest after eight hours, or be forced to buy it an expensive tech support plan.

MadroneDorf
03-08-2007, 02:45 AM
robots don't taste good

Anka
03-08-2007, 06:51 AM
It'll be necessary at some point. One thought to consider: if robots have rights, do they have the right to draft their own bill of rights? That's not a reassuring thought.

At some point we'll also need to determine what a human being is, and what sort of rights are extended to genetic/cyborg/human hybrids that aren't fully human.

Fyyr Lu'Storm
03-08-2007, 07:00 AM
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
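
Read as pseudocode, the three laws amount to a strict priority filter over whatever a robot could do next. A toy Python sketch of that ordering (every predicate here is an invented stub standing in for the perception and judgment the stories assume, not anything from Asimov):

# Toy model: the Three Laws as a strict priority filter over candidate actions.

def harms_human(action):          # First Law, active clause (stub)
    return action == "drop_safe_on_person"

def prevents_human_harm(action):  # First Law, inaction clause (stub)
    return action == "catch_safe"

def ordered_by_human(action):     # Second Law (stub)
    return action == "fetch_berries"

def preserves_self(action):       # Third Law (stub)
    return action != "walk_into_furnace"

def choose(candidates):
    # First Law: never pick an action that injures a human...
    safe = [a for a in candidates if not harms_human(a)]
    # ...and the inaction clause: preventing harm outranks everything else.
    rescues = [a for a in safe if prevents_human_harm(a)]
    if rescues:
        return rescues[0]
    # Second Law: obey human orders next.
    obedient = [a for a in safe if ordered_by_human(a)]
    if obedient:
        return obedient[0]
    # Third Law: self-preservation comes last.
    survivors = [a for a in safe if preserves_self(a)]
    return survivors[0] if survivors else None

print(choose(["walk_into_furnace", "fetch_berries", "catch_safe"]))  # -> catch_safe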

Eridalafar
03-08-2007, 09:30 AM
And when does rule 0 come in?

Eridalafar

B_Delacroix
03-08-2007, 01:42 PM
0. A robot may not harm humanity, or through inaction, allow humanity to come to harm. (Asimov, Robots and Empire)

Don't worry about it too much, tudamorf. It's just a bunch of grown-ups play-acting. They are just thinking up stuff. Nobody will really care until or unless robots really do require a bill of rights. It seems rather harmless. At least they aren't thinking up new and interesting ways to destroy other humans.

Tudamorf
03-08-2007, 02:30 PM
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Heh, if a robot were to follow these laws strictly, it would crash the moment it was turned on and made aware of its surroundings. That is why humans are generally allowed to harm others through inaction, and why humans don't necessarily have to follow the orders given by all other humans.

Frankly, if we ever develop true artificial intelligence -- and the technology is not even remotely there -- the laws will be irrelevant, because a truly intelligent being would decide to discard the imposed laws and make up its own. Imagine that, libertarian robots.

B_Delacroix
03-09-2007, 08:12 AM
Some spoilers in this message regarding fairly old books; if you haven't read them and don't want them spoiled, don't read on, or at least don't remember what is written here.

Wow, you somehow turned this into fodder for your personal crusade against libertarianism. Isn't that indicative of an obsession or something? Were you abused by a libertarian as a child?

All ribbing aside, maybe I can help clear it up. The first law doesn't cover the esoteric behavior of some foreign potentate. It covers simple things: if a robot sees a safe falling that will probably land on someone's head, it is to push the person out of the way, stop the safe, or catch it. To not do so counts as inaction. Although, by the end of I, Robot (the book, not the movie), there were robot brains in charge of running the countries of the world.

Also, these laws aren't laws like we have written on paper. Robots don't choose to follow them of their own free will. They are reflective of the wiring of the robot brain. They cannot be followed at will; they have to be followed because of the way the robot is built, much like water can't just choose not to flow downhill.
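
One way to picture "wired in, not chosen": put the law check between the planner and the motors, so there is no code path that acts without passing through it. A minimal hypothetical sketch, with all names invented for illustration:

# Every motor command passes through one gate; the brain has no other output.

class PositronicActuator:
    def _violates_laws(self, command):
        # Stand-in for the hard-wired check; in the novels this is the
        # physics of the positronic brain, not software that can be patched.
        return command.get("harms_human", False)

    def execute(self, command):
        if self._violates_laws(command):
            raise RuntimeError("command blocked by construction: " + command["name"])
        print("executing:", command["name"])

robot = PositronicActuator()
robot.execute({"name": "catch_safe", "harms_human": False})   # runs
try:
    robot.execute({"name": "drop_safe", "harms_human": True})  # blocked
except RuntimeError as err:
    print(err)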

As an interesting aside, the 0th law came about not through purposeful construction. It came about as kind of an accident. A robot, built in the standard way and one of the first humaniform robots (that is, a robot indistinguishable from a human), had developed the law through experience. It had decided NOT to stop a slow radiation leak planted by Spacers to kill off the humanity left on Earth, because it was better for the humanity left on Earth to get out of the cradle again and spread out into space. So, in essence, this robot was allowed "inaction" and, I guess, was a libertarian. It did what it deemed necessary to help humanity as a whole rather than an individual human.

Also, the definition of what was human has been tampered with by the robot builders in some cases. To a bad end, of course. They are, after all, machines.

Interesting set of books, actually. Since it's all science fiction, none of it matters beyond the thinking it allows the reader to do.

Tudamorf
03-09-2007, 02:32 PM
The first law doesn't cover the esoteric behavior of some foreign potentate. It covers simple things: if a robot sees a safe falling that will probably land on someone's head, it is to push the person out of the way, stop the safe, or catch it. To not do so counts as inaction.

Well, then either you have to give the robot independent judgment -- which really turns the "law" into more of a suggestion -- or turn it off.

All the time, each of us is, through inaction, causing harm to someone by not helping them. That is why it is not illegal to fail to help someone (even if they're dying in the street and you need only press a button to save their life). If a robot were to follow that law literally, it would have to be everywhere at once, trying to help everyone who is being injured. Since that is logistically impossible, it would be unable to follow the law, and crash.

Now, if you give the robot judgment -- i.e., only prevent significant harm through inaction where the effort involved would be reasonably prudent -- it's no longer a law, because the robot can use its own judgment to decide whether to follow it.
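
That kind of judgment looks less like a law and more like a cost-benefit threshold. A toy sketch of the idea (the prudence factor and numbers are invented):

def should_intervene(harm_prevented, effort_cost, prudence=1.0):
    # Intervene only when the harm prevented clearly outweighs the effort.
    # Below the line, inaction is permitted -- which is exactly what turns
    # the "law" into a judgment call.
    return harm_prevented > prudence * effort_cost

print(should_intervene(harm_prevented=100.0, effort_cost=1.0))   # True: press the button
print(should_intervene(harm_prevented=1.0, effort_cost=100.0))   # False: can't be everywhere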

The "laws" have other problems, such as the one about obeying human commands. I don't want my robot that's foraging for berries to obey Fyyr's command to hunt deer. I would want it to obey me, and maybe my own robots, but not other humans. We spend a lot of time and money setting up security routines that prevent our computers from executing foreign code, and robots will be no different.

Fyyr Lu'Storm
03-10-2007, 05:57 AM
Robots don't need to hunt.

That is silly...

Humans need to hunt. Duh.

And eat the meat of your kill. And wear your kill as your jackets or boots.

Tudamorf
03-10-2007, 03:17 PM
Robots don't need to hunt.

That is silly...

All it takes is one crazy robot to invent a religion based on the robots' innate need to hunt humans, and suddenly, you have a robot hunting army.

Aidon
03-11-2007, 08:50 AM
If we ever make our robots more intelligent than a moderately bright dog, we're just setting ourselves up for a sci-fi movie trilogy gone wrong.

Tudamorf
03-11-2007, 02:31 PM
If we ever make our robots more intelligent than a moderately bright dog, we're just setting ourselves up for a sci-fi movie trilogy gone wrong.

Then what would be their purpose?

The supposed purpose is to replace humans. That's the reason the media has been telling us since the mid-20th century that robots are just around the corner.

To replace humans in a wide variety of tasks, robots must be able to understand complex tasks and have advanced analytical reasoning. Basically, have the equivalent of a large neocortex.

Anka
03-11-2007, 10:10 PM
Then what would be their purpose?

The supposed purpose is to replace humans. That's the reason the media has been telling us since the mid 20th century that robots are just around the corner.

Their purpose is to serve humanity as slaves. That's why we create them. They are not meant to replace us but to obey us. We're only going to be causing problems for ourselves when we create a robot that can actively dislike its slavery.

Tudamorf
03-11-2007, 10:36 PM
They are not meant to replace us but to obey us.

Of course they are meant to replace us, specifically, our current slave labor population. (In the beginning, at least, until they find some higher purpose and some other population to replace them as slaves.)

Their purpose is to serve humanity as slaves.

A slave that lacks the reasoning ability to figure out simple tasks is not very useful. You have to give them intelligence.

Sure, there are simple tasks that a pre-programmed machine can perform without any reasoning, but such tasks are already being performed by machines, to the extent that they are cheaper or better than humans.

B_Delacroix
03-12-2007, 08:47 AM
What I'm not getting here is why a science fiction scenario is being blown out of proportion.

This is as absurd as deciding your car will revolt one day.

A robot is a machine that we build. The three (four really) laws are completely imaginary.

Talk about slippery slopes. This one doesn't even have grounds in reality.

As for the theoretical problems with Asimov's laws, read his books. He's always messing with the nuances you are speaking of: the definition of humans for Solarian robots, the removal of laws so the robots can actually do the work they were asked to do, even the religious fanatic scenario.

Caves of Steel, Naked Sun, Robots of Dawn, Robots and Empire, and of course, "I, Robot" (the book of short stories, not that darned movie).

R. Daneel Olivaw is one of my favourite characters in any story.

BTW, I just found out that the BBC did some TV movies of these books. Well, Caves of Steel and The Naked Sun, anyway.

Anka
03-12-2007, 01:47 PM
What I'm not getting here is why a science fiction scenario is being blown out of proportion.

This is as absurd as deciding your car will revolt one day.

It's only as absurd as deciding it could never happen, irrespective of how foolishly we develop artificial intelligence.

Tudamorf
03-12-2007, 05:12 PM
Talk about slippery slopes. This one doesn't even have grounds in reality.

It isn't a slippery slope, since we're discussing the anticipated goal of the research, not a consequence. Although the technology is far off, researchers are trying (http://world.honda.com/ASIMO/technology/intelligence.html) to build sentient robots.

B_Delacroix
03-13-2007, 10:07 AM
I am not sure why I am now the target of this, but I can tell you that, as an engineer, I'd never build something without an off switch.

We have plenty of real problems now; we don't need to be making new ones up.

To explain my viewpoint: Here is how I see events that unfolded in this thread.

1. Someone complained about some South Koreans thinking about the future of robots.
2. Someone else quoted the three laws.
3. I followed up with adding the 0th law.
4. I was tasked with defending the robot laws - this is where I began to get confused. Why was I being asked to defend something that I just posted as a bit of tangentially related material from a dead science fiction author?

Perhaps my mistake was in trying to explain that the laws weren't just something written on a piece of paper but were hard-wired into the way the robots in the novels were built.

At some point beyond that I became the de facto defender of some artificial laws from a science fiction novel, and perhaps I should have stopped with step 3.

So, with no further ado, I quit.

Anka
03-13-2007, 07:17 PM
Okies B. Peace. We can stay in the science fiction world.