
Ghost in the Shell: Scientists Working on Programming Morality Into Machines

by Lambert Varias

Earlier this year, Luís Moniz Pereira of the Universidade Nova de Lisboa in Portugal and Ari Saptawijaya of the Universitas Indonesia published a paper describing what they think is a stepping stone toward artificial intelligence that can analyze moral dilemmas and evaluate the consequences of each possible resolution. In other words, Pereira and Saptawijaya claim that there is a way of reducing ethics to mathematics. Should we call this new field Mathemethics? Mathethics? Meth?

[Image: GLaDOS from Portal]

As proof, the two researchers – who are both “interested in artificial intelligence and the application of computational logic” – say they have successfully created a computer system capable of making human-like moral judgments on the trolley problem. The setup: five people are about to be hit by an out-of-control trolley, and a bystander stands near a switch that, when flipped, will divert the trolley onto an alternate track. Flipping the switch saves the five, but kills the one person standing on the other track. Most people will, of course, sacrifice the one to save the five.
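If you’re wondering what “evaluating the consequences” could even look like in code, here’s a rough idea. To be clear, this is just my own minimal sketch of a utilitarian body-count comparison – the paper’s actual system is built in prospective logic programming (a Prolog extension), not Python, and all of the names below are made up for illustration:

```python
# Hypothetical sketch of consequence-based choice for the trolley problem.
# NOT the paper's prospective logic system; names and numbers are illustrative.

def casualties(action: str) -> int:
    """Return the number of deaths each action is projected to cause."""
    outcomes = {
        "do_nothing": 5,   # trolley hits the five people on the main track
        "flip_switch": 1,  # trolley diverted onto the track with one person
    }
    return outcomes[action]

def choose_action(actions: list[str]) -> str:
    """Pick the action whose projected consequences harm the fewest people."""
    return min(actions, key=casualties)

if __name__ == "__main__":
    print(choose_action(["do_nothing", "flip_switch"]))  # -> flip_switch
```

The real system is doing more than counting bodies – the whole point of “prospective” logic is to generate the possible futures and reason over them – but the core move is the same: enumerate actions, project outcomes, prefer the least harmful.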

Maybe I’m just being an idiot, but isn’t the trolley problem too simple to serve as proof of a morally aware computer? There are a lot of problems in the real world that are much more complex, and that can’t be solved simply by applying Spock’s dying words. And it’ll still be humans who program this so-called prospective logic, in which case their biases will carry over to the programmed computer or robot. Behind every WALL-E is a programmer with a sense of humor, and behind every Skynet is an evil mad scientist. If you’re so inclined, Technovelgy has a link to the PDF version of Pereira and Saptawijaya’s paper, Modelling Morality with Prospective Logic.

[via Technovelgy and AlphaGalileo via BotJunkie]





Comments (2):

  1. LeanPocket says:

That picture is GLaDOS from Portal.

  2. Eric says:

Yeah, those robots with ultra-fast hands will really need a built-in morality module. If a human had robot “powers”, he would lose his ethics in a blink.

They would probably adopt some kind of Asimov’s laws, as laws are much simpler and (maybe) more secure than Math-Ethics.
