Pluto
Post by Wan X. Wu on Nov 19, 2015 4:57:34 GMT
North No.2 was very much like a human in having genuine feelings and emotions, but unlike humans, North had a greater appreciation for life, whether plant or robot, living or deceased, like Paul Duncan's mother. Humans thought of robots as tools, either for war or to fulfill a certain purpose, job, or task; robots were generally dispensable to humans, except in special cases like Mont Blanc. North, on the contrary, respected every robot's life and saw robots and humans as equals, which is why he was always haunted by nightmares of killing fellow robots. North also respected Paul Duncan's work and tried to understand him and help him find the missing piece in his music, going all the way to Duncan's birthplace in Bohemia to learn its folk songs. Humans, by contrast, knew only of Paul's glory from his famous days and just wanted to reuse his works.
Question: Given that robots were programmed not to harm humans, yet Brau 1589 did and no malfunction was found in his system, what is your interpretation of why Brau 1589 killed humans?
Pluto
Post by Alexis Iguina on Nov 19, 2015 5:58:17 GMT
I think that maybe a human programmed the robot to kill a human. The manga seems to depict a society where humans control robots even after robots have gained equal rights. Also, murder is usually committed when the killer feels something negative, such as resentment or hatred. At this point, the robots haven't evolved to feel anything, even though they try to. So it was probably a human who, just as in the war, used a robot to carry out his actions.
Pluto
Post by Alan Wong on Nov 19, 2015 17:17:24 GMT
I thought the same thing Alexis did: that somebody programmed the robot to kill a human. Generally, programming an AI or writing a program requires very specific code, and there's little room for error. I can't imagine it would just slip past the creators and manufacturers that the robot lacked something telling it not to harm humans. On the other hand, maybe the creators were not specific enough. Maybe when the robot killed the human, it did not interpret the action as causing "harm." Perhaps during the creation process, the creators did not anticipate how much thinking or reasoning the robots would be capable of.