Sharkey's Argumentative Analysis


Sharkey’s main argument is that robots dehumanize warfare and that a complete ban on further development of autonomous targeting is “the best moral course of action” (799). His first premise states that robots lack three components, which impedes their ability to discriminate among humans. Firstly, robots lack the principle of distinction: they do not have the sensory or vision-processing systems needed to tell civilians from combatants. Secondly, a robot must follow a procedure written in code, and Sharkey claims that no code exists that resembles an “adequate definition of a civilian” (789); this, too, relates to the problem of distinction. Thirdly, robots lack battlefield awareness or simple …

Sharkey’s next premise states that robots face both the easy and the hard proportionality problems. That is, robots are unable to decide how to minimize collateral damage, or whether to apply lethal or kinetic force in a given context, respectively. The third premise states that accountability for a robot’s actions is ambiguous: the commander who gave the last order, the programmer, the manufacturer, or the policy makers could all be at fault for a robot-induced mishap. Lastly, Sharkey criticizes “our natural tendency to attribute human or animal properties” to robots (791). As a result, he worries that talk of a robot being humane implies “[they] will humanize the battlefield … [but] they can only dehumanize it further” (Sharkey 793). In conclusion, Sharkey argues for a ban on autonomous lethal targeting …

First of all, given the bleak job market in Canada and the fact that I am the provider for my growing family, keeping the job and its income is a must. Secondly, leveraging current person-recognition algorithms alongside recent developments in deep learning could give my robot the principle of distinction. For example, a system that applies Google’s DeepMind techniques to learn from Facebook’s person-recognition data could quickly discriminate among humans in warfare. Next, it is inevitable that warfare will utilize robots given how fast technology develops, so I would start developing the robots now. To support this, the article “Ethical Robots in Warfare” states that if any “robotics research is of significance… it will [eventually] be put to use in the military systems,” so my robot can “ultimately behave in a more humane manner in harsh conditions” (Arkin). Thirdly, to avoid dehumanizing the battlefield, I would prevent the use of “seemingly innocent Trojan terms.” In particular, by avoiding the use of exclusively human properties to describe robots, I can inhibit false anthropomorphic attributions; instead, robots should always be seen as inanimate objects that can be applied humanely by humans (Sharkey 793). In summary, I would design the robot because it is essential for my family, and the human-discrimination problem can be improved upon by using quickly …
