So thought this Kat until he eagerly read the first Leader of the June 2nd issue of The Economist. Entitled "Morals and the Machine: As robots grow more autonomous, society needs to develop rules to manage them", the piece discussed the following scenario: "As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming--or at least appearing to assume--moral agency." Examples include weapons, which may one day carry out a mission totally autonomously, or a driverless car. What happens when the former errs in its target, resulting in a tragic loss of innocent life, or the latter swerves to avoid an errant pedestrian, only to run head-on into an oncoming vehicle? These circumstances are both potentially tragic and ethically challenging.
The piece offers a three-part approach to address the ethical challenges that will arise in such circumstances:
1. Development of laws to determine liability, namely "whether the designer, the programmer, the manufacturer or the operator is at fault."
2. Where ethical systems are embedded into robots, the judgments made by the machine must be those that "seem right to most people", informed by research results from the field of "experimental philosophy."
3. Perhaps most importantly, a way has to be found to bring together "engineers, ethicists, lawyers and policymakers" to collaborate on drawing up one set of accepted rules.

One does not need feline visual acuity to see that these three components put intellectual assets and their creators squarely in the ethical cross-hairs. A number of questions and speculations are suggested:
1. Are the potential ethical questions that might arise from autonomous machines so novel, and the possible unintended results so grave, that the very act of designing, programming or, yes, inventing the faulty component or system might impose liability for wrongdoing?
2. If so, should such liability apply not only to the designer, programmer or inventor but also to his legal adviser or patent counsel?
3. If so, does the IP profession need (i) to put these ethical concerns front and centre in the training and practice of IP and (ii) to establish a process and professional structures charged with developing a consensus IP view on questions (1) and (2), so as to ensure that the interests of the IP community are properly served when and if the collaborative effort to establish binding rules is put into motion?

If the answer to one or more of these questions is positive, it means that some form of training in evaluating ethical matters may some day become a central part of IP. The Internet Encyclopedia of Philosophy defines ethics (or "moral philosophy") as follows:
"The field of ethics (or moral philosophy) involves systematizing, defending, and recommending concepts of right and wrong behavior. Philosophers today usually divide ethical theories into three general subject areas: metaethics, normative ethics, and applied ethics. Metaethics investigates where our ethical principles come from, and what they mean. Are they merely social inventions? Do they involve more than expressions of our individual emotions? .... Normative ethics takes on a more practical task, which is to arrive at moral standards that regulate right and wrong conduct. This may involve articulating the good habits that we should acquire, the duties that we should follow, or the consequences of our behavior on others. Finally, applied ethics involves examining specific controversial issues, such as abortion, infanticide, animal rights, environmental concerns, homosexuality, capital punishment, or nuclear war."Who knows? Some day, considerations such as these may be as central to IP as such fundamental issues as prior art, likelihood of confusion and originality. This Kat has the feeling that this issue will not go away.